Wednesday, June 19, 2013

Vine Will Survive!

Instagram is planning to launch video functionality in two days. But don’t go deleting Vine just yet. Before shoving Vine into the deadpool, let’s all calm down for a second.
Many have declared Vine the “Instagram for Video.” Instagram’s own video product is likely already too late to squash Vine like a bug. Heck, Facebook couldn’t even get Poke and Messenger off the ground after incumbents clobbered the space. What makes anyone think Instagram video would be any different?

Vine launched in January of this year, just after the holidays, and spent a few months ramping up the user base before launching on Android a few weeks ago. At the time, Vine had 13 million downloads. Not too shabby for approximately five months of work. It took Vine a few days to swing to the top of the App Store, and the same was true on Google Play following the Android launch.
When Instagram launched on Android, seventeen months after launching on iOS, it had around 30 million users. Obviously, users are a different metric than downloads, but you can see how Vine’s growth is relatively astounding given the timeframe. Especially when you factor in the less pointed evidence: Vine shares have surpassed Instagram shares on Twitter, for example, or even just hearing the term “Vine it” regularly in everyday life. And having Twitter as a parent company doesn’t hurt either.
Vine is already established, and better yet, making waves. Vine was used by the Tribeca Film Festival for a special #6SecFilm Contest. The app has been toyed with by designers and advertisers to build new interactive music videos. Brands love Vine because it lets products move in ways that Twitter and Facebook don’t.
And Vine, of course, is still iterating quickly. We’ve seen the team respond to feature requests like the ability to use the front-facing camera as well as the rear-facing one, and I wouldn’t be surprised to see interesting additions like Voiceover or Animation pop up soon.
Instagram is a powerful foe. The app has over 100 million users, and is now owned by the most powerful social network in the world. But this is far from the end of Vine.
First, Vine is the end product of what Instagram was built to be. Vine skipped past still photos, the filters that make those photos (taken with bad mobile cameras) look prettier, and the slow grind of adding @mentions, photo maps, and all those iterative feature tweaks.
Instead, Vine launched as a true Instagram for video, which now has an active and seemingly happy user base. It’s not Twitter’s Cleaner fish, even if Twitter bought up the app and launched it into existence (unlike Instagram’s organic growth that was later bought up by Facebook).
But where Instagram feels like a consumption app first (a time sink, almost), Vine doesn’t. Scrolling through my Vine stream is like having a hangover during an earthquake. Most often, it’s a lot of clanging and wind noise coupled with shaky video of my friends’ latest vacation.
Credit - TechCrunch, Jordan Crook

Thursday, June 13, 2013

Why Google is the big data company that matters most

Google Image Search just got a whole lot better, and the company’s purpose-built machine learning infrastructure is a big reason why. No surprise, Jeff Dean helped build it.

Google’s system can recognize flowers even when they’re not in the focal point.


Every now and then, someone asks, “Who’ll be the Google of big data?” The only acceptable answer, it seems, is that Google is the Google of big data. Yeah, it’s a web company on the surface, but Google has been at the forefront of using data to build compelling products for more than a decade, and it’s not showing any signs of slowing down.
Search, advertising, Translate, Play Music, Goggles, Trends and the list goes on — they’re all products that couldn’t exist without lots of data. But data alone doesn’t make products great — they also need to perform fast and reliably, and they eventually need to get more intelligent. Infrastructure and systems engineering make that possible, and that’s where Google really shines.
On Wednesday, the company showed off its chops once again, explaining in a blog post how it’s able to let users better search their photos because it was able to train some novel models on systems built for just that purpose. Here’s how Google describes the chain of events, after it had found the methods it wanted to test (from the winning team at the ImageNet competition):
“We built and trained models similar to those from the winning team using software infrastructure for training large-scale neural networks developed at Google in a group started by Jeff Dean and Andrew Ng. When we evaluated these models, we were impressed; on our test set we saw double the average precision when compared to other approaches we had tried. …
“Why the success now? … What is different is that both computers and algorithms have improved significantly. First, bigger and faster computers have made it feasible to train larger neural networks with much larger data. Ten years ago, running neural networks of this complexity would have been a momentous task even on a single image — now we are able to run them on billions of images. Second, new training techniques have made it possible to train the large deep neural networks necessary for successful image recognition.”

Of course Google had a system in place for training large-scale neural networks. And of course Jeff Dean helped design it. 

Dean is among the highlights of our upcoming Structure conference (June 19 and 20 in San Francisco). I’m going to sit down with him in a fireside chat and talk about all the cool systems Google has built thus far and what’s coming down the pike next. Maybe about what life is like being the Chuck Norris of the internet.
From an engineering standpoint, Dean has been one of the most important people in the short history of the web. He helped create MapReduce — the parallel processing engine underneath Google’s original search engine — and was the lead author on the MapReduce paper that directly inspired the creation of Hadoop. Dean has also played significant roles in creating other important Google systems, such as its BigTable distributed data store (which is the basis of NoSQL databases such as Cassandra, HBase and the National Security Agency’s Accumulo) and a globally distributed transactional database called Spanner.
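To give a sense of the programming model Dean co-created, here is a toy word count, the canonical example from the MapReduce paper. This is a single-process sketch for illustration only; the real system distributes the map and reduce phases across thousands of machines and handles the shuffle, scheduling, and fault tolerance for you.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Shuffle: group all emitted values by their key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["the web is big", "the web grows"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["the"])  # 2
```

The appeal of the model is that the programmer writes only the map and reduce functions; the framework takes care of parallelizing everything else, which is what made it practical to index the web on commodity hardware.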
If you’re into big data or webscale systems, knowing what Dean is working on can be like looking into a crystal ball. When I asked Hadoop creator Doug Cutting what the future holds for Hadoop, he told me to look at Google.
Credit - GigaOM

Saturday, June 1, 2013

Google Won’t Approve Glass Apps That Recognize People’s Faces… For Now



The potential creep factor of Google Glass is something that the search giant has to mitigate as best it can if it wants that kooky head-worn display to become a mass-market sensation (and even that may not be enough), but a recent announcement highlights the search giant’s commitment to, well, do no evil.
Google confirmed on its official Glass G+ page earlier this evening that it won’t allow developers to create applications for the head-worn display that are capable of recognizing the faces of people the wearer encounters.
It’s no surprise that Google has been keen to downplay the idea of first-party face recognition features — Google Glass director Steve Lee gave the New York Times a near identical statement earlier this month — but now the company has made it clear that developers are subject to that same code of conduct.
That’s not to say that Google is throwing out the possibility of face-recognizing Glass apps in the future — the company just has to lock down a firm set of privacy protocols before letting developers run wild. As you’d expect, there’s no timetable in place yet, so it’s still unclear when Glass will be able to chime in our ears with a long-forgotten acquaintance’s name.
It may be a big win for privacy advocates, but the news doesn’t bode all that well for some of the early-stage startups that are angling to turn Glass into an ever-present recognition device. Consider the case of Lambda Labs — earlier this week the San Francisco team talked up its forthcoming facial and object recognition API that would allow developers to create applications with commands like “remember that face.” At the time, Lambda co-founder Stephen Balaban sought refuge in the fact that the Glass API didn’t explicitly bar the creation of face-recognition apps, a shelter that no longer exists. To quote the updated Glass developer policies:
Don’t use the camera or microphone to cross-reference and immediately present personal information identifying anyone other than the user, including use cases such as facial recognition and voice print. Applications that do this will not be approved at this time.
For now, though, Google seems all right with the prospect of using Glass to recognize individual people so long as their faces aren’t the things being kept track of. Back in March, news broke of a partially Google-funded project from Duke University that saw researchers create a Glass app that let users identify people not by their faces but by a so-called “fashion fingerprint” that accounts for clothing and accessories. All things considered, it’s a neat way to keep tabs on individual people with a privacy mechanism baked into our behavior — all you need to do to be forgotten is change your clothes.
Credit - TC