Google’s ambient computing vision is changing how the company works – The Verge
The story of this year’s Google I/O actually started three years ago.
At I/O 2019, onstage at the Shoreline Amphitheatre in Mountain View, California, Rick Osterloh, Google’s SVP of devices and services, laid out a new vision for the future of computing. “In the mobile era, smartphones changed the world,” he said. “It’s super useful to have a powerful computer wherever you are.” But he described an even more ambitious world beyond that, where your computer wasn’t a thing in your pocket at all. It was all around you. It was everything. “Your devices work together with services and AI, so help is anywhere you want it, and it’s fluid,” he said. “The technology just fades into the background when you don’t need it. So the devices aren’t the center of the system. You are.” He called the idea “ambient computing,” nodding to a concept that has floated around Amazon, Apple, and other companies over the last few years.
One easy way to interpret ambient computing is around voice assistants and robots. Put Google Assistant in everything, yell at your appliances, done and done. But that’s only the very beginning of the idea. The ambient computer Google imagines is more like a guardian angel or a super-sentient Star Wars robot. It’s an engine that understands you completely and follows you around, churning through and solving for all the stuff in your life. The small (when’s my next appointment?) and the big (help me plan my wedding) and the mundane (turn the lights off) and the life-changing (am I having a heart attack?). The wheres and whens and hows don’t matter, only the whats and whys. The ambient computer isn’t a gadget; it’s almost a being. It’s the greater-than-the-sum-of-its-parts whole that comes out of a million perfectly connected things.
Which is a problem for Google. The company is famously decentralized and non-hierarchical, and it can sometimes seem like every single engineer on staff is given the green light to ship whatever they made that week. And so, since that day in 2019, Google has mostly continued to do what Google always does, which is build an unbelievable amount of stuff, often without any discernible strategy or plan behind it. It’s not that Google didn’t have a bigger vision; it’s just that no one seemed to be interested in doing the connective work required for the all-encompassing, perfectly connected future Osterloh had imagined. Google was becoming a warehouse full of cool stuff rather than an ecosystem.
But over the last couple of years, Google has begun to change in order to meet this challenge. Osterloh’s devices team, for instance, has completely reset its relationship with the Android team. For a long time, the company proudly maintained a wall between Pixel and Android, treating its internal hardware team like any other manufacturer. Now, Google treats Pixel like the tip of the spear: it’s meant to be both a flagship device and a development platform through which Google can build features it then shares with the rest of the ecosystem.
“We really sort of co-design where things are headed,” Osterloh said. “I think that’s just sort of the nature of how computers have changed, and computing models have changed.” Both teams share visions of an ambient future, he said. “And we’re working on it together.”
Around the company, these related teams and products are starting to come closer together. They’re building on unified tech, like Google’s custom Tensor processors, and on common concepts like conversational AI.
As a result, Google I/O feels unusually coherent this year. Google is trying harder than I can remember to build products that not only work well but work well together. Search is becoming a multisensory, multi-device proposition that understands both who’s searching and what they’re really looking for. It’s also extending the search experience far beyond just questions and answers. It’s making Android more context- and content-aware so that your phone changes to match the things you do on it. It’s emphasizing natural interactions so that you can get information without memorizing a rigid set of commands. And it’s building the hardware ecosystem it needs to make all that work everywhere, and the software to match.
Now, let’s be very clear: Google’s work is only just beginning. It has to win market share in device categories it has failed for years to capture. It has to build new experiences inside new and old devices. It has to figure out how to solve Android fragmentation between its devices and the market-leading devices from companies like Samsung, which might be the hardest part of all. And it has to become more present in users’ lives and extract more information from them, all without upsetting regulators, screwing up the search-ads business, or violating users’ privacy. The ambient computer was never going to come easily, and Google has made its own efforts harder in countless ways over the years.
But at the very least, the company seems to finally understand what an ambient computer requires (and why “it has Assistant!” is not a sufficient answer) and is beginning the work to get it done.
When I sat down with Osterloh over a video call a few days before I/O, he began to wax poetic about Google’s hardware division, every so often glancing down just out of frame. I asked what he was looking at, suddenly suspicious it was a bunch of unreleased devices. I was right. “This is Pixel 6,” he said, holding up his current device. And then, “I have a Pixel 6A, in a very disguised case,” this time holding up a black brick of rubber surrounding the unreleased device. “And I have a Pixel 7, also in disguise.” He held up his wrist, too, with a Pixel Watch strapped to it.
As we talked, Osterloh reiterated the usual Google pitch for ambient computing, but this time, it came with a bit of a twist to the familiar. The long-term vision is still an always-there version of Google that works everywhere with everything all the time, but right now? Right now, it’s still all about the ultra-fast processor in your pocket. “Certainly for the foreseeable future, we feel like the most crucial part of that is the pocketable computer, the mobile phone,” he said. The smartphone is the center of the computing universe for billions of users around the globe, and so the first version of Osterloh’s ambient computer will be built around a smartphone, too.
That’s why, when Google set out to make the Pixel 6A (which is largely a cost-cutting exercise, trying to turn a $600 phone into a still-credible $449 one), one expensive part survived the cut. “The target of what Pixel is, is about having an awesome user experience that keeps getting better over time,” Osterloh said. “And with that as the core, what you realize is like the thing that is essential to have across these devices is Tensor.”
The Google-designed Tensor processor was the key feature introduced alongside the Pixel 6, largely as a way to improve its on-device AI capabilities for speech recognition and more. And now, it seems, it’s going to be a staple of the line: Osterloh said all the upcoming Pixel phones, and even the Android-powered tablet the team is working on for release next year, will run on Tensor processors.
The smartphone is the center of the universe for now, but you can already start to see how that might change. The new Pixel Buds Pro are a powerful set of noise-canceling headphones, for instance, but also a hands-free interface into a wirelessly connected computing device.
“Devices that can be close to your ear and enable you to have real-time communication with the computer are an absolutely essential part of ambient computing,” Osterloh said, noting that he now does most of his emailing via voice. Similarly, the new Pixel Watch is, in some ways, a phone accessory, delivering notifications and the like to your wrist and offering another interface to the same power in your pocket. But Google’s also selling an LTE version, so you’ll be able to access Assistant or pay with Google Wallet without needing your phone nearby. And that tablet, whenever it comes, will have all the same Pixel capabilities in a bigger shell.
The point is that it doesn’t matter, in the long run, which device you use. “Where the computing capability is, and how powerful the devices themselves are, shouldn’t matter to the user,” Osterloh said. “I think what they should see is increasing capabilities.”
The Pixel and Android teams have recently adopted a sort of mantra: “Better Together.” Much of what’s new this year in Android 13 is not whizbang new features but small tweaks meant to make the ecosystem a little more seamless. Through an update to the Chrome OS Phone Hub feature, you’ll be able to use all your messaging apps on your Chromebook just as you would on your phone. Support for the Matter smart home standard now comes built into Android, which should make setting up and controlling new devices much easier. Google’s extending support for its Cast protocols for sending audio and video to other devices and improving its Fast Pair services to make it easy to connect Bluetooth devices. It has been talking about these features since CES in January and has signed up an impressive list of partners.
It sounds a bit like Google finally watched an Apple ad and discovered that making hardware and software together really does help. Who knew! But Google’s position is genuinely tricky here. Google’s ad business relies on a mind-bendingly huge scale, which it gets mostly thanks to other companies building Android products. That means Google has to keep all those partners happy and feeling like they’re on a level playing field with the Pixel team. And it simply can’t control its ecosystem like Apple can. It is forever worrying about backward compatibility and how things will work on devices of all sizes, prices, and power. It has to engender support to make big changes, whereas Apple just brute-forces the future.
But Google has become increasingly bold in pushing ahead with the Pixel brand. It can afford to because Pixel is hardly a real sales threat to Samsung and others. (Besides, where are Android manufacturers going to go? Windows Mobile?) But it also has to because Google only wins if the ecosystem buys in, and Pixel is Google’s best chance to model what the entire Android ecosystem should look like. That’s what Osterloh sees as his job and, in large part, his team’s reason for being.
If Pixel’s never going to be a smash-hit bestseller (and it looks like it won’t be), the only way Google can win in the long run is to use it as a way to pressure Samsung and others to keep up with Google’s features and ideas. Google has a chance to lead even more in tablets and smartwatches, two Android markets in desperate need of a path forward. “Phone is certainly super important,” said Sameer Samat, a VP of product management on the Android team. “But it’s also becoming very clear that there are other device form factors which are complementary and also critical to a consumer deciding which ecosystem to buy into, and which ecosystem to live.”
That’s another way of saying the only way Google can get to its ambient computing dreams is to make sure Google is everywhere. Like, literally everywhere. That’s why Google continues to invest in products in seemingly every square inch of your life, from your TV to your thermostat to your car to your wrist to your ears. The ambient-computing future may be one computer to rule them all, but that computer needs a near-infinite set of user interfaces.
The second step to making ambient computing work is to make it really, really easy to use. Google is relentlessly trying to whittle away every bit of friction involved in accessing its services, particularly the Assistant. For instance, if you own a Nest Hub Max, you’ll soon be able to talk to it just by looking into its camera, and you’ll be able to set timers or turn off the lights without issuing a command at all.
“It’s kind of like you and I having a conversation,” said Nino Tasca, a director of product management on Google’s speech team. “Sometimes, I’ll use your name to start a conversation. But if I’m already staring at you and ask you directly, you know I’m talking to you.” Google is obsessed with making everything natural and conversational because it’s convinced that making it easy is actually more important than making it fast.
The same logic applies to search, which is quickly becoming a multi-sensory, multi-modal thing. “The way you search for information shouldn’t be constrained to typing keywords into a search box,” said Prabhakar Raghavan, Google’s SVP for knowledge and information products. “Our vision is to make the whole world around you searchable, so you can find helpful information about whatever you see, hear and experience, in whichever ways are most natural to you.”
That has forced Google to reinvent both the input of search, leaning on voice and images just as much as the text box, and the output. “Some people really find it easy to process video,” said Liz Reid, a VP of engineering on the search team, “and other people will find it distracting. On the other hand, some people’s literacy is not as good, and so a long web page that you have to read through not only takes time, but they’re gonna get lost, and a video that’s spoken in their language is really intuitive.” Google built one hell of a text box, but it’s not enough anymore.
The most obvious outpouring of that work is multisearch. Using the Google app, you can take a photo of a dress (in Google’s examples, it’s always a dress) and then type “green” to search for that dress but in green. That’s the kind of thing you just couldn’t do in a text box.
And at I/O, Google also showed off a tool for running multisearch on an image with multiple things in it: take a picture of the peanut butter aisle, type “nut-free,” and Google will tell you which one to buy. “We find when we unlock new capabilities that people had information needs that were just too hard to express,” Reid said. “And then they start expressing them on there.” Search used to be one thing, Lens was another, and voice was a third, but when you combine them, new things become possible.
But the real challenge for Google is that it’s much more than a question-and-answer engine now. “What’s best isn’t really, in many of these cases, a strict stack rank, right?” Reid said. “A lot of these newer use cases, there’s a style or a taste component.” Shopping has become important to Google, for instance, but there’s no single correct answer for “best t-shirt.” Plus, Google is using search more and more as a way to keep you inside Google’s ecosystem; the search box is increasingly just a launcher to various Google things.
So rather than just aiming to understand the internet, Google has to learn to understand its users better than ever. Does it help that Google has a massive store of first-party data that it has collected over the last couple of decades on billions of people around the world? Of course it does! But even that isn’t enough to get Google where it’s going.
About the ads: don’t forget that even in a world outside the search box, Google’s still an advertising business. Just as Amazon’s ambient computing vision seems to always come back to selling you things, Google’s will always come back to showing you ads. And the thing about Google’s whole vision is that it means a company that knows a lot about you and seems to follow you everywhere will know even more about you and follow you even more places.
Google seems to be going out of its way to try to make users feel comfortable with its presence: it’s moving more AI to devices themselves instead of processing and storing everything in the cloud, it’s pushing toward new systems of data collection that don’t so cleanly identify an individual, and it’s offering users more ways to control their own privacy and security settings. But the ambient-computer life requires a privacy tradeoff all the same, and Google is desperate to make it good enough that it’s worth it. That’s a high bar, and it’s getting higher all the time.
Actually, this whole process is full of high bars for Google. If it wants to build an ambient computer that can truly be all things to all people, it’s going to need to build a sweeping ecosystem of hugely popular devices that all run compatible software and services while also seamlessly integrating with a massive global ecosystem of other devices, including those made by its direct competitors. And that’s just to build the interface. Past that, Google has to turn the Assistant into something genuinely pleasurable to interact with all day and make its services flex to every need and workflow of users across the globe. Nothing about that will be easy.
But if you squint a little, you can see what it would look like. And that’s what has been so frustrating about Google’s approach in recent years: it feels like all the puzzle pieces to the future are sitting there in Mountain View, strewn around campus with no one paying attention. But now, as a company, Google appears to be starting to assemble them.