- Aug 2, 2021
The big picture: When Google unveiled its new Pixel 6 phones today, it spent a good portion of its presentation on its custom Tensor chip. The company showed how the new system on a chip (SoC) uses machine learning to enhance many of the phone's features.
Both Pixel 6 phones will incorporate Google's new Tensor processor. The SoC is built around Google's machine-learning models and enhances features like Google Assistant, Google Translate, Photos, and even ordinary phone calls.
When dictating texts on the Pixel 6, Tensor can make speech recognition more accurate. Google showed how it can insert words into a transcription when you want to amend what you just said, match spoken names to entries in your contact list, and add punctuation automatically. Instead of making corrections based on keyboard-key proximity, speech recognition on the Pixel 6 makes corrections based on phonetics.
The Pixel 6 and Tensor also introduce a "Live Translate" feature for Google Translate, which can translate conversations between people speaking different languages in real time. It works directly inside chat apps like Messages and WhatsApp, and it can also apply speech recognition and translation to videos.
The presentation also covered how Tensor helps the Pixel 6 make quick edits to photos. Google's Magic Eraser feature can cleanly remove what Google calls "distractions" from photos, such as stray people and objects. Users can also edit images right on the Pixel 6 to remove, change, or add motion blur, giving them control over the sense of motion a photo conveys. Tensor can read and translate text within photos as well.