How 2017 became the year of Smartphone AI
Big players like Huawei and Apple made headway
2017 was perhaps one of the biggest turning points in the smartphone industry. Not because everyone attempted to shift to the 18:9 screen aspect ratio (or 18.5:9 in Samsung's case), since that's just a minor cosmetic transition, but because of the introduction of a native AI engine at the heart of the mobile chipset.
Google Home and Amazon Echo speakers use cloud AI to interact with users.
Who are the major players in Mobile AI?
- Google and Amazon started as pure cloud AI players but had to venture into IoT devices like Google Home and Amazon Echo to gain traction. Google had an advantage thanks to the Android platform.
- Huawei started with homebrewed Machine Learning on their smartphones like the Mate 9 series and escalated their efforts with the Kirin 970 AI chip.
- Apple started in Cloud AI with Siri integrated into their iOS devices and eventually developed the A11 Bionic chip.
The development of smartphone AI technology started a couple of years back, with Apple, Amazon, and Google introducing AI personal assistants like Siri, Alexa, and Google Now. Even Microsoft managed to get into the saddle with Cortana without needing a Windows phone, and Samsung attempted something similar with Bixby, although it has had very little impact due to its incremental release and compatibility limited to Galaxy smartphones.
Apple’s Siri started out as an intelligent personal assistant for the iPhone.
Google Now and Siri are perhaps the most familiar among the current set of players because of the vast number of Android and iOS devices in use.
It all started with Machine Learning
Before dedicated AI hardware, the initial attempts to make smartphones a bit smarter came through Machine Learning (ML): computer programs that continuously learn by gathering personalized data, processing it for specific results, and using those results to predict future decisions or take shortcuts when solving similar tasks.
Google Maps calculates the best possible driving routes based on crowdsourced traffic data.
A good example of this is Google Maps. Its machine learning algorithms collect your location data, travel times, and searches to predict destinations and suggest locations; it can even intelligently guess where you usually park your car.
You also see this with apps such as Waze, which computes traffic times and vehicle speeds based on crowdsourced data to give you personalized routes to your desired destinations.
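Stripped of its scale, the "learn from personalized data, then predict" loop behind features like destination prediction can be sketched in a few lines. This is a toy illustration under our own assumptions (a simple frequency count over a travel log), not how Google Maps actually works:

```python
from collections import Counter

def predict_destination(history, hour):
    """Predict the most likely destination for a given hour of day,
    based on a personal travel log of (hour, destination) pairs."""
    # Consider past trips that started within +/- 1 hour of the query time
    nearby = [dest for (h, dest) in history if abs(h - hour) <= 1]
    if not nearby:
        return None  # no personalized data yet for this time of day
    # The most frequent destination at this time wins
    return Counter(nearby).most_common(1)[0][0]

# Hypothetical travel log: (hour of day, destination)
log = [(8, "office"), (8, "office"), (9, "office"),
       (18, "gym"), (19, "home"), (19, "home")]

print(predict_destination(log, 8))   # → office
print(predict_destination(log, 19))  # → home
```

The real systems replace the frequency count with trained models and far richer signals, but the shape is the same: collect personal history, then rank likely outcomes from it.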
The Huawei Mate 10 Pro uses AI to identify the subject and produce the best possible image.
Another example is in smartphone photography, where phones try to identify the subject or environmental conditions and apply a specific set of filters or color grading to produce the best-looking results. Some front-facing cameras also use machine learning to set the best beauty mode, like OPPO's AI Beauty Recognition Technology. Apps like Snapchat and Meitu have similar algorithms as well.
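The camera-side half of this pipeline, mapping a detected scene to processing parameters, can be sketched as a lookup table. The labels and parameter values below are illustrative assumptions, not Huawei's or OPPO's actual tuning tables:

```python
# Toy sketch of scene-aware image processing: a classifier produces a
# scene label, then the camera pipeline picks tuning parameters for it.
SCENE_PRESETS = {
    "food":     {"saturation": 1.3, "contrast": 1.1, "sharpen": 0.2},
    "night":    {"saturation": 1.0, "contrast": 1.2, "sharpen": 0.0},
    "portrait": {"saturation": 1.1, "contrast": 1.0, "sharpen": 0.1},
}
DEFAULT_PRESET = {"saturation": 1.0, "contrast": 1.0, "sharpen": 0.1}

def tuning_for(scene_label):
    """Return the processing parameters for a detected scene,
    falling back to a neutral preset for unrecognized scenes."""
    return SCENE_PRESETS.get(scene_label, DEFAULT_PRESET)

print(tuning_for("food"))     # boosted saturation for food shots
print(tuning_for("unknown"))  # falls back to the neutral default
```

In a shipping camera app, the scene label would come from a neural network running on the NPU; the table lookup itself stays this cheap, which is why scene detection can run on every viewfinder frame.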
Embedded machine learning in smartphones also helps manage resources, identifying priority tasks and apps in order to boost the performance of the device. We first saw this in Huawei's Mate 9 and P10 series, where machine learning helps keep the device snappy and responsive even after a year of use.
Challenges of Mobile AI
AI and ML require a lot of data and computing power, so most of the time the collection and processing of that data are done through the cloud, reducing the strain on the smartphone's CPU.
However, this means the smartphone needs a constant internet connection for these functions to work properly, and response times suffer because of network latency.
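The latency tradeoff can be made concrete with a toy simulation. The 150 ms round trip below is an illustrative assumption, not a measured figure, and the "model" is just a stand-in computation:

```python
import time

def cloud_inference(x, network_rtt=0.150):
    """Simulated cloud call: every request pays the network round trip
    (an assumed 150 ms here) before the answer comes back."""
    time.sleep(network_rtt)  # upload + download latency
    return x * 2             # stand-in for the model's answer

def on_device_inference(x):
    """Simulated on-device (NPU-style) call: compute only, no network."""
    return x * 2

start = time.perf_counter()
cloud_inference(21)
cloud_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
on_device_inference(21)
device_ms = (time.perf_counter() - start) * 1000

print(f"cloud: ~{cloud_ms:.0f} ms, on-device: ~{device_ms:.0f} ms")
```

The answers are identical; only the waiting differs. That gap, plus the requirement of connectivity at all, is what a dedicated on-device AI chip removes.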
Apple’s A11 Bionic chip’s primary AI function is for Face ID.
To solve these hurdles, manufacturers started incorporating a dedicated AI engine into their smartphone chipsets.
Apple introduced the A11 Bionic chip with a Neural Engine, powering the iPhone 8, 8 Plus, and iPhone X. The A11's dedicated neural network hardware can execute up to 600 billion operations per second, more efficiently than the CPU or GPU could.
The iPhone X with Animoji feature.
Apple uses the Neural Engine for Face ID and the Animoji feature, among others. Face ID is crucial for the iPhone X since Apple practically removed the fingerprint scanner (Touch ID), claiming that Face ID is more secure.
Huawei pushes ahead with Kirin 970 AI
Huawei, on the other hand, developed the Kirin 970, the first mobile chip to come with an NPU (Neural Processing Unit). The dedicated NPU can handle up to 1.92 trillion FP16 operations per second. This is on top of a 2.4GHz octa-core CPU and a 12-core GPU, all built on a 10nm FinFET process with 5.5 billion transistors (compared to 3.1 billion on the Snapdragon 835 and 4.3 billion on the A11).
The Huawei Mate 10 Pro uses the Kirin 970, the first mobile chip with neural processing unit.
The Kirin 970 was originally introduced with the Mate 10 and Mate 10 Pro but was also included in the Honor V10. What makes the Kirin 970 more interesting is the wide use of the NPU across many features of the phone, as we've seen in the Mate 10 and Mate 10 Pro.
We’ve seen the Kirin 970 in live tests identifying up to 2,000 images per second. This ability improves the photography features of the Mate 10/Mate 10 Pro by quickly identifying the subject and adjusting image processing based on the scenario.
The Mate 10 Pro uses dual Leica optics to produce great photos.
The chip also supports AI-powered noise reduction technology that helps reduce background noise and improve voice signals. This is especially useful with speech-recognition software, as it improves accuracy rates.
The Mate 9 was the first to use machine learning, while the Mate 10 comes with a dedicated AI chip.
Huawei’s operating-system machine learning has been offloaded to the NPU, which monitors app usage patterns and optimizes the device based on personalized usage behavior. This was originally introduced in the Mate 9 and continues to be employed on the Mate 10 series, allowing the device to deliver optimal performance even after months of regular use.
Mate 10/10 Pro: Huawei’s best-performing flagship smartphones running on the Kirin 970 AI chip.
There are other AI features that Huawei has yet to roll out to its smartphones, but one of them was demonstrated a couple of months back: a depth-sensing camera system that can capture about 300,000 points in seconds, allowing facial tracking and secure logins in just 0.4 seconds.
Mobile AI is just getting started.
This is just the beginning of smartphone AI, and among the many smartphone manufacturers, Huawei seems to be the most aggressive and innovative player, leading everyone else in that direction.
As Huawei opens up its HiAI mobile computing architecture to developers, we could be seeing more AI-enhanced user experiences on our mobile phones. The strategy is to combine on-device AI with cloud AI to enhance mobile AI as a whole.
We think other chip manufacturers will soon follow. Qualcomm’s new Snapdragon 845 has native AI enhancements with support for the TensorFlow Lite and Open Neural Network Exchange (ONNX) frameworks. We should see how that's implemented when smartphones running the Snapdragon 845 are released later this quarter.
MediaTek also announced last month that it will put more focus on dedicated AI capabilities in its upcoming P-series SoCs in 2018.
The more chipmakers jump onto the mobile AI platform, the faster the developer community will integrate these capabilities into their applications. If 2017 was the turning point for mobile AI, 2018 will be a dominating year for AI in many smartphones.