2017 was perhaps one of the biggest turning points in the smartphone industry. Not because everyone attempted to shift to the 18:9 screen aspect ratio (or 18.5:9 in Samsung's case), since that's just a minor cosmetic transition, but because of the introduction of a native AI engine into the heart of the mobile chipset.
The development of smartphone AI technology started a couple of years back, with Apple, Amazon and Google introducing AI personal assistants like Siri, Alexa and Google Now. Even Microsoft managed to get into the saddle with Cortana without the need for a Windows phone, and Samsung attempted something similar with Bixby, although it has had very little impact due to its incremental rollout and limited compatibility with Galaxy smartphones.
Google Now and Siri are perhaps the most familiar among the current set of players because of the vast number of Android and iOS devices in use. Before dedicated AI hardware, the initial attempts to make smartphones a bit smarter were through machine learning (ML): computer programs that continuously learn by gathering personalized data, processing it for specific results, and using those results to predict future decisions or take shortcuts in solving similar tasks.
A good example of this is Google Maps. Its machine learning algorithms collect your location data, travel times and searches to predict destinations, suggest locations, and even intelligently guess where you usually park your car.
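To make that concrete, here is a minimal sketch of the kind of frequency-based prediction such a feature could build on. The data model is hypothetical and far simpler than whatever Google actually runs; it just counts past trips by time slot and suggests the most common destination.

```kotlin
// Hypothetical sketch of frequency-based destination prediction,
// not Google's actual algorithm: count past trips keyed by
// (day of week, hour) and suggest the most common destination.
data class Trip(val dayOfWeek: Int, val hour: Int, val destination: String)

fun predictDestination(history: List<Trip>, dayOfWeek: Int, hour: Int): String? =
    history
        .filter { it.dayOfWeek == dayOfWeek && it.hour == hour }
        .groupingBy { it.destination }
        .eachCount()                 // trip counts per destination
        .maxByOrNull { it.value }    // most frequent one wins
        ?.key

fun main() {
    val history = listOf(
        Trip(1, 8, "Office"), Trip(1, 8, "Office"), Trip(1, 8, "Gym")
    )
    println(predictDestination(history, dayOfWeek = 1, hour = 8)) // Office
}
```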
You also see this in apps such as Waze, which computes travel times and vehicle speeds from crowd-sourced data to give you personalized routes to your destinations.
Another example is smartphone photography, where the phone tries to identify the subject or environmental conditions and applies a specific set of filters or color grading to produce the best-looking results. Even some front-facing cameras use a form of machine learning to pick the best Beauty Shot/Mode, like the one employed by OPPO for its AI Beauty Recognition Technology.
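In its simplest form, scene-based tuning boils down to mapping a recognized label to a processing preset, as in the sketch below. The scene labels and preset values are made up for illustration; real camera pipelines tune far more parameters.

```kotlin
// Illustrative only: map a recognized scene label to a color-grading
// preset. The labels and values here are invented for the example.
data class Preset(val saturation: Float, val contrast: Float, val warmth: Float)

fun presetFor(scene: String): Preset = when (scene) {
    "food"      -> Preset(saturation = 1.3f, contrast = 1.1f, warmth = 1.2f)
    "landscape" -> Preset(saturation = 1.2f, contrast = 1.2f, warmth = 1.0f)
    "portrait"  -> Preset(saturation = 1.0f, contrast = 1.0f, warmth = 1.1f)
    else        -> Preset(saturation = 1.0f, contrast = 1.0f, warmth = 1.0f) // neutral default
}
```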
Embedded machine learning in smartphones also helps manage resources and identify priority tasks and apps in order to boost the performance of the device. We first saw this in Huawei's Mate 9, where machine learning helps keep the device snappy and responsive even after a year of use.
AI and ML require a lot of data and computing power, so most of the time the collection and processing of that data are done through the cloud, reducing the strain on the smartphone's CPU.
However, this means the smartphone needs to be constantly connected to the internet for these functions to work properly, and response times suffer because of network latency.
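The trade-off can be expressed as a simple decision: run inference locally when a model is available, fall back to the network otherwise, and fail when neither is possible. The function names below are hypothetical placeholders, not any vendor's real API.

```kotlin
// Hypothetical sketch of the on-device-first strategy described above.
// classifyOnDevice() and classifyViaCloud() are placeholder stubs.
fun classifyOnDevice(image: ByteArray): String = "cat" // stand-in for NPU inference
fun classifyViaCloud(image: ByteArray): String = "cat" // stand-in for a network call

fun classify(image: ByteArray, online: Boolean, hasLocalModel: Boolean): String? = when {
    hasLocalModel -> classifyOnDevice(image) // fast, works offline
    online        -> classifyViaCloud(image) // works, but adds network latency
    else          -> null                    // no connectivity, no local model: feature breaks
}
```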
To solve these hurdles, manufacturers started to incorporate a dedicated AI chip into their smartphones.
Apple introduced the A11 Bionic chip with a Neural Engine, powering the iPhone 8, 8 Plus and iPhone X. The A11 comes with dedicated neural network hardware that can execute up to 600 billion operations per second, more efficiently than the CPU or GPU.
Apple uses the Neural Engine for Face ID and the Animoji feature, among others. Face ID is crucial for the iPhone X since Apple removed the fingerprint scanner (Touch ID) entirely, claiming that Face ID is more secure.
Huawei, on the other hand, developed the Kirin 970 which is the first mobile chip to come with a dedicated NPU (Neural Processing Unit).
The Kirin 970 was originally introduced with the Mate 10 and Mate 10 Pro and was also included in the Honor V10. What makes the Kirin 970 more interesting is the wide use of the NPU across many of the phone's features, as we've seen in the Mate 10 and Mate 10 Pro.
We've seen the Kirin 970 in live tests identify up to 2,000 images per second. This ability improves the photography features of the Mate 10 and Mate 10 Pro by quickly identifying the subject and adjusting image processing based on the scenario.
The chip also supports AI-powered noise reduction technology that helps reduce background noise and improve voice signals. This is especially useful with speech recognition software, as it improves accuracy rates.
Huawei's machine learning has been offloaded to the NPU, which monitors app usage patterns and optimizes the device based on personalized usage behavior. The feature was originally introduced in the Mate 9 and continues to be employed on the Mate 10 series, allowing the device to deliver optimal performance even after months of regular use.
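In spirit, that kind of optimization can be as simple as the sketch below: keep exponentially decayed launch scores per app so stale habits fade, then keep the highest-scoring apps warm in memory. This is a hypothetical illustration, not Huawei's actual scheduler.

```kotlin
// Hypothetical illustration of usage-based prioritization, not
// Huawei's actual scheduler: exponentially decayed launch scores
// decide which apps are worth keeping resident in memory.
class UsageModel(private val decay: Double = 0.95) {
    private val scores = mutableMapOf<String, Double>()

    fun recordLaunch(app: String) {
        // Decay every score so old behavior matters less over time,
        // then reward the app that was just launched.
        scores.keys.forEach { scores[it] = scores.getValue(it) * decay }
        scores[app] = (scores[app] ?: 0.0) + 1.0
    }

    fun appsToKeepWarm(n: Int): List<String> =
        scores.entries.sortedByDescending { it.value }.take(n).map { it.key }
}
```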
There are other AI features that Huawei has yet to roll out to its smartphones, one of which was demonstrated a couple of months back: a depth-sensing camera system that can capture about 300,000 points in seconds, allowing facial tracking and secure logins in just 0.4 seconds.
This is just the beginning of smartphone AI, and we see more players moving aggressively in this direction.
As Huawei opens up its HiAI mobile computing architecture to developers, we could be seeing more AI-enhanced user experiences on our mobile phones. The strategy is to combine on-device AI with cloud AI to enhance the overall mobile AI experience.
Other chip manufacturers will follow soon. Qualcomm's new Snapdragon 845 has native AI enhancements, with support for the TensorFlow Lite and Open Neural Network Exchange (ONNX) frameworks. We should see how that's implemented when smartphones running the Snapdragon 845 are released later this quarter.
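Developers don't have to wait for the new silicon to start experimenting, either. TensorFlow Lite already runs on current Android devices; below is a minimal inference sketch using its Java/Kotlin Interpreter API. The model file name, input shape and class count are assumptions based on a standard MobileNet classifier, so substitute your own converted .tflite model.

```kotlin
import org.tensorflow.lite.Interpreter
import java.io.File

// Minimal TensorFlow Lite inference sketch. The model path, input
// shape (1x224x224x3) and class count (1001, MobileNet's ImageNet
// labels plus background) are assumptions for illustration.
fun main() {
    val interpreter = Interpreter(File("mobilenet_v1.tflite"))

    // One 224x224 RGB image, with pixel values normalized to [0, 1].
    val input = Array(1) { Array(224) { Array(224) { FloatArray(3) } } }
    val output = Array(1) { FloatArray(1001) }

    interpreter.run(input, output) // single forward pass
    val best = output[0].indices.maxByOrNull { output[0][it] }!!
    println("Top class index: $best, score: ${output[0][best]}")

    interpreter.close()
}
```

On devices with AI hardware, Android's Neural Networks API (introduced in Android 8.1) can route supported operations in models like this to the DSP, GPU or NPU instead of the CPU.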
MediaTek also announced last month that it will put more focus on dedicated AI processing in its upcoming P-series SoCs this 2018.
The more chipmakers jump onto the mobile AI platform, the faster the developer community will integrate these capabilities into their applications. If 2017 was the turning point for mobile AI, 2018 will be the year AI dominates the smartphone.