Microsoft & Sony to Partner on Embedding AI in Chips

By Rahul Vaimal, Associate Editor

Japanese multinational Sony Corp. and Microsoft Corp. have partnered to embed artificial intelligence (AI) capabilities into the Japanese company's latest imaging chip, a significant boost for a camera product the electronics giant describes as a world first for commercial customers.

The new module’s big advantage is its built-in processor and memory, which allow it to analyze video using AI technology such as Microsoft’s Azure in a self-contained system that is faster, simpler and more secure to operate than existing methods.

The two companies are pitching the technology to retail and logistics businesses, with potential uses such as optimizing warehouse and factory automation, quantifying the flow of customers through stores and making cars more intelligent about their drivers and environment.

At a time of growing public surveillance aimed at reining in the spread of the novel coronavirus, the new smart camera also has the potential to allow more privacy-conscious monitoring. And should its technology be adapted for personal devices, it even holds promise for advancing mobile photography.

Instead of generating actual images, Sony’s AI chip can examine the video it sees and output only metadata about what is in front of it, describing rather than showing what is in its field of view. Because no image data is sent to remote servers, opportunities for hackers to intercept sensitive images or video are dramatically reduced, which should help ease privacy concerns.
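To make the idea concrete, the sketch below shows, in the abstract, what a "metadata out, no images out" pipeline could look like. It is a conceptual illustration only: the class names, fields and the infer call are hypothetical and do not reflect Sony's actual, unpublished interface.

# Conceptual sketch only: illustrates the "metadata instead of images" idea described
# above. All names here are hypothetical, not Sony's actual API.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class DetectionMetadata:
    """What a metadata-only smart sensor might report instead of pixels."""
    label: str                          # e.g. "person", "shopping_cart"
    confidence: float                   # model confidence, 0.0-1.0
    bbox: Tuple[int, int, int, int]     # (x, y, width, height) in sensor coordinates


class MetadataOnlySensor:
    """Hypothetical on-sensor pipeline: frames are analyzed and discarded on-chip."""

    def __init__(self, model):
        self.model = model  # AI model running on the sensor's built-in processor

    def capture_and_describe(self) -> List[DetectionMetadata]:
        frame = self._read_raw_frame()        # raw pixels stay inside the chip
        detections = self.model.infer(frame)  # inference uses on-chip memory
        del frame                             # no image ever leaves the sensor
        return [DetectionMetadata(d.label, d.confidence, d.bbox) for d in detections]

    def _read_raw_frame(self):
        ...  # hardware-specific readout, not modeled here


if __name__ == "__main__":
    # Stand-in model and detection used purely to demonstrate the data flow.
    class _StubDetection:
        label, confidence, bbox = "person", 0.97, (120, 80, 64, 128)

    class _StubModel:
        def infer(self, frame):
            return [_StubDetection()]

    sensor = MetadataOnlySensor(_StubModel())
    for meta in sensor.capture_and_describe():
        print(meta)  # only metadata is emitted; no pixel data crosses the boundary

In this arrangement, a host application consumes only the returned metadata records, which is why intercepting the link to the sensor would yield descriptions rather than images.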

Apple Inc. has already proven the effectiveness of combining AI and imaging to create more secure systems with its Face ID biometric authentication, powered by the iPhone’s custom-designed Neural Engine processor. Huawei Technologies Co. and Alphabet Inc.’s Google also have dedicated AI silicon in their smartphones to support image processing. These on-device chips exemplify what’s known as edge computing: handling complex AI and machine-learning tasks at the so-called edge of the network instead of sending data back and forth to servers.

Hideki Somemiya,
Senior General Manager, System Solutions Group, Sony

“We are aware many companies are developing AI chips and it’s not like we try to make our AI chip better than others. Our focus is on how we can distribute AI computing across the system, taking cost and efficiency into consideration. Edge computing is a trend, and in that respect, ours is the edge of the edge.”

Sony’s approach reduces the need for changes elsewhere in the device. Whereas Apple and Google still rely on conventional image sensors that convert light into computer-readable image data for their separate AI chips to process, Sony’s new part can do the analytical work without any data leaving its physical boundaries.

Sony already holds a substantial lead as the world’s largest provider of image sensors, counting Apple, Samsung Electronics Co. and every major Chinese smartphone maker among its customers, along with professional camera stalwarts such as Hasselblad, Fujifilm Holdings Corp. and Nikon Corp.
