STM32Cube.AI expansion pack, enabling AI

Mr. Vinay Thapliyal, Technical Marketing Manager, MCD, India, STMicroelectronics

Interview with Mr. Vinay Thapliyal, Technical Marketing Manager, MCD, INDIA, STMicroelectronics

What is the current scenario of Artificial Intelligence in India?

Artificial Intelligence (AI) in India is in its initial stages and is mainly implemented in the Cloud, where almost unlimited processing performance is offset by bandwidth and latency constraints. Considering India’s fast-growing economy and its large population, there is a strong potential for AI use cases, and many of these could be better suited to Edge nodes. Acknowledging the potential this technology has to transform economies and the need for India to strategize its approach to AI, the Indian Government has begun investing in a national program on AI.

What are the key focus areas for the Indian AI market?

a. Today, AI is mainly implemented in the Cloud, where algorithms and applications can be served by large amounts of fast processing capability, but where bandwidth requirements and latency are high. The development of new technologies such as 5G, and capabilities like ST’s STM32Cube.AI, which lets trained neural networks run on STM32 MCUs in edge devices, will speed the adoption of AI use cases in the future.

b. Some of these key use cases include applications in Healthcare, Agriculture, Smart Industry, Smart Cities, Smart Mobility and Energy.

What are the key opportunities ST is looking at?

With the increasing number of IoT use cases worldwide, billions of IoT devices are connected to systems and are producing enormous amounts of data. This incredible amount of “unstructured data” needs to be structured so that it can be used efficiently and effectively, and this is where AI can play a vital role. Similarly, there are many IoT use cases which need real-time execution and minimal latency. ST is therefore looking into means of distributing AI at the edge, and this is where the STM32Cube.AI expansion pack for STM32 MCUs can be a powerful tool. Some benefits of AI at the edge include:

  • Real-time processing to ensure low-latency response
  • Reduced dependence on connectivity availability and bandwidth
  • Lower power consumption
  • Improved data privacy and security
  • Data sorting, filtering and pre-processing at the Edge before the Cloud
  • Offloading of cloud processing

What are the basic problems faced by embedded developers in building AI applications?

a. Nowadays, various popular deep learning tools produce Neural Network models that are quite large. They are optimized for and run very well on high-performance, centralized computing machines and processors rather than the smaller microprocessors and microcontrollers that would be used in distributed, edge-based embedded applications. Indeed, one of the key constraints for embedded developers is the difficulty of fitting the computational and memory requirements of deep learning libraries onto microcontrollers with small memory footprints and moderate clock speeds.

b. Furthermore, developers who tend to specialize in the types of embedded systems that use microcontrollers like our STM32 MCUs may be less familiar with the latest advances in Neural Networks. Similarly, data scientists working on Machine Learning with nearly unlimited centralized and/or cloud resources may be strangers to the memory and computational constraints of embedded platforms.

How do STM32 microcontroller solutions help designers in building AI applications?

a. ST developed solutions for Artificial Intelligence to enable developers to map and run pre-trained Artificial Neural Networks (ANN) on STM32 Arm Cortex-M based microcontrollers. The STM32Cube.AI tool bridges the gap by translating these pre-trained ANNs to run on STM32 MCUs. We developed the STM32Cube.AI expansion pack to encourage and simplify edge computing by enabling our MCUs to run inferences. In other words, STM32Cube.AI brings Neural Networks to embedded systems and makes Artificial Neural Network mapping simple:

  • Interoperable with popular deep learning training tools
  • Compatible with many IDEs and compilers
  • Sensor and RTOS agnostic
  • Allows multiple Artificial Neural Networks to be run on a single STM32 MCU
  • Full support for ultra-low-power STM32 MCUs

b. A Neural Network can be deployed in five simple steps using STM32Cube.AI.

c. More details are available at www.st.com/stm32cubeAI
