Increasing the Accessibility of Machine Learning at the Edge

In recent years, connected devices and the Internet of Things (IoT) have become omnipresent in our everyday lives, be it in our homes and cars or at our workplace. Many of these small devices are connected to a cloud service; nearly everyone with a smartphone or laptop uses cloud-based services today, whether actively or through an automated backup service, for example.

However, a new paradigm known as "edge intelligence" is quickly gaining traction in technology's fast-changing landscape. This article introduces cloud-based intelligence, edge intelligence, and possible use cases for professional users, with the aim of making machine learning accessible to all.

Cloud computing, simply put, is the availability of remote computational resources whenever a client needs them.

For public cloud services, the cloud service provider is responsible for managing the hardware and for ensuring that the service's availability meets a defined standard and customer expectations. Customers pay for what they use, and the use of such services is generally only viable for large-scale operations.

On the other hand, edge computing happens somewhere between the cloud and the client's network.

While the definition of where exactly edge nodes sit may vary from application to application, they are generally close to the local network. These computational nodes provide services such as filtering and buffering data, and they help increase privacy, improve reliability, and reduce cloud-service costs and latency.

Recently, it's become more common for AI and machine learning to complement edge-computing nodes and help decide what data is relevant and should be uploaded to the cloud for deeper analysis.
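As a rough illustration of this idea, the minimal Python sketch below shows an edge node that scores incoming samples with a small local model (via the TensorFlow Lite interpreter) and only forwards samples whose score crosses a threshold. The model file name, the input handling, and the 0.8 threshold are illustrative assumptions, not part of any specific product.

import numpy as np
import tensorflow as tf

# Load a small, locally stored model; "anomaly_detector.tflite" is a placeholder name.
interpreter = tf.lite.Interpreter(model_path="anomaly_detector.tflite")
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

def should_upload(sample: np.ndarray, threshold: float = 0.8) -> bool:
    # Run local inference and forward only the samples the model flags as relevant.
    interpreter.set_tensor(input_index, sample.astype(np.float32)[None, ...])
    interpreter.invoke()
    score = float(interpreter.get_tensor(output_index)[0][0])
    return score >= threshold

In a deployment, a loop around should_upload() would decide per sample whether to transmit data to the cloud or discard it locally, reducing bandwidth and cloud-side processing.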

Machine learning (ML) is a broad scientific field, but in recent times, neural networks (often abbreviated to NN) have gained the most attention when discussing machine learning algorithms.

Multiclass or complex ML applications such as object tracking and surveillance, automatic speech recognition, and multi-face detection typically require NNs. Many scientists have worked hard over the last decade to improve and optimize NN algorithms so they can run on devices with limited computational resources, which has helped accelerate the edge-computing paradigm's popularity and practicability.

One such network is MobileNet, an image-classification model developed by Google. This project demonstrates that highly accurate neural networks can indeed run on devices with significantly restricted computational power.
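As a simple illustration, the following Python sketch runs a pretrained MobileNetV2 classifier from the Keras applications API on a single image. On an embedded target, the same network would typically be converted to a quantized TensorFlow Lite model first; the image path here is a placeholder.

import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import mobilenet_v2

# Load MobileNetV2 with ImageNet weights (downloaded on first use).
model = mobilenet_v2.MobileNetV2(weights="imagenet")

# "example.jpg" is a placeholder path; MobileNetV2 expects 224x224 RGB input.
img = tf.keras.utils.load_img("example.jpg", target_size=(224, 224))
x = mobilenet_v2.preprocess_input(np.expand_dims(np.array(img), axis=0))

preds = model.predict(x)
print(mobilenet_v2.decode_predictions(preds, top=3)[0])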

Until recently, machine learning was primarily the domain of data-science experts with a deep understanding of ML and deep learning applications, and the available development tools and software suites were typically immature and challenging to use.

Machine learning and edge computing are expanding rapidly, and interest in these fields grows steadily every year. According to current research, 98% of edge devices will use machine learning by 2025. That percentage translates to roughly 18 to 25 billion devices that researchers expect to have machine learning capabilities.

In general, machine learning at the edge opens doors for a broad spectrum of applications ranging from computer vision, speech analysis, and video processing to sequence analysis.

One concrete example of a possible application is an intelligent door lock combined with a camera. Such a device could automatically detect a person requesting access to a room and grant entry when appropriate.

Due to the previously discussed optimizations and performance improvements of neural network algorithms, many ML applications can now run on embedded devices powered by crossover MCUs such as the i.MX RT1170. With its two processing cores (a 1 GHz Arm Cortex-M7 and a 400 MHz Arm Cortex-M4), developers can run compatible NN implementations while keeping real-time constraints in mind.
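One common step for fitting a network into the memory and compute budget of a microcontroller-class target is post-training integer quantization. The sketch below assumes a trained model stored in TensorFlow's SavedModel format and a hypothetical calibration generator; the model directory, input shape, and file names are placeholders for illustration.

import numpy as np
import tensorflow as tf

def representative_data():
    # Yield a handful of calibration samples shaped like the model input (placeholder shape).
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

# "person_detect_model" is a placeholder SavedModel directory.
converter = tf.lite.TFLiteConverter.from_saved_model("person_detect_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("person_detect_int8.tflite", "wb") as f:
    f.write(converter.convert())

The resulting int8 model is much smaller and uses integer arithmetic, which is generally a better fit for MCU-class inference engines than a full-precision floating-point network.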

Due to its dual-core design, the i.MX RT1170 also allows the execution of multiple ML models in parallel. The additional built-in crypto engines, advanced security features, and graphics and multimedia capabilities make the i.MX RT1170 suitable for a wide range of applications. Some examples include driver distraction detection, smart light switches, intelligent locks, fleet management, and many more.

The i.MX 8M Plus is a family of applications processors that focuses on ML, computer vision, advanced multimedia applications, and industrial automation with high reliability. These devices were designed with the needs of smart devices and Industry 4.0 applications in mind and come equipped with a dedicated NPU (neural processing unit) operating at up to 2.3 TOPS and up to four Arm Cortex-A53 processor cores.

Built-in image signal processors allow developers to utilize either two HD camera sensors or a single 4K camera. These features make the i.MX 8M Plus family of devices viable for applications such as facial recognition, object detection, and other ML tasks. In addition, devices of the i.MX 8M Plus family come with advanced 2D and 3D graphics acceleration, multimedia features such as video encode and decode support (including H.265), and eight PDM microphone inputs.

An additional low-power 800 MHz Arm Cortex-M7 core complements the package. This dedicated core serves real-time industrial applications that require robust networking features such as CAN FD support and Gigabit Ethernet communication with TSN capabilities.

With new devices comes the need for an easy-to-use, efficient, and capable development ecosystem that enables developers to build modern ML systems. NXP's comprehensive eIQ ML software development environment is designed to assist developers in creating ML-based applications.

The eIQ tools environment includes inference engines, neural network compilers, and optimized libraries that enable working with ML algorithms on NXP microcontrollers, i.MX RT crossover MCUs, and the i.MX family of SoCs. The needed ML technologies are accessible to developers through NXP's SDKs for the MCUXpresso IDE and Yocto BSP.

The upcoming eIQ Toolkit adds an accessible GUI, the eIQ Portal, and a workflow that enables developers of all experience levels to create ML applications.

Developers can follow a process called BYOM (bring your own model), in which they build and train models using cloud-based tools and then import them into the eIQ Toolkit software environment; all that's left to do is select the appropriate inference engine in eIQ. Alternatively, developers can use the eIQ Portal's GUI-based tools or command-line interface to import and curate datasets and follow the BYOD (bring your own data) workflow to train their model within the eIQ Toolkit.
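As a rough sketch of the BYOM idea, the following Python snippet trains a tiny Keras classifier with standard desktop or cloud tooling and exports it as a .tflite file that could then be imported into the eIQ Toolkit. The minimal architecture, the MNIST dataset, and the output file name are placeholders for a real model and dataset, not an NXP-prescribed workflow.

import tensorflow as tf

# Placeholder data: MNIST stands in for an application-specific dataset.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0

# Deliberately tiny model; a real application would use a task-appropriate architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128)

# Export a .tflite file that can subsequently be brought into the eIQ environment.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("byom_model.tflite", "wb") as f:
    f.write(tflite_model)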

Most modern-day consumers are familiar with cloud computing. However, in recent years a new paradigm known as edge computing has seen a rise in interest.

With this paradigm, not all data gets uploaded to the cloud. Instead, edge nodes, located somewhere between the end-user and the cloud, provide additional processing power. This paradigm has many benefits, such as increased security and privacy, reduced data transfer to the cloud, and lower latency.

More recently, developers have begun to enhance these edge nodes with machine learning capabilities. Doing so helps categorize collected data and filter out unwanted results and irrelevant information. Adding ML to the edge enables applications such as driver distraction detection, smart light switches, intelligent locks, fleet management, surveillance and categorization, and many more.

ML applications have traditionally been designed exclusively by data-science experts with a deep understanding of ML and deep learning. NXP provides a range of inexpensive yet powerful devices, such as the i.MX RT1170 and the i.MX 8M Plus, along with the eIQ ML software development environment, to help open ML up to any designer. Together, this hardware and software aims to let developers of any experience level build future-proof ML applications, regardless of how small or large the project is.

