Artificial Intelligence on the Edge for Embedded Systems

A two-day hands-on workshop on developing an embedded machine learning application

Machine learning algorithms are proving immensely powerful and are replacing hand-crafted solutions in many domains: from robotics to home automation, from safety systems to industrial applications.
The paradigm of AI architecture is shifting towards distributed-AI ecosystems, which bring advantages in terms of energy consumption, functional safety, cyber security and privacy. In short: embedding artificial intelligence adds a competitive advantage to your product.
An increasing number of AI tasks, once handled by cloud or offline software applications, are managed directly on “the edge”, that is, by the electronic system or sensor in the field.
The spectrum widens further when the AI system must be capable of dealing with “unstructured” data.



The Edge-ML training aims at enabling you and your team to unlock your product’s potential by adopting a data-driven strategy. The workshop will give you the skills you need to avoid common pitfalls and prevent carry-over issues.
During the workshop you will develop a complete embedded-machine-learning application from scratch: starting from the architecture design phase, moving on to the model training phase and then deploying the application to the edge for testing.
During each phase you will get to know the most relevant theoretical aspects to be prepared to start your own project.


The course is aimed at embedded software engineers and software development teams willing to start developing embedded machine learning solutions. Familiarity with Eclipse-based development environments and the C/C++ language is recommended, along with a basic understanding of Python.


In this two-day hands-on workshop you will experience the process of developing an embedded machine learning application: you will face the main challenges a development team might run into and understand the choices that lead to designing a robust solution.
Part of this training will be based upon the real platform “Vivaldi”, an AI-based sound recognition platform developed at Bluewind.


The attendees will gain a deep knowledge of the practices and tools in use today for developing and deploying machine learning algorithms on embedded devices. The training will present different approaches by exploiting a set of already tested solutions. The offline model design method will also offer extensive knowledge about how to create a meaningful dataset and lay solid foundations for a robust application to be deployed.
The attendees will learn to use the available development tools (both offline and online) and will be invited to adopt a data-driven mindset for their next project.


Module 1 (Introductory, 1 hour)

Introduction and domain definition following the outline items listed above.
In this module the participants will become familiar with the vocabulary.
This module will map out the application domain of edge AI solutions and describe the main benefits the technology brings.

Lesson 1: ML Introduction
  • introductory outline
  • example applications
  • requirements for the hands-on workshop

Module 2 (Day1, 4 hours)

Vivaldi application hands-on.
The development process of an audio classification application for very-small-footprint MCUs will be presented. The participants will understand the main steps to follow in order to integrate edge AI solutions into an embedded project.
The users will get their hands dirty by installing and testing the tools required for development.

Lesson 2: AI on the edge
  • dataset collection
  • offline feature extraction
  • neural network choice
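As a taste of the offline feature-extraction step listed above, the sketch below computes log-magnitude spectrogram features from an audio signal with plain NumPy. It is an illustrative example only, not the workshop’s actual pipeline; the function name and frame parameters are assumptions.

```python
import numpy as np

def log_spectrogram(signal, frame_len=256, hop=128):
    """Frame the signal, apply a Hann window, and return log-magnitude
    spectra -- a common offline feature for audio classification.
    (Illustrative sketch; parameters are hypothetical defaults.)"""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    feats = []
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame))
        feats.append(np.log(spectrum + 1e-6))  # log compression
    return np.stack(feats)  # shape: (n_frames, frame_len // 2 + 1)

# Example: one second of a synthetic 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
features = log_spectrogram(np.sin(2 * np.pi * 440 * t))
print(features.shape)
```

Features like these, computed offline over the collected dataset, are what the neural network is trained on before the pipeline is ported to the device.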
Lesson 3: Hands on Vivaldi Example
  • on device feature extraction
  • ANN integration with embedded software tools
  • Optimization
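One optimization typically applied before deploying to a small MCU is post-training 8-bit quantization. The sketch below shows only the underlying affine-quantization arithmetic (real = scale × (q − zero_point)) on a toy weight array; it is a simplified illustration, not the tooling used in the workshop.

```python
import numpy as np

def quantize_int8(x):
    """Affine int8 quantization of a float array (illustrative sketch).
    real value = scale * (q - zero_point)."""
    lo, hi = float(x.min()), float(x.max())
    lo, hi = min(lo, 0.0), max(hi, 0.0)   # range must include zero
    scale = (hi - lo) / 255.0
    zero_point = int(round(-128 - lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

weights = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale, zp = quantize_int8(weights)
dequant = scale * (q.astype(np.float32) - zp)
print(np.max(np.abs(dequant - weights)))  # worst-case quantization error
```

Quantizing weights and activations to int8 cuts model size roughly fourfold and lets the MCU use integer arithmetic, at the cost of a small, bounded rounding error per value.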

Module 3 (Day2, 4 hours)

Embedded gesture recognition for industrial automation.
The development process of a gesture recognition system based on a very-small-footprint MCU will be presented.
This example of machine vision demonstrates the potential of image processing at the edge.
The users will get their hands dirty by installing and testing the tools required for development.

Lesson 4: A new paradigm of H2M interaction
  • remote control of industrial equipment
  • matching voice and gesture control
Lesson 5: prototyping a gesture recognition embedded device
  • On-device feature extraction
  • ANN integration with TensorFlow Lite for MCUs
  • Performance comparison
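A first step in the performance comparison above is a back-of-the-envelope estimate of a candidate model’s footprint before it is ever flashed to the device. The sketch below counts parameters and multiply-accumulate operations for a tiny fully-connected classifier; the layer sizes are purely illustrative, not the workshop’s model.

```python
# Footprint estimate for a tiny fully-connected classifier.
# (inputs, outputs) per layer -- sizes are hypothetical examples.
layers = [(32 * 32, 64), (64, 32), (32, 4)]

params = sum(i * o + o for i, o in layers)  # weights + biases
macs = sum(i * o for i, o in layers)        # multiply-accumulates per inference
flash_bytes = params                        # ~1 byte per parameter when int8-quantized
print(f"{params} parameters, {macs} MACs, ~{flash_bytes / 1024:.1f} KiB flash")
```

Comparing such estimates against the target MCU’s flash, RAM, and clock budget quickly rules out architectures that cannot fit, before any on-device measurement.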