September 6, 2022 9:00 AM
Image classification has been a core focus of deep learning for many years. However, many computer vision applications require knowing where objects are in an image and the ability to count the number of objects, which goes far beyond simple image classification. This is where object detection comes in.
Object detection models are capable of finding objects of interest in an image and providing details about those objects, such as their classification, location, size, and relative distance from the camera. A handful of object detection models, such as MobileNet V2 SSD and YOLOv5, are optimized for low-power systems, including smartphones and single-board computers. However, most microcontrollers are still incapable of running such models due to their processing and memory limitations.
Edge Impulse has developed a new technique named “Faster Objects, More Objects” (FOMO) that performs constrained object detection on low-power devices, such as microcontrollers. FOMO provides the location of target objects in an image, but it does not produce bounding boxes, so it does not report object size or distance. As a result, it requires up to 30x less processing power and memory than MobileNet V2 SSD or YOLOv5. In this talk, we will describe object detection, explain how FOMO works, and provide a live demonstration of constrained object detection on a microcontroller.
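To make the idea concrete, FOMO-style models output per-cell class probabilities on a coarse grid rather than bounding boxes, and object locations fall out as cell centroids. The following is a minimal sketch of that post-processing step; the grid layout, threshold, and class names are illustrative assumptions, not Edge Impulse's actual API.

```python
# Illustrative sketch only: converts a FOMO-style per-cell probability grid
# into object centroids. The grid format, "background" class, and threshold
# are assumptions for demonstration, not Edge Impulse's real output format.

THRESHOLD = 0.5  # assumed confidence cutoff

def grid_to_centroids(grid, cell_size):
    """Return (x, y, label, score) image-space centroids for every grid
    cell whose best non-background class probability exceeds THRESHOLD."""
    detections = []
    for row, cells in enumerate(grid):
        for col, scores in enumerate(cells):
            label, score = max(scores.items(), key=lambda kv: kv[1])
            if label != "background" and score >= THRESHOLD:
                # Centre of the cell, scaled back to image coordinates.
                x = (col + 0.5) * cell_size
                y = (row + 0.5) * cell_size
                detections.append((x, y, label, score))
    return detections

# Example: a 2x2 output grid from a 96x96 input (cell_size = 48).
grid = [
    [{"background": 0.9, "cup": 0.1}, {"background": 0.2, "cup": 0.8}],
    [{"background": 0.95, "cup": 0.05}, {"background": 0.97, "cup": 0.03}],
]
print(grid_to_centroids(grid, cell_size=48))  # one "cup" near (72.0, 24.0)
```

Because the model only predicts a coarse grid of class scores instead of regressing box coordinates, both the network and this post-processing stay small enough for microcontroller-class memory budgets.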
This Vision Systems Design webinar will explore the key role machine vision plays in smart factories, where automated manufacturing lines will be able to self-adjust to maximize quality, output, and profitability.
Don't miss Edge AI Vision Alliance's virtual workshop to learn how the Sony Spresense and Edge Impulse's FOMO object detection algorithm make a compelling combination for computer vision workloads.