Deep Learning at the Edge Simplifies Package Inspection – Vision Systems Design
By Brian Benoit, Senior Manager Product Marketing, In-Sight Products, Cognex
Machine vision helps the packaging industry improve process control, improve product quality, and comply with packaging regulations. By removing human error and subjectivity with tightly controlled processes based on well-defined, quantifiable parameters, machine vision automates a variety of package inspection tasks. Machine vision tasks in the packaging industry include label inspection, optical character reading and verification (OCR/OCV), presence-absence inspection, counting, safety seal inspection, measurement, barcode reading, identification, and robotic guidance.
Machine vision systems deliver consistent performance when dealing with well-defined packaging defects. Parameterized, analytical, rule-based algorithms analyze package or product features captured within images that can be mathematically defined as either good or bad. However, analytical machine vision tools get pushed to their limits when potential defects are difficult to numerically define and the appearance of a defect significantly varies from one package to the next, making some applications difficult or even impossible to solve with more traditional tools.
In contrast, deep learning software relies on example-based training and neural networks to analyze defects, find and classify objects, and read printed characters. Instead of relying on engineers, systems integrators, and machine vision experts to tune a unique set of parameterized analytical tools until application requirements are satisfied, deep learning relies on operators, line managers, and other subject-matter experts to label images. By showing the deep learning system what a good part looks like and what a bad part looks like, deep learning software can make a distinction between good and defective parts, as well as classify the type of defects present.
Not so long ago, perhaps a decade, deep learning was available only to researchers, data scientists, and others with big budgets and highly specialized skills. However, over the last few years many machine vision system and solution providers have introduced powerful deep learning software tools tailored for machine vision applications.
In addition to VisionPro Deep Learning software from Cognex (Natick, MA, USA; http://www.cognex.com), Adaptive Vision (Gliwice, Poland; http://www.adaptive-vision.com) offers a deep learning add-on for its Aurora Vision Studio; Cyth Systems (San Diego, CA, USA; http://www.cyth.com) offers Neural Vision; Deevio (Berlin, Germany; http://www.deevio.ai) has a neural net supervised learning mode; MVTec Software (Munich, Germany; http://www.mvtec.com) offers MERLIC; and numerous other companies offer open-source toolkits to develop software specifically targeted at machine vision applications.
However, one common barrier to deploying deep learning in factory automation environments is the level of difficulty involved. Deep learning projects typically consist of four project phases: planning, data collection and ground truth labeling, optimization, and factory acceptance testing (FAT). Deep learning also frequently requires many hundreds of images and powerful hardware in the form of a PC with a GPU used to train a model for any given application. But, deep learning is now easier to use with the introduction of innovative technologies that process images at the edge.
Deep learning at the edge (edge learning), a subset of deep learning, uses a set of pretrained algorithms that process images directly on-device. Compared with more traditional deep learning-based solutions, edge learning requires less time and fewer images, and involves simpler setup and training.
Edge learning requires no automation or machine vision expertise for deployment and consequently offers a viable automation solution for everyone, from machine vision beginners to experts. Instead of relying on engineers, systems integrators, and machine vision experts, edge learning uses the existing knowledge of operators, line engineers, and others to label images for system training.
Consequently, edge learning helps line operators looking for a straightforward way to integrate automation into their lines as well as expert automation engineers and systems integrators who use parameterized, analytical, rule-based machine vision tools but lack specific deep learning expertise. By embedding efficient, rules-based machine vision within a set of pretrained deep learning algorithms, edge learning devices provide the best of both worlds, with an integrated tool set optimized for packaging and factory automation applications.
With a single smart camera-based solution, edge learning can be deployed on any line within minutes. This solution integrates high-quality vision hardware, machine vision tools that preprocess images to reduce computational load, deep learning networks pretrained to solve factory automation problems, and a straightforward user interface designed for industrial applications.
Edge learning differs from existing deep learning frameworks in that it is not general purpose but is specifically tailored for industrial automation. And, it differs from other methods in its focus on ease of use across all stages of application deployment. For instance, edge learning requires fewer images to achieve proof of concept, less time for image setup and acquisition, no external GPU, and no specialized programming.
Developing a standard classification application using traditional deep learning methodology may require hundreds of images and several weeks. Edge learning makes defect classification much simpler. By analyzing multiple regions of interest (ROIs) in its field of view (FOV) and classifying each of those regions into multiple categories, edge learning lets anyone quickly and easily set up sophisticated assembly verification applications.
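To make the multi-ROI idea concrete, here is a minimal sketch of classifying several regions of interest in one image with a nearest-centroid rule. The ROI coordinates, class names, and the use of mean intensity as a feature are illustrative assumptions, not Cognex's actual algorithm.

```python
import numpy as np

def classify_roi(image, roi, centroids):
    """Crop an ROI (x, y, w, h) and assign the nearest class centroid."""
    x, y, w, h = roi
    crop = image[y:y + h, x:x + w]
    feature = crop.mean()  # toy 1-D feature; real systems learn richer ones
    return min(centroids, key=lambda cls: abs(centroids[cls] - feature))

# Synthetic 100x100 "tray" image with two sections of different brightness.
tray = np.zeros((100, 100))
tray[10:40, 10:40] = 200.0   # bright section
tray[60:90, 60:90] = 50.0    # dark section

# Class centroids learned from a handful of labeled examples (hypothetical).
centroids = {"pasta": 190.0, "rice": 60.0}

rois = {"top_left": (10, 10, 30, 30), "bottom_right": (60, 60, 30, 30)}
labels = {name: classify_roi(tray, roi, centroids) for name, roi in rois.items()}
print(labels)  # {'top_left': 'pasta', 'bottom_right': 'rice'}
```

Each ROI is evaluated independently, so a single camera pass can verify every section of an assembly at once.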
In the food packaging industry, edge learning technology is increasingly being used for verification and sorting of frozen meal tray sections. In many frozen meal packing applications, robots pick and place various food items into trays passing by on a high-speed line. For example, robots may place protein in the bottom center section, vegetables in the top left section, a side dish or dessert item in the top middle section, and some type of starch in the top right section of each tray.
Each section of a tray may contain multiple SKUs. For example, the protein section may include either meat loaf, turkey, or chicken. The starch section may contain pasta, rice, or potatoes. Edge learning makes it possible for operators to click and drag bounding boxes around characteristic features on a meal tray, defining the tray sections used for training.
Next, the operator reviews a handful of images, labeling examples of each possible class. Frequently, this can be done in a few minutes, with as few as three to five images per class. During high-speed operation, the edge learning system can then accurately classify the different sections. To accommodate entirely new classes, or new varieties of existing classes, during production, the tool can be updated with a few images in each new category.
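The "few images per class, add a class later" workflow described above can be sketched with a nearest-centroid classifier. The feature (mean intensity) and the class names are illustrative assumptions, not the vendor's implementation.

```python
import numpy as np

class FewShotClassifier:
    def __init__(self):
        self.sums = {}    # per-class running sum of features
        self.counts = {}  # per-class image count

    def add_examples(self, cls, images):
        """Label a few images as `cls`; new classes can be added at any time."""
        feats = [img.mean() for img in images]
        self.sums[cls] = self.sums.get(cls, 0.0) + sum(feats)
        self.counts[cls] = self.counts.get(cls, 0) + len(feats)

    def predict(self, image):
        f = image.mean()
        return min(self.sums, key=lambda c: abs(self.sums[c] / self.counts[c] - f))

clf = FewShotClassifier()
rng = np.random.default_rng(0)
# Three labeled images per class, matching the few-image setup described above.
clf.add_examples("meatloaf", [rng.normal(40, 2, (32, 32)) for _ in range(3)])
clf.add_examples("turkey",   [rng.normal(120, 2, (32, 32)) for _ in range(3)])
pred = clf.predict(rng.normal(118, 2, (32, 32)))
print(pred)  # turkey

# A new SKU appears mid-production: add it with a few images, no retraining.
clf.add_examples("chicken", [rng.normal(200, 2, (32, 32)) for _ in range(3)])
pred2 = clf.predict(rng.normal(198, 2, (32, 32)))
print(pred2)  # chicken
```

Because the "model" is just per-class statistics, updating it with a new category is as cheap as labeling a few more images, which is the property the text attributes to edge learning.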
For complex or highly customized applications, traditional deep learning is an ideal solution because it provides the capacity to process large and highly detailed image sets. Often, such applications involve objects with significant variations, which demands robust training capabilities and advanced computational power. Image sets with hundreds or thousands of images must be used for training to account for such significant variation and to capture all potential outcomes.
Enabling users to analyze such image sets quickly and efficiently, traditional deep learning delivers an effective solution for automating sophisticated tasks. Full-fledged deep learning products and open-source frameworks are well-designed to address complex applications. However, many factory automation applications entail far less complexity, making edge learning a more suitable solution.
With algorithms designed specifically for factory automation requirements and use cases, edge learning eliminates the need for an external GPU and hundreds or thousands of training images. Such pretraining, supported by appropriate traditional parameterized analytical machine vision tools, can vastly improve many machine vision tasks. The result is edge learning, which combines the power of deep learning with a light and fast set of vision tools that line engineers can apply daily to packaging problems and other factory automation challenges.
Compared with deep learning solutions that can require hours to days of training and hundreds to thousands of images, edge learning tools are typically trained in minutes using a few images per class. Edge learning streamlines deployment to allow fast ramp-up for manufacturers and the ability to adjust quickly and easily to changes.
This ability to find variable patterns in complex systems makes deep learning machine vision an exciting solution for inspecting objects with inconsistent shapes and defects, such as flexible packaging in first aid kits.
For the purposes of edge learning, Cognex has combined traditional analytical machine vision tools in ways specific to the demands of each application, eliminating the need to chain vision tools or devise complex logic sequences. Such tools offer fast preprocessing of images and the ability to extract density, edge, and other feature information that is useful for detecting and analyzing manufacturing defects. By finding and clarifying the relevant parts of an image, these tools reduce the computational load of deep learning.
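As a rough illustration of how rule-based preprocessing shrinks the work a learned model must do, the sketch below condenses an image into per-tile density and edge-strength features. The specific features and the 8x8 block size are illustrative assumptions, not Cognex's tooling.

```python
import numpy as np

def preprocess(image, block=8):
    """Summarize each block x block tile by its density and edge strength."""
    h, w = image.shape
    gy, gx = np.gradient(image.astype(float))  # per-axis intensity gradients
    edge = np.hypot(gx, gy)                    # gradient magnitude = edge map
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            tile = image[r:r + block, c:c + block]
            etile = edge[r:r + block, c:c + block]
            feats.append((tile.mean(), etile.mean()))  # (density, edge)
    return np.array(feats)

img = np.zeros((32, 32))
img[8:24, 8:24] = 255.0  # a bright square with sharp edges
feats = preprocess(img)
# 16 tiles, each reduced to 2 numbers: 1024 raw pixels -> 32 feature values.
print(feats.shape)  # (16, 2)
```

Feeding 32 summary values instead of 1,024 raw pixels into a downstream network is one simple way preprocessing can cut the computational load, as the paragraph above describes.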
For example, Cognex's In-Sight 2800 vision system packs sophisticated hardware into a small form factor and runs edge learning entirely on the camera. The embedded smart camera platform includes an integrated autofocus lens, lighting, and an image sensor. The heart of the device is a 1.6-MPixel sensor.
An autofocus lens keeps the object of interest in focus, even as the FOV or distance from the camera changes. Smaller and lighter than equivalent mechanical lenses, liquid autofocus lenses also offer improved resistance to shock and vibration.
Key for a high-quality image, the smart camera is available with integrated lighting in the form of a multicolor torchlight that offers red, green, blue, white, and infrared options. To maximize contrast, minimize dark areas, and bring out necessary detail, the torchlight comes with field-interchangeable optical accessories such as lenses, color filters, and diffusers, increasing system flexibility for handling numerous applications.
The In-Sight 2800 vision system runs on 24 V power, has an IP67-rated housing, and offers Gigabit Ethernet connectivity for fast communication and image offloading. This edge learning-based platform also includes traditional analytical machine vision tools that can be parameterized for a variety of specialized tasks, such as location, measurement, and orientation.
Training edge learning is like training a new employee on the line. Edge learning users don't need to understand machine vision systems or deep learning; they only need to understand the classification problem to be solved. If it is straightforward (for instance, classifying acceptable and unacceptable parts as OK/NG), the user only needs to know which items are acceptable and which are not.
Line operators can also contribute process knowledge that is not readily apparent in the image itself, such as insights derived from testing farther down the line, which can reveal defects that even humans find hard to detect. Edge learning is particularly effective at figuring out which variations in a part are significant and which are purely cosmetic and do not affect functionality.
Edge learning is not limited to binary classification into OK/NG; it can classify objects into any number of categories. If parts need to be sorted into three or four distinct categories, depending on components or configurations, that can be set up just as easily.
To simplify factory automation and handle machine vision tasks of varying complexity, edge learning is useful in a wide range of industries, including medical, pharmaceutical, and beverage packaging applications.
Automated visual inspection is essential for supporting packaging quality and compliance while improving packaging line speed and accuracy. Fill level verification is an emerging use of edge learning technology. In the medical and pharmaceutical industries, vials filled with medication to a preset level must be inspected before they are capped and sealed to confirm that levels are within proper tolerances.
Unconfused by reflection, refraction, or other image variations, edge learning can be easily trained to verify fill levels. Fill levels that are too high or too low can be quickly classified as NG, while only those within the proper tolerances are classified as OK.
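For contrast with the learned approach described above, here is a simple rule-based baseline showing what OK/NG fill verification means: find the top of the liquid in a synthetic vial image and compare it against a tolerance band. The threshold, tolerances, and image model are illustrative assumptions.

```python
import numpy as np

def fill_level_ok(vial, target_row, tol_rows, threshold=100):
    """Return 'OK' if the first liquid row lies within target_row +/- tol_rows."""
    liquid_rows = np.where(vial.mean(axis=1) > threshold)[0]
    if liquid_rows.size == 0:
        return "NG"  # empty vial
    top = liquid_rows[0]  # first row bright enough to count as liquid
    return "OK" if abs(top - target_row) <= tol_rows else "NG"

def make_vial(fill_top, height=100, width=20):
    """Synthetic vial: liquid (bright) from fill_top down, background above."""
    img = np.zeros((height, width))
    img[fill_top:, :] = 200.0
    return img

ok = fill_level_ok(make_vial(50), target_row=50, tol_rows=3)
over = fill_level_ok(make_vial(40), target_row=50, tol_rows=3)
under = fill_level_ok(make_vial(60), target_row=50, tol_rows=3)
print(ok, over, under)  # OK NG NG
```

A rule like this works on clean synthetic images; the article's point is that reflection and refraction in real vials break such fixed thresholds, which is where example-trained classification has the advantage.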
Another emerging use of edge learning technology is cap inspection in the beverage industry. Bottles are filled with soft drinks and juices and sealed with screw caps. If the rotary capper cross-threads a cap, applies improper torque, or causes other damage during the capping process, it can leave a gap that allows for contamination or leakage.
To train an edge learning system in capping, images showing well-sealed caps are labeled as good; images showing caps with slight gaps, which might be almost imperceptible to the human eye, are labeled as no good. After training is complete, only fully sealed caps are categorized as OK. All other caps are classified as NG.
While challenges for traditional rule-based machine vision continue to mount as packaging applications grow more complex, easy-to-use edge learning on embedded smart camera platforms has proved to be a game-changing technology. Edge learning is more capable than traditional analytical machine vision tools and makes previously challenging applications far easier to solve.