VISION 2018: “Training, Not Programming” with Deep Learning Solutions
- Demonstrating a complex inspection task as a deep learning application on an FPGA with a data rate of over 220 MB per second (image credit: Silicon Software)
- Screenshot of SuaKIT v2.0’s Function Image_Visual Debugger (image credit: SUALAB)
- An embedded vision system for identifying and counting people and vehicles via region-based convolutional neural networks (R-CNN; image credit: Neadvance)
Spurred by ever-increasing computing performance and methodological breakthroughs, deep learning has developed into a megatrend in the machine vision industry – one that will help shape VISION 2018. Florian Niethammer, project manager for VISION at Messe Stuttgart, sees these technical advancements as compelling reasons why this year’s event will be one of the most thrilling ever. “On 6-8 November in Stuttgart, it’s going to be tremendously exciting to see how exhibitors present a topic as talked-about as deep learning and link it to conventional machine vision, as well as embedded vision,” Niethammer affirms. “In line with our campaign motto, ‘BE VISIONARY’, we’re expecting a veritable explosion of new products and solutions that many wouldn’t have even imagined just two years ago at the last VISION.”
Representing a subdomain of machine learning and artificial intelligence, deep learning systems follow a technological approach that is fundamentally different from current machine vision techniques. “These systems employ neural networks,” explains Vassilis Tsagaris, a VISION exhibitor and the CEO of Irida Labs. “The term ‘deep learning’ refers to the typically large number of hidden layers such networks contain.” Systems based on deep learning structures are unique in their ability to “analyse huge volumes of digital image data, which makes it possible to train them to recognise models of certain objects,” offers Dr Olaf Munkelt, managing director of MVTec Software GmbH and another veteran VISION exhibitor. “With the help of this training data, the classifier in question then learns how to differentiate between the classes specified.”
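The principle the exhibitors describe – hidden layers learning a classification from labelled examples rather than from hand-coded rules – can be illustrated with a minimal sketch. The following toy network (plain numpy, all layer sizes and hyperparameters chosen purely for illustration; it is not taken from any exhibitor’s product) learns the XOR pattern, a classic task that no single-layer, rule-free linear model can solve but one hidden layer can:

```python
import numpy as np

# "Training, not programming": a tiny network with one hidden layer
# learns to separate two classes (the XOR pattern) from labelled data.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # class labels

# Randomly initialised weights: input -> hidden (8 units) -> output.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(20000):
    # Forward pass through the hidden layer to the output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error for each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# After training, threshold the network's output to get class predictions.
pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(pred.ravel())
```

The same mechanism – adjusting weights to reduce error on training examples – is what scales up, with many more layers and convolutional structure, to the image-classification systems discussed at VISION.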
Flexible decision-making offers several advantages
“The strength of deep learning lies in its ability to make more flexible decisions than the sets of predefined rules you find in conventional machine vision systems,” points out Volker Gimple, who heads the machine vision group at Stemmer Imaging AG. “Deep learning offers an edge whenever you have test objects with large variations that make them difficult to model mathematically,” adds Dr Klaus-Henning Noffz, managing director of Silicon Software.
In other words, deep learning presents an alternative in cases where conventional machine vision systems reach their limits. “The main challenges these systems face include changes in the optical conditions at hand, increasing product diversity, and the complexity of the images themselves,” reports Hanjun Kim, a marketing manager at SUALAB. “Even in areas where machine vision has already been implemented, adding deep learning can drastically boost the speed and accuracy of the inspection process.” SUALAB, a South Korean software company, will be exhibiting for the first time at VISION 2018; it plans to unveil SuaKIT v2.0, deep learning software it has developed for machine vision inspection.
A variety of applications already in use
Today, deep learning is already being incorporated into applications where machine vision handles the classification of the test object in question. Dr Noffz offers an example from automotive manufacturing: “With the help of deep learning, self-learning algorithms can detect every single tiny flaw in the paint – even those invisible to the naked eye,” he explains.
Meanwhile, the food and beverage industry – an area that has garnered increasing attention and earned its own label at VISION in recent years – is also benefiting from deep learning technologies. “Here, for example, it’s possible to identify and inspect low-quality fruits and vegetables with significant precision before they’re packaged or processed further,” reports Dr Munkelt. Dr Christopher Scheubel, head of IP and business development at Framos (which has exhibited at VISION since its first event), also describes an application in which deep learning is being used to classify and sort packaging for a food retail company.
Deepsense, another company set to exhibit for the first time at VISION 2018, is preparing to present a solution for visual quality control that excels at inspecting objects with complex patterns (such as wood or textiles) without the need for tedious manual configuration. Robert Bogucki, chief science officer at Deepsense, also sees great potential in applying deep learning to the field of healthcare in the future.
Will deep learning supplement or supplant established systems?
While a number of hurdles remain in the application of deep learning – the amount of time and effort necessary for execution and the training of neural networks, for example – companies like Framos are confident that deep learning will dominate virtually every method of classification (such as in quality assurance or sorting) in the medium term. Dr Noffz is also a believer. “By shifting the focus from programming to training such systems, deep learning can achieve widespread use. Tasks related to classification, for instance, are much easier to handle than they would be with algorithmic techniques,” he explains. “Neural networks are especially suited to many other activities as well, including those involving reflective surfaces, environments with inadequate lighting, moving objects, robotics, and 3D.” Portugal’s Neadvance, another first-time exhibitor coming to VISION 2018, shares this view. “Areas where object recognition or object classification is the primary goal will definitely drift from traditional approaches to deep learning,” the company states. “Texture analysis, template matching, OCR, pose estimation, urban scene analysis and interpretation, and handwritten text recognition are some examples.”
Combining deep learning with traditional machine vision can nevertheless make sense when it comes to ensuring 100% classification, as Irida Labs’ Vassilis Tsagaris affirms. “It won’t be long before we start seeing more and more ‘hybrid’ systems,” he predicts. “Most of the time, you need both deep learning and computer vision algorithms that have proven to be robust.” Volker Gimple also believes that there are many fields in which conventional methods can hold their own thanks to a crucial characteristic that machine learning approaches typically lack: “The ability to track decisions – including flawed ones,” as Gimple puts it.
Deep learning on embedded devices
Meanwhile, deep learning can also be applied to embedded vision devices. “The widely used embedded board NVIDIA Jetson TX2 also supports deep learning inference in MVTec’s HALCON software,” Dr Munkelt confirms. Particularly in Industry 4.0’s decentralised approach to computing, he is seeing increasing demand for embedded vision in connection with deep learning solutions in which small machine vision units or intelligent cameras can take on sophisticated aspects of various tasks. At VISION 2018, Silicon Software is planning to unveil a deep learning solution in its own VisualApplets environment on a field-programmable gate array (FPGA). Irida Labs will also present a similar link between the megatrends of embedded vision and deep learning at VISION 2018. Its DeepAPI framework – a library for implementing deep learning on any embedded device – can be used for product quality inspections even when only a limited number of images are available.