Edge or Cloud: It’s All Embedded
“It’s clear that we’ve crossed a critical threshold.”
From May 21 to 24, the Santa Clara Convention Center again hosted the annual Embedded Vision Summit, gathering over 1,000 computer vision experts, users and investors from all over the world. inspect met Jeff Bier, founder of the Embedded Vision Alliance and organizer of the event.
inspect: Jeff, what was your overall impression of this year’s Embedded Vision Summit? Compared with past Summits, what was new, what was different?
Jeff Bier: From the presentations, exhibits, and attendees at this year’s Embedded Vision Summit, it’s clear that we’ve crossed a critical threshold: it is now feasible to deploy vision in many applications where it was previously impractical. It’s exciting to see the creativity of engineers and business people who are applying this technology to solve important problems, from monitoring the health of elderly people to optimizing the production of large greenhouses.
inspect: Which major trends have you identified that will drive and accelerate the deployment of embedded vision systems in the near future?
Jeff Bier: At this year’s Embedded Vision Summit, I observed four interesting trends:
- Of course, algorithms are central to computer vision – algorithms are what transform pixels into actionable information. Lately, algorithms have been improving rapidly, thanks to deep learning, and this is enabling many new vision applications.
- Whether based on deep learning or other techniques, vision algorithms are very computationally demanding. For many applications, it’s critical to have processors that can deliver enormous processing performance at low cost and power consumption. Here again, the rate of improvement has accelerated dramatically.
- Similarly, now that 3D imaging sensors are being adopted in high-volume applications like mobile phones, the cost, size and power consumption of these sensors are all dropping fast. This means that many applications can now incorporate 3D vision.
- Cloud computing is playing a growing role in vision. Even for edge devices, in many cases developers have the option of doing some or all of their vision processing in the cloud. This can significantly simplify product development for some types of applications.
inspect: Deep learning seems to be taking the field by storm. In which fields of application is it already commonly used today, and which applications do you expect to come next?
Jeff Bier: Deep learning is being used to some extent in most vision application domains today. According to the Embedded Vision Alliance’s Computer Vision Developer Survey, developers in security/surveillance and automotive markets are most likely to be using deep learning. More mature industries such as medical and industrial equipment are less likely to be using deep learning – but even in these segments, we find over one third of developers are using deep learning.
inspect: During the Summit, Intel’s new 3D camera was awarded Best Camera of the Year. Why is 3D vision becoming such an important factor in vision systems?
Jeff Bier: We live in a three-dimensional world, and for many applications, such as mobile and stationary robots, understanding the world in three dimensions is critical. We can often infer 3D information from 2D sensors, but having 3D sensors can significantly simplify things.
inspect: Deep learning and 3D vision, among other techniques, require a lot of computational power. What progress can we expect from processors, algorithms, frameworks etc.?
Jeff Bier: In many technology domains, we are accustomed to a slow, steady progress which, over a period of years, yields big improvements. But in some fields, at some points in time, technology advances very quickly. We’re now experiencing a period of very rapid improvement in processors, sensors, algorithms and development tools for computer vision applications. For example, I estimate that we will see improvements of over 1,000x in performance/cost and performance/power for deployment of deep learning inference within just two or three years, thanks to improvements in algorithms, software tools and processors. This kind of rapid improvement really changes the game in terms of what can be done practically and economically.
inspect: We have learned that embedded vision does not necessarily mean edge computing but may also include cloud-based solutions. What benefits does the cloud offer compared to the edge, and vice-versa?
Jeff Bier: Edge and cloud computing both have significant advantages. Cloud computing offers great resources to help developers quickly and economically create, test, deploy and scale applications. In other words, it’s easier to create solutions using the cloud. On the other hand, edge processing can achieve faster response times and better privacy, with no recurring costs for data transmission and computation.
inspect: The Embedded Vision Alliance was founded in 2011. How many member companies does it have today, and what types of companies are they? What services do you offer your members, and how can the Alliance support their business development?
Jeff Bier: The Embedded Vision Alliance now comprises 85 member companies. For most of our seven-year history, we have focused our membership benefits on companies that provide building-block technologies, such as processors, algorithms and camera modules. The Alliance helps these companies connect with new customers and partners, and to get early insights into key technology and market trends. More recently, the Alliance has established a new membership class for companies designing systems and solutions. The Alliance helps these companies accelerate the incorporation of vision into their products, for example by providing training for technical and business people, and by helping them find the best technologies and suppliers to meet their needs.
Presentations from the Summit can be downloaded from the Embedded Vision Alliance’s website. The next Embedded Vision Summit will take place from May 20 to 23, 2019, again in Santa Clara.