Trends & Visions from the Embedded Vision Summit
From 1 to 3 May, experts, suppliers, users and investors met at the Santa Clara Convention Center.
"What's hot?", Rudy Burger of Woodside Capital Partners summed it up. The industry is electrified by mega deals, such as the 15 billion US $ takeover of the Israeli automotive specialist Mobileye by Intel. It was all agreed that Computer Vision is at a decisive turning point, away from a niche technology, towards the mass product: "Vision for all!" Jeff Bier, founder of the Embedded Vision Alliance, organizer of the Embedded Vision Summit, has five current events Vision trends, which he believes will have a significant impact on industry and society over the coming years.
Trend 1: The flood of data
Digital cameras are becoming cheaper and ever better, and they are now ubiquitous - every smartphone alone carries at least two of them. If you multiply the amount of data per image by the huge number of images and videos recorded daily, and by the ever-increasing number of cameras, it becomes clear that image sensors will produce far more data than all other types of sensors combined.
Michael Tusch, CEO of Apical, illustrates this: in 2016, approximately 120 million IP-capable cameras (such as surveillance cameras) were installed worldwide. If these cameras alone streamed their HD video online at a data rate of 10 Mbps each, they would trigger a data tsunami of about 400 exabytes per month (1 exabyte = 1 billion gigabytes). For comparison: total global IP traffic is currently "only" about 100 exabytes per month. Tusch is therefore convinced that in the future only very few IP cameras will stream their data; instead, intelligent strategies for data condensation are required.
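Tusch's figure can be checked with simple back-of-the-envelope arithmetic, using only the numbers stated above (120 million cameras, 10 Mbps each, and the article's definition of an exabyte):

```python
# Back-of-the-envelope check of Tusch's estimate.
# 1 exabyte = 1 billion gigabytes = 10**18 bytes, as defined in the text.
cameras = 120_000_000          # installed IP cameras (2016 estimate)
bitrate_bps = 10_000_000       # 10 Mbps per camera
seconds_per_month = 30 * 24 * 3600

total_bytes = cameras * bitrate_bps * seconds_per_month / 8  # bits -> bytes
exabytes_per_month = total_bytes / 10**18
print(round(exabytes_per_month))  # about 389, i.e. roughly 400 EB/month
```

The result of roughly 389 exabytes per month matches the "about 400 exabytes" quoted above, several times today's total global IP traffic.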
Trend 2: Deep Learning
Declared dead many times, deep learning algorithms such as CNNs (Convolutional Neural Networks) can now not only identify people and objects in images with unprecedented accuracy, but also understand the relationships and interactions between them. Combining this with the flood of data from Trend 1 opens up unimagined possibilities.
Trend 3: 3D Sensing
Camera technologies such as time-of-flight, structured light, or stereo vision enable computers not only to see the world around them, but also to recognize structures and scales: How far away is something?
How big is it? In the near future, smartphones are expected to carry such 3D cameras, turning them into potential 3D scanners. This leads directly to Trend 4.
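The time-of-flight principle mentioned above reduces to one formula: a light pulse travels to the object and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name and the 10 ns example are illustrative, not from the text):

```python
# Time-of-flight principle: distance = (speed of light * round-trip time) / 2.
C = 299_792_458  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to an object from the measured round-trip time of a light pulse."""
    return C * round_trip_s / 2

# A pulse returning after 10 nanoseconds corresponds to an object about 1.5 m away.
print(f"{tof_distance_m(10e-9):.2f} m")
```

The nanosecond timescales involved are why dedicated sensor hardware, rather than an ordinary camera, is needed for this kind of depth measurement.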
Trend 4: SLAM
SLAM (Simultaneous Localization and Mapping) refers to algorithms that enable an intelligent device to construct a 3D map of its environment while simultaneously determining its own position within it. This technology is essential for robots or self-driving cars to navigate and act safely in their environment. Jeff Bier is convinced: "SLAM will be the next GPS."
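The two coupled tasks - tracking one's own pose and building a map from observations - can be illustrated in a toy 2D sketch. This is not a real SLAM solver (which would jointly estimate poses and landmarks under noise); all names and numbers here are illustrative assumptions:

```python
import math

def integrate_motion(pose, forward, turn):
    """Update the device pose (x, y, heading) from a relative motion estimate."""
    x, y, th = pose
    th += turn
    return (x + forward * math.cos(th), y + forward * math.sin(th), th)

def landmark_position(pose, rng, bearing):
    """Place a range/bearing observation into map coordinates using the current pose."""
    x, y, th = pose
    return (x + rng * math.cos(th + bearing), y + rng * math.sin(th + bearing))

pose = (0.0, 0.0, 0.0)  # start at the origin, facing along x
slam_map = []
# Drive 1 m forward, then observe a landmark 2 m away, 90 degrees to the left.
pose = integrate_motion(pose, 1.0, 0.0)
slam_map.append(landmark_position(pose, 2.0, math.pi / 2))
print(pose, slam_map)  # pose ~ (1, 0, 0), landmark mapped near (1, 2)
```

The key point the sketch makes is the circularity: the landmark's map position depends on the estimated pose, and in full SLAM the pose estimate is in turn corrected by re-observing mapped landmarks.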
Trend 5: The Edge Revolution
Amazingly, the technologies described above are already available to us - at least on large computers and in the cloud. Even more fascinating is the prospect that all of this should soon be possible "on the edge" as well, i.e. in intelligent devices at the ends of the data network, such as smartphones, wearables, or embedded processors in vehicles.
In this context, Jeff Bier predicts that the cost and power consumption of computer vision applications will fall by a factor of 1000 within just three years. He roughly attributes this progress to three sources: 10 times more efficient algorithms, 10 times more specialized processors, and 10 times more powerful frameworks and tools (such as compilers): 10 x 10 x 10 = 1000. This will enable entirely new devices and applications that are safer, more autonomous, easier to use, and more intelligent than we can imagine today.
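Bier's projection is simply the product of three independent order-of-magnitude improvements, which multiply rather than add:

```python
# Bier's decomposition of the predicted 1000x cost/power reduction:
# the three 10x improvements multiply, they do not add.
algorithms, processors, tools = 10, 10, 10
print(algorithms * processors * tools)  # 1000
```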
Presentations from the Embedded Vision Summit can now be downloaded free of charge via the link below. The next Summit will take place in Santa Clara from 22 to 24 May 2018.