Machine Vision

Embedded, PC, Cloud?

What Comes Next for Machine Vision?

09.01.2015

For industrial production, the terms PC, embedded and cloud stand for technologies that are crucial to the intelligent interaction of systems and components in an ever more efficient production process. The systematic advancement of these technologies could drastically alter vision-based solutions in the field of automation. What does this mean for planners, developers and users? Do we have to gear up for a rapid technological revolution?

inspect: What major hardware changes is industrial image processing facing, and will these technological developments be evolutionary or rather revolutionary in character?

K. Borgwarth: This year we identified two basic trends. One was foreseeable: connectivity via FireWire is disappearing. The inexpensive, fast and reliable alternative is USB. The second trend was completely unexpected: we have been offering computers with very high performance for machine vision for a couple of years, but this year the demand for these systems multiplied several times over. Now we understand that this is driven by the chip manufacturers. Resolutions are continually increasing while frame rates are held constant or rise slightly, so the amount of data grows significantly. It is no longer the case that processors simply keep getting faster while the architecture stays constant. Instead, a strong CPU is accompanied by a second strong CPU, or by a graphics card for additional computing power, or by FPGAs. So this year we are at a transition stage where the architecture inside the systems is undergoing a revolutionary step.
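A minimal sketch of the parallelism Borgwarth describes: with single cores no longer getting much faster, throughput comes from adding compute units. The Python example below (the figures and the per-frame task are invented for illustration) spreads frames across CPU worker processes; in a real system a GPU or FPGA would play the same role:

```python
from multiprocessing import Pool

import numpy as np

def process_frame(frame: np.ndarray) -> float:
    """Stand-in for a per-frame vision task (here: mean intensity)."""
    return float(frame.mean())

if __name__ == "__main__":
    # Synthetic Full-HD frames in place of a real camera stream
    frames = [np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
              for _ in range(8)]
    # With flat single-core speed, throughput comes from spreading the
    # frames over several cores - the "second strong CPU" in the text
    with Pool(processes=4) as pool:
        results = pool.map(process_frame, frames)
    print([round(r, 1) for r in results])
```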

inspect: Where will we find the computation-intensive part of industrial image processing in the future: on the sensor, close to the sensor, or outsourced to the Cloud?

T. Evensen: That's an interesting question. The trend I see is that the intelligence is being pushed in both directions - closer to the sensors and also out to the Cloud - and for very different reasons. You want to be close to the sensors, and one reason is that you want to avoid sending uncompressed data all the way up to the Cloud; that takes far too much bandwidth. So you want to do the compression, the image recognition and things like that locally, for bandwidth reasons. The other big reason is latency. You simply can't wait for an answer to come back from the Cloud - when you do sorting, for example. If you had to wait for the Cloud to tell you which way a package should go, it would take too long. So for latency reasons you need local intelligence. On the other hand, there are many situations where you have lots of data from different sensors - searching, pattern recognition and that kind of thing - where you push that kind of intelligence up into the Cloud.
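A back-of-the-envelope calculation makes the bandwidth argument concrete. The sketch below (Python with OpenCV; the resolution and frame-rate figures are assumed, not taken from the interview) estimates the raw data rate of a camera stream and compresses a single frame locally, before anything would leave the device:

```python
import numpy as np
import cv2  # OpenCV; pip install opencv-python

# Assumed sensor parameters (illustrative only)
width, height, channels, fps = 1920, 1080, 3, 60

# Raw data rate of the uncompressed stream
raw_mb_per_s = width * height * channels * fps / 1e6
print(f"Uncompressed: {raw_mb_per_s:.0f} MB/s")  # ~373 MB/s

# Synthetic gradient frame standing in for a real camera grab
frame = np.zeros((height, width, channels), dtype=np.uint8)
frame[:] = np.linspace(0, 255, width, dtype=np.uint8)[None, :, None]

# Compress locally (JPEG) so only a fraction of the data goes upstream
ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
print(f"One raw frame: {frame.nbytes / 1e6:.1f} MB, "
      f"compressed: {jpeg.nbytes / 1e6:.2f} MB")
```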
K.-H. Noffz: Well, I disagree a little bit, because at least for our industry I don't fear that image reconstruction or image processing will move into the Cloud - just to pick up the latency argument. We have real-time demands here, so I see a leading trend towards embedded processing, because what you can see over the last few years is a dramatic increase in sensor size and sensor speed. Colour processing is more prevalent, and we definitely have a high demand for increased processing power. I believe more will be done very close to the sensor, and it must be done by strong and very smart processing in embedded devices, so I see a trend towards smart systems.

inspect: Machine vision does not work without sensor technology. How could changes in hardware architecture affect the future design of cameras and sensors, and will new cameras and sensors become the source of a tremendously increasing amount of data?

D. Ley: With regard to the performance aspects of future applications, we have been focusing over the last couple of years particularly on lowering the total cost of ownership. We have been trying to make leaner products, so that there is no costly architecture in the system that is not needed. If we look at today's mobile devices and recognise their performance, we can well imagine that many of today's PC-based architectures may move toward more embedded kinds of architecture, with, for example, ARM-based processors. These will enable our customers to reduce the cost of a vision system architecture dramatically, from around €3,000 to around €500. And this is a dramatic benefit for many of our current customers, who are interested in acquiring the image as leanly as possible but also in processing it as leanly as possible. So we think there is not only the performance aspect to consider - higher bandwidth, higher frame rates and higher resolutions - but also the question of how you can manage a certain amount of data in the leanest possible fashion, to make the whole machine vision solution as affordable as possible and to allow the vision technology to go into as many applications as possible. We think that embedded technology will be a big enabler in that direction.

K.-H. Noffz: A lot of very important things have already been mentioned, so I won't address the hardware aspect any more. But from my point of view, the key issue for the future success of embedded systems - the most important thing - is the software. How are we able to handle more complex hardware systems in a simple software way? In my opinion, machine vision is already a long way down that road, because the industry has understood this: we have a lot of standardisation initiatives for software interfaces that allow you to use devices from different vendors with the same software interface, and this is an enormous driver. I believe this will become a key question for many devices: will the software of vendor A work in the camera or in the device of vendor B?
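Noffz's point about one software interface across vendors is essentially an abstraction-layer question, in the spirit of machine vision standards such as GenICam. Below is a minimal, hypothetical Python sketch - every class and method name is invented for illustration and does not correspond to any real standard's API:

```python
from abc import ABC, abstractmethod

import numpy as np

class Camera(ABC):
    """Vendor-neutral camera interface (names invented for illustration)."""

    @abstractmethod
    def open(self) -> None: ...

    @abstractmethod
    def grab(self) -> np.ndarray:
        """Return one frame as an array, whatever hardware sits below."""

class VendorACamera(Camera):
    def open(self) -> None:
        print("opening vendor A device")  # a real SDK call would go here

    def grab(self) -> np.ndarray:
        return np.zeros((480, 640), dtype=np.uint8)  # stub frame

def inspect_part(cam: Camera) -> bool:
    """Application code depends only on the interface, not the vendor."""
    cam.open()
    frame = cam.grab()
    return frame.mean() < 128  # trivial stand-in for a real check

print(inspect_part(VendorACamera()))
```

Swapping in a `VendorBCamera` would leave `inspect_part` untouched - which is exactly the portability question posed above.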

inspect: Does Moore's Law mean that "the sky is the limit" for chip development and that more image data will be processed faster and cheaper after each step of development?

T. Evensen: Yes, Moore's Law - it's interesting, people have been proclaiming it dead for a long time, and yet it still seems to be going strong. But it is definitely true that development is slowing down a little. It used to be the case that you could reliably say that transistor density doubled every two years or so - that is, you get more transistors onto a given area. Along with that comes the expectation of lower power consumption, lower cost and so on. Nowadays, when we go from one node to the next, it is very hard to get the performance, the power and the lower cost all at once - the cost in particular is very, very difficult to achieve. So what I see instead is a system-level way of thinking, where you get the performance out of the whole system rather than just relying on transistors becoming faster and smaller. This will bring new architectures, where you have specialised systems for specialised problems. I think that's probably the reason Moore's Law seems to be slowing down a little. Xilinx has always been at the leading edge, looking at 28 nm, 20 nm and 16 nm, and now we have already started working on 10 nm. But it's becoming more and more expensive, and you have to look at the underlying architecture, at how you program the software on top of it, at having specialised execution units. So I think that for some time to come you will be able to get that improvement in total performance, but not from transistors per se.
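Two quick arithmetic checks on the numbers Evensen mentions, under the textbook simplification that transistor density scales with the inverse square of the feature size (real process nodes deviate from this):

```python
# Density gain between the process nodes named above, assuming the
# simplification density ~ 1 / (feature size)^2 (real nodes deviate).
nodes_nm = [28, 20, 16, 10]
for prev, nxt in zip(nodes_nm, nodes_nm[1:]):
    print(f"{prev} nm -> {nxt} nm: ~{(prev / nxt) ** 2:.1f}x density")

# Classic Moore's Law: doubling every two years compounds quickly.
years = 10
print(f"{years} years at 2x per 2 years: {2 ** (years / 2):.0f}x transistors")
```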

inspect: What will happen to industrial image processing if the "Internet of Things" - also called the Industrial Internet, or "Industrie 4.0" in Germany - takes over the factory?

P. Altpeter: In my opinion, the Internet of Things will help to control all the relevant data concerning an image processing system. First, there is the production data on the shop floor: normally you have to capture an object or batch and identify it in the camera system for analysis, and this data will come to the camera system from the Internet of Things. The second aspect is parameters and limits. Today the parameter and limit files are often found directly inside the camera; with a secure connection to the Internet of Things, it is quite feasible that camera systems download their parameters or even the program itself - something that is often hard-coded in the system at the moment. After processing, there is often measurement data or there are images to store, and with the Internet of Things it is possible to store all that data back in the system. None of this is entirely futuristic - much is already done today, such as gathering that data from the PLC; writing measurement data back to a database is also common. But there are currently many limitations with regard to images. Large images cannot easily be stored on a server for long periods - up to ten years in the medical sector - so in the future I think all these parameters will be handled by a centralised ERP system, or maybe a decentralised system in the Cloud.
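A minimal sketch of the parameter round-trip Altpeter describes: the camera fetches its parameter set from a central service and writes measurement data back after processing. The URL, endpoint paths, camera ID and JSON fields are all hypothetical; this only illustrates the data flow, not any real product's API:

```python
import json

import requests  # pip install requests

# Hypothetical endpoints of a central parameter service - the URL,
# paths and camera ID are invented for illustration only.
BASE_URL = "https://factory.example.com/api/cameras"
CAMERA_ID = "line3-cam01"

def fetch_parameters(camera_id: str) -> dict:
    """Download the current parameter/limit set for one camera."""
    resp = requests.get(f"{BASE_URL}/{camera_id}/parameters", timeout=5)
    resp.raise_for_status()
    return resp.json()

def report_measurement(camera_id: str, result: dict) -> None:
    """Write measurement data back to the central system after processing."""
    requests.post(f"{BASE_URL}/{camera_id}/measurements",
                  data=json.dumps(result),
                  headers={"Content-Type": "application/json"},
                  timeout=5)

# Usage (against a real service):
# params = fetch_parameters(CAMERA_ID)
# report_measurement(CAMERA_ID, {"diameter_mm": 12.04, "pass": True})
```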

D. Ley: We think that the Internet of Things is more about enabling a manufacturing line to produce customised products - to customise from one item to the next. If I look back at the history of Basler and inspection machines, most of these machines were engineered for a standard task, where the inspection task was always the same. We think that in the future, if a kind of mass customisation takes place and one product looks different from the next, the vision solution will need to be much more flexible: configurations, illumination, areas to be inspected and surface parameters all need to be changeable. So the system needs to be much more flexible than in the past to deal with all these potential changes in the object under test. That brings many challenges, on the hardware side - to provide the right illumination and the right field of view - but also on the software side. Maybe we will be exchanging algorithms on the fly, to make sure that the appropriate inspection function is available in the system exactly when it is needed. This requires much more advanced vision technology, and that is good news, because it requires new development.
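Exchanging algorithms on the fly, as Ley suggests, is commonly handled with a registry or strategy pattern: each product variant maps to its own inspection routine, selected per item. A minimal Python sketch, with invented variant names and deliberately trivial checks:

```python
from typing import Callable, Dict

import numpy as np

# Registry mapping product variants to inspection routines, so the
# algorithm can be swapped per item (all names are illustrative).
INSPECTORS: Dict[str, Callable[[np.ndarray], bool]] = {}

def register(variant: str):
    def deco(fn):
        INSPECTORS[variant] = fn
        return fn
    return deco

@register("variant_a")
def check_a(img: np.ndarray) -> bool:
    return img.mean() > 100        # stand-in for a surface check

@register("variant_b")
def check_b(img: np.ndarray) -> bool:
    return img.std() < 30          # stand-in for a texture check

def inspect_item(img: np.ndarray, variant: str) -> bool:
    """Pick the right algorithm for the product currently on the line."""
    return INSPECTORS[variant](img)

frame = np.full((480, 640), 120, dtype=np.uint8)
print(inspect_item(frame, "variant_a"), inspect_item(frame, "variant_b"))
```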

inspect: The various components of vision solutions in the Industrial Internet will have to interact intelligently in the network. Will machine vision systems such as those used today soon disappear, and what will software solutions for industrial image processing look like in the future?

K.-H. Noffz: I definitely don't think such systems will disappear. I do think the ratio between hardware and software will shift further to the software side, and that's a valid trend. I would expect system integrators to have to handle much more complex and integrated software systems, and to be much more integrated into the whole production process - so as to interact with the complete production; maybe this is where the Cloud comes in again. On the other hand, because devices are becoming smarter and more intelligent, we will have much more capability and flexibility in the devices. From experience in our own business, we are able to change the hardware very flexibly via reprogrammable chips inside the devices - inside the cameras - providing completely different image processing functions, adapting to a different lighting environment and controlling the sensor very differently.

inspect: At some point the Cloud will be a component of really big solutions. But the Cloud itself is nothing more than a very large piece of hardware that is firmly anchored somewhere on earth. This hardware, however, will be less directly under the users' control than a modern industrial PC should be. What will happen? Goodbye, data security! - Will the transparent factory enter the Cloud?

K. Borgwarth: The Cloud by itself is nothing new. If we look at industries like the semiconductor industry, a 'cloud' has been present there for 25 years: different departments gather their data, put it into one cloud and extract additional information from the combination of this data. Something similar is going on in the automotive industry - all within one company. The new aspect is that separate companies, different players in different organisations, are now to share data between them. That's a very interesting subject: how can this work? The interests are different, the players are different, and - that's the basic point of your question - you have to apply safety and security to the data. You may recently have seen a very prominent scandal about pictures and the Cloud: some celebrities uploaded very private photos into Apple's iCloud using a password like "1234" and then found them in the public Cloud. What you saw there can easily happen within companies as well. It's not just a small scandal in the newspaper; it's about securing the IP at the core of the companies - the root of their existence is at issue here. We have seen that the preconditions for "Industrie 4.0" and the Cloud are now slowly being put in place in companies, bringing firewalls into the production zones to make sure that the IP is really protected. I believe a lot of preconditions have to be established before "Industrie 4.0" can really become a mass phenomenon in production.

There are other sectors outside industry where you will already find examples of the Cloud. Let me give you one example: traffic control. When you have cameras monitoring the traffic on the roads, they can detect what kind of vehicle is travelling there, what is printed on the number plate and how many people are in the vehicle. In some countries, like Great Britain, these data are sent to the Cloud. Then you have different interested parties: for example, the road operator, who wants to collect tolls from the haulage companies passing by, so they look at the number plate and the category of the vehicle. You have other players, for example navigation software companies, who want the average speed of vehicles on the road at the current moment - a completely different question, and the data have to be split between trucks and passenger vehicles. Then maybe you have security agencies, the police, searching for a specific vehicle: is it currently on that road? You have one set of measurements and different parties, all accessing the same data and interpreting it at different times.
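The traffic example boils down to one shared data set read by several parties with different queries. A toy Python illustration - the record fields, plates and values are all fabricated:

```python
from dataclasses import dataclass

@dataclass
class Sighting:
    """One camera measurement (fields invented for illustration)."""
    plate: str
    category: str      # "truck" or "car"
    speed_kmh: float

# One shared data set in the cloud...
data = [Sighting("AB12CDE", "truck", 82.0),
        Sighting("XY98ZZZ", "car", 115.0),
        Sighting("AB12CDE", "truck", 79.0)]

# ...interpreted differently by each interested party:
toll_plates = {s.plate for s in data if s.category == "truck"}  # road operator
car_speeds = [s.speed_kmh for s in data if s.category == "car"]
avg_car_kmh = sum(car_speeds) / len(car_speeds)                 # navigation provider
wanted_seen = any(s.plate == "AB12CDE" for s in data)           # police query
print(toll_plates, avg_car_kmh, wanted_seen)
```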

There will be software companies - software as a service - extracting information from the raw data that is available. The key issue will be making sure that people accept the existence of such data - a question of privacy and habit - and it is also a question of the IT infrastructure: are the data really secure, or are they easily accessible to everybody, with unintended access possible?

inspect: If the Industrial Internet becomes reality, how can machine vision technology and enterprise resource planning be part of one and the same overall solution?

P. Altpeter: An efficient enterprise resource planning system maps all the processes of a business, and a very important part of that is quality control, online and offline. Most camera systems that do not merely serve as a kind of position detector can be seen as a quality-control instance, and they often divide the flow of material into two routes: accepted products and rejected products. So I think the integration of the camera into resource planning is almost mandatory. Even today, good/bad results from the camera flow back to the enterprise resource planning system or go into a database, one way or another. But I think in the future the enterprise resource planning system will take over the master role from the camera. Today the camera systems we build have large data storage of their own, for ring buffers of images or for measured data. In the future the ERP system could push the parameters and the control functions to the camera and take these data sets back from it. That's not a degradation of the camera system, but rather a return of the master function to where it belongs: the camera concentrates on acquiring and processing images, not on handling data or providing a nice GUI or something like that.
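The division of labour Altpeter sketches - the camera classifies, the ERP as master routes and keeps the records - can be pictured as a simple message flow. A hypothetical Python sketch (all names invented, with a plain in-process queue standing in for the real ERP connection):

```python
from dataclasses import dataclass
import queue

@dataclass
class Inspection:
    serial: str
    passed: bool

erp_inbox: "queue.Queue[Inspection]" = queue.Queue()

def camera_side(serial: str, measured_mm: float, limit_mm: float) -> None:
    """Camera applies the limit pushed down by the ERP and reports back."""
    erp_inbox.put(Inspection(serial, measured_mm <= limit_mm))

def erp_side() -> None:
    """ERP (the master) routes material into the two flows."""
    while not erp_inbox.empty():
        item = erp_inbox.get()
        route = "accepted" if item.passed else "rejected"
        print(f"{item.serial}: {route}")

camera_side("A-001", 11.98, limit_mm=12.00)
camera_side("A-002", 12.10, limit_mm=12.00)
erp_side()
```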

inspect: Which players in the machine vision market will benefit from new technologies and which will lose out? And will there be new players in the machine vision market?

D. Ley: Well, I think there are always big opportunities associated with change, and I'm personally convinced that those companies who are eager to organise and drive the change will be the ones who benefit most from this change process. We at Basler have one firm belief: if there is a possibility of making things better, leaner or more affordable, this change will happen sooner or later. So the only question is: who is organising the change, and how long exactly will it take? I guess that especially those companies who embrace this technology and these new possibilities and implement them in real-world products will benefit from the new times ahead of us.

inspect: A well-known IR camera manufacturer recently launched a module on the market which, together with an iPhone 5, constitutes a decent IR camera system accessible to all. This represents the first fusion of a previously professional product with a consumer product, and we should take it seriously. Will there be more to follow? What opportunities do the technological developments mentioned here offer for new business models?

K.-H. Noffz: The business models will definitely become more varied, more complex and more interesting. Around ten years ago our business model was very simple - sell hardware at the list price - and that was straightforward. But now, with our software for example, we have products that run on third-party hardware. That is a completely different situation: software that runs in someone else's camera, where you don't sell the hardware even though there is a lot of value in it. So you move to some kind of run-time licensing and contracting, which is completely different. And now we have customers using our software in a third-party camera to develop highly advanced algorithms and applications that can be used throughout a complete industry - so they themselves are interested in being part of the business model and selling to third parties. Then the next question arises: how can you technically enable something for your customers, in your own software, on a third-party device? You can see how complex the situation can become, because embedded devices require ever more players working towards a common goal; there is no longer one vendor providing the hardware, the software and everything else. The business models become more complex, but in my experience you have a lot of new opportunities if you have good ideas, and I believe we are really at the beginning of a new trend - the future has just started, so let's use the opportunities.

T. Evensen: Things are definitely changing pretty fast. We, of course, produce hardware, but it's programmable hardware. Traditionally, people program it with hardware description languages; now more software programmers are getting access to it. In the software world a lot has happened outside of machine vision, and I think it's going to come in here as well: things like Open Source, and the whole app way of thinking that we have from Apple. If you start thinking about Open Source and writing applications - apps - it's suddenly a different mindset. It becomes less proprietary; it becomes more about working together and finding solutions that are already out there, sometimes even for free. So the question becomes: how do you put together a lot of complex software and make it work with the hardware? I think the integration part will get harder and harder to solve, while the algorithms themselves increasingly become Open Source.

D. Ley: At Basler, too, we see a lot of new opportunities coming. Having been almost completely focused on industrial customers ten years ago, we now find most of our new customers outside the typical factory floor or automation environment. Many of our customers use these cameras in applications that are closer to the end user - say, reverse vending machines, ticketing machines and similar applications. That means the end customer is changing from a typical engineer to someone who is more interested in getting visual data delivered in the easiest possible fashion. This whole area of 'prosumer' - professional consumer - applications is getting closer to end users who are no longer just looking for the factory application. This is again a big, big opportunity for the companies that can make use of the almost universal applicability of our technology. So, looking ten years down the road, what will this vision show have to offer? Certainly products that are closer to the end user and go into applications we were not even thinking of ten years ago.
K. Borgwarth: We observe a very interesting phenomenon: machine vision is moving out of the very well-defined conditions of automation and getting much closer to people. In Silicon Valley you can see a lot of start-ups devoted to this subject - subtracting the background, being more flexible about the background, working under arbitrary conditions while still observing the right object and achieving the same quality of results, without defining the background, without going into the lab: getting much closer to reality. For us this is welcome, because it also requires more computing power. This is what we have observed in the first small companies growing over the past few years. You can also see examples of this in real life - Mercedes has brought it into series production, so that a car driving through an arbitrary landscape is able to detect other human beings: these pixels belong together, "oh, this is a cat, this is a tree", and so on. The second point is that it gets closer to what people really feel. It's not about so many millimetres or a number of pixels; it's more: what is the situation like? It's getting more personal, so machine vision really has a great future. The basic technology is present and well established, and now it's moving increasingly into other applications in the field, not just automation.
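The background subtraction these start-ups are chasing has a classic baseline in OpenCV's MOG2 model, which learns the scene and flags pixels that deviate from it. A minimal sketch, with synthetic frames standing in for a live stream:

```python
import numpy as np
import cv2  # OpenCV; pip install opencv-python

# MOG2 learns a statistical model of the background over time
subtractor = cv2.createBackgroundSubtractorMOG2(history=100)

background = np.full((240, 320), 50, dtype=np.uint8)
for _ in range(30):                      # let the model learn the scene
    subtractor.apply(background)

scene = background.copy()
scene[100:140, 150:200] = 200            # an "object" enters the scene
mask = subtractor.apply(scene)           # foreground pixels light up
print(f"foreground pixels: {int((mask > 0).sum())}")
```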

P. Altpeter: I believe what we talked about before is going in two directions. On the one hand, consumer products will emerge or be used for image processing purposes; on the other hand, it's going back to very efficient hardware for processing large images. So I think it's a great chance - for the customers as well - to get good prices for affordable systems derived from consumer products. And I think it's a chance for system integrators and for those who build specialised hardware, because with all these new trends you always need someone who helps you.

Contact

Basler AG

An der Strusbek 60-62
22926 Ahrensburg
Germany

+49 4102 463 500
+49 4102 463 109

Silicon Software GmbH

Konrad-Zuse-Ring 28
68163 Mannheim
Germany

+49 621 789507 0
+49 621 789507 10

Heitec AG

Brunnenstr. 36
74564 Crailsheim
Germany

+49 7951 936623
+49 7951 936666

Pyramid Computer GmbH

Bötzinger Str. 60
79111 Freiburg
Germany

+49 761 4514 0
+49 761 4514 700
