Lately, a good deal has been written about computer vision, so it might make sense to review some of its aspects and why everyone is writing about it.
First, it’s about sensors.
The point to remember is that sensors are not like the human eye and brain. Each sensor has its own advantages and drawbacks, so combining several of them lets a system offset the weaknesses of one with the strengths of another.
For instance, a laser scanner will only detect the first obstacle in its line of sight, and an odometer will report bad measurements if the road is slippery.
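To make that concrete, here is a minimal sketch of sensor fusion: two noisy readings of the same distance combined by inverse-variance weighting. The sensor names, readings, and variances are hypothetical, chosen only to illustrate the idea.

```python
# Minimal inverse-variance fusion of two noisy range readings.
# A sketch only: the sensors, readings, and variances are hypothetical.

def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Combine two independent measurements of the same distance.

    Weights each reading by the inverse of its noise variance, so the
    fused estimate leans on whichever sensor is more trustworthy.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)          # always <= min(var1, var2)
    return fused, fused_var

# Example: a laser scanner is precise but blocked by the first obstacle;
# an ultrasonic sensor is noisier but fails in different situations.
laser_m, laser_var = 2.05, 0.01   # metres, variance
sonar_m, sonar_var = 2.30, 0.25

estimate, uncertainty = fuse(laser_m, laser_var, sonar_m, sonar_var)
print(f"fused distance: {estimate:.2f} m (variance {uncertainty:.3f})")
```

The fused variance is always smaller than either sensor's own variance, which is the quantitative version of "combine advantages, reduce drawbacks".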
Computer Vision (CV), in a sense, needs to be taught to view the world the way humans do, within the limits of its sensors (its senses, if you like). How can that be done?
Images are recorded by input devices such as CCD or CMOS sensors, and accuracy depends on making these devices work as a self-checking system. A whole scanned scene can be built up from a sequence or set of images. Computer Vision fundamentally deals with light and its interaction with surfaces, so optics plays a central role in understanding a scene. Lenses, cameras, depth of field, focusing, binocular vision, sensor sensitivity, exposure time, and other concepts from optics and photography are all relevant if Computer Vision is to interpret a scene correctly.
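As a small illustration of those optics concepts, here is a sketch of the pinhole camera model, which maps a 3D point onto the image plane. The focal length and image size below are made up, not taken from any real camera.

```python
# A minimal pinhole-camera sketch: projecting a 3D point (in camera
# coordinates, metres) onto the image plane. The intrinsic parameters
# below are hypothetical placeholders.

def project(point_xyz, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Map a 3D point (X, Y, Z) to pixel coordinates (u, v).

    fx, fy: focal lengths in pixels; cx, cy: principal point
    (here the centre of a 640x480 image).
    """
    x, y, z = point_xyz
    if z <= 0:
        raise ValueError("point must be in front of the camera (Z > 0)")
    u = fx * x / z + cx   # perspective divide: farther points land
    v = fy * y / z + cy   # closer to the image centre
    return u, v

# The same physical offset shrinks on the sensor as depth grows:
print(project((0.5, 0.0, 2.0)))   # (520.0, 240.0)
print(project((0.5, 0.0, 4.0)))   # (420.0, 240.0) - half the offset
```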
Because of this, one can describe Computer Vision as the pursuit of emulating human vision. Here is the difference: to us, vision is a given, but in reality we are processing around 60 images per second, each with millions of points (pixels). In fact, over half the human brain stays engaged in processing visual information.
Added to this, only the photoreceptors in the middle of the eye are sensitive to colour, and there is a large blind spot in the retina where the optic nerve connects, yet somehow we perceive a complete image. Recreating this is quite a challenge; clearly there is more going on than meets the eye. While the ultimate goal of emulating human vision is still a long way off, Computer Vision is steadily being applied to more complex applications.
Let’s start with a simple one: vacuuming your living room. A Roomba can bump into walls and avoid falling down the stairs; you could call it near-sighted Computer Vision. But if we want to build interactive systems and sophisticated robots that can do something more engrossing, like play a challenging game with you, then clearly we need more than simple sensors.
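For a sense of how little "vision" such a robot needs, here is a toy reactive control loop driven only by bump and cliff sensors. The sensor interface is hypothetical and is not the actual Roomba API.

```python
# A toy sketch of "near-sighted" sensing: a reactive loop that acts on
# raw bump and cliff bits alone, with no model of the room.
# Hypothetical interface, not a real robot API.

import random

def step(bumped: bool, cliff_ahead: bool) -> str:
    """Pick the next action from the current sensor readings alone."""
    if cliff_ahead:
        return "reverse"                      # never drive off the stairs
    if bumped:
        return random.choice(["turn_left", "turn_right"])
    return "forward"

# Simulate a few control ticks with made-up sensor readings.
for bumped, cliff in [(False, False), (True, False), (False, True)]:
    print(step(bumped, cliff))
```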
In response to this, computer scientists have had to change how they teach the systems they build: instead of hand-coding rules, they feed the system massive quantities of training data. This is why Computer Vision is so closely tied to data science, machine learning, artificial neural networks and rich training datasets.
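To show what "feeding the system training data" looks like at its simplest, here is a sketch of logistic regression trained by gradient descent on synthetic, flattened "images". Real systems use millions of labelled examples and deep networks, but the training loop has the same shape.

```python
# A minimal learning-from-data sketch: logistic regression on tiny
# synthetic "images", trained by batch gradient descent.
# All data below is fabricated for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 200 "images" of 16 pixels each. Class-1 images are
# brighter on average than class-0 ones - a stand-in for a real signal.
n, d = 200, 16
labels = rng.integers(0, 2, size=n)
images = rng.normal(loc=labels[:, None] * 0.8, scale=1.0, size=(n, d))

w = np.zeros(d)   # one weight per pixel
b = 0.0
lr = 0.1

for epoch in range(100):
    # Forward pass: predicted probability of class 1.
    p = 1.0 / (1.0 + np.exp(-(images @ w + b)))
    # Gradient of the cross-entropy loss, averaged over the batch.
    grad = p - labels
    w -= lr * (images.T @ grad) / n
    b -= lr * grad.mean()

# Evaluate on the training set with the final weights.
p = 1.0 / (1.0 + np.exp(-(images @ w + b)))
accuracy = ((p > 0.5) == labels).mean()
print(f"training accuracy: {accuracy:.2f}")
```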
With these components in place, Computer Vision is moving into the medical and biomedical sector. Medical imaging is a hard task because false positives and false negatives both carry serious consequences, yet many successful applications have been built in this sector and promising research continues.
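Because those two error types matter so much, medical imaging systems are judged by metrics that separate them. Here is a short sketch computing sensitivity, specificity, and precision from a confusion matrix; the counts below are made up.

```python
# Why false positives and false negatives are weighed separately.
# The screening counts here are hypothetical.

def rates(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard screening metrics from a confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of real cases caught
        "specificity": tn / (tn + fp),  # fraction of healthy cleared
        "precision":   tp / (tp + fp),  # trust in a positive call
    }

# Hypothetical screening run: 1000 scans, 50 true cases.
print(rates(tp=45, fp=30, fn=5, tn=920))
# A missed tumour (fn) and a false alarm (fp) carry very different
# costs, so a single accuracy number hides what matters clinically.
```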
One reason is that costs drop drastically when Computer Vision is used: it is tireless, precise, and fast, and it can handle tedious tasks and work in uncomfortable environments. For the same reasons, Computer Vision can be applied in numerous other fields.
Considering all of that, we can say that Computer Vision is as limited, and as useful, as our eyes are in everyday life. We can see a not-too-distant future in which Computer Vision is woven seamlessly into the fabric of worldwide analytics, much like the telecommunications infrastructure of today.
We just need to look at ourselves to teach computers how to see for themselves.