
Pushing the Boundaries of Computer Vision


At first glance, vision seems like it ought to be a fairly easy field for computers.

High-definition cameras can detect details even our eyes can’t, and fast processors should make interpreting images simple. However, computer vision has long been a hurdle for computer science, and we’re only now on the verge of making significant progress in some areas. We tend to underestimate just how much information our eyes and brains have to interpret to achieve human-like vision.

A recent report by Tractica predicts the computer vision hardware and software market will grow from $6.6 billion in 2015 to $48.6 billion annually by 2022. According to Anand Joshi, principal analyst at Tractica, “the computer vision market remains ripe for innovation and open to the emergence of new applications as well as new industry participants.”


We often think of our eyes and brain as separate entities, but they’re more connected than many realize. The brain not only processes what our eyes receive; it also fills in missing gaps and detects and interprets motion. Even with one eye closed, the brain can still perceive depth, thanks to the parallax effect and by comparing the relative sizes of objects. Optical illusions demonstrate just how complex the eye-brain interaction is: even when you know you’re looking at an optical illusion, the effect is impossible to ignore. All of this work is done in real time, and we don’t consciously perceive the effort. Because our brains do so much, and because our understanding of human vision is still limited, replicating human-like vision with computers remains out of reach. Instead, we need to develop new types of vision.
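The parallax cue can even be written down as a formula: in a simple two-viewpoint model, depth is inversely proportional to how far an object appears to shift between the views (Z = f·B/d). A minimal sketch of the idea, with made-up focal-length and viewpoint-spacing values:

```python
# Toy illustration of depth from parallax (two-viewpoint disparity).
# The focal length and baseline values below are hypothetical.
def depth_from_disparity(f_pixels, baseline_m, disparity_pixels):
    """Depth Z = f * B / d: nearby objects shift more between viewpoints."""
    return f_pixels * baseline_m / disparity_pixels

f = 700.0   # focal length in pixels (assumed)
B = 0.06    # spacing between viewpoints in metres (roughly human eye spacing)

near = depth_from_disparity(f, B, 42.0)   # large apparent shift -> close (~1 m)
far = depth_from_disparity(f, B, 4.2)     # small apparent shift -> distant (~10 m)
print(near, far)
```

The same relationship is why objects near your face jump around when you alternate eyes, while distant scenery barely moves.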

Human-Computer Interactions

Voice assistants have become incredibly popular in just a few short years, and their popularity has convinced many people that in-home robots will catch on as well. However, robots have long struggled with unfamiliar environments. Computer vision could help robots build a 3D map of an environment and interact with it more effectively. One of the major advances in computer vision has been facial recognition: security systems, in particular, can identify people almost instantly, even in a large crowd. Facial recognition lets robots interact with people in a more familiar manner, and for robots designed to assist the elderly, it could prove lifesaving.

Although augmented reality has occasionally been described as a bridge to true virtual reality, AR is in some ways actually harder to implement. Nevertheless, the technology has evolved rapidly in recent years, thanks in part to advances in computer vision. At the core of AR is a challenge relevant to other fields of computer vision: object recognition. Small variations in objects can prove challenging for image recognition software, and even a change in lighting can cause mismatches. Experts at Facebook and other companies have made tremendous progress through deep learning and other branches of artificial intelligence, and these advances could make AR and other vision applications that depend on object recognition far more capable in the coming years.
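The lighting problem is easy to see in miniature. In this illustrative sketch (toy pixel values, not a production matcher), a patch compared against a brighter copy of itself looks like a mismatch under naive pixel differencing, but matches almost perfectly once each patch is normalised to zero mean and unit variance:

```python
# A tiny grayscale "patch" and the same patch under brighter lighting.
template = [0.1, 0.4, 0.8, 0.3, 0.6, 0.2]
brighter = [1.5 * p + 0.2 for p in template]  # same scene, more light

# Naive matching: average absolute pixel difference suggests a mismatch.
naive_error = sum(abs(a - b) for a, b in zip(template, brighter)) / len(template)

def normalise(patch):
    """Rescale a patch to zero mean and unit variance, removing lighting."""
    mean = sum(patch) / len(patch)
    var = sum((p - mean) ** 2 for p in patch) / len(patch)
    return [(p - mean) / var ** 0.5 for p in patch]

# Normalised cross-correlation: ~1.0 means a near-perfect match.
ncc = sum(a * b for a, b in zip(normalise(template), normalise(brighter))) / len(template)
print(naive_error, ncc)  # large error naively, correlation ≈ 1.0 after normalising
```

Deep networks learn far richer invariances than this, but normalisation of one kind or another remains a standard first step.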

Another transformative use case is predicted to be agriculture. Agricultural science is charged with feeding the world, and computers have been making major strides in the field in recent years. Because farms are so large and often remote, image recognition lets individual farmers be far more effective. Computer vision capable of detecting fruit can help farmers track progress and determine the right time to harvest. Perhaps even more important, new robots armed with computer vision can detect weeds, letting farmers spray herbicides only where they’re needed, reducing both costs and environmental impact. The modern farm is a highly connected one, and computer vision will work with the Internet of Things and other technologies to achieve greater levels of efficiency.
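The simplest version of this kind of detection is a colour rule over (R, G, B) pixel values. Real agricultural systems use trained models, but this hedged sketch (all pixel values invented for illustration) shows the basic idea of labelling image regions by colour:

```python
# Hypothetical (R, G, B) samples from a field image.
field = [
    (40, 180, 30),   # healthy crop: strongly green
    (200, 60, 50),   # ripe fruit: strongly red
    (90, 110, 85),   # soil or weed: dull, low-contrast colour
]

def classify(pixel):
    """Label a pixel by its dominant colour channel (toy heuristic)."""
    r, g, b = pixel
    if g > 1.5 * r and g > 1.5 * b:
        return "crop"
    if r > 1.5 * g and r > 1.5 * b:
        return "fruit"
    return "other"

labels = [classify(p) for p in field]
print(labels)  # ['crop', 'fruit', 'other']
```

A real system would replace the hand-tuned thresholds with a classifier trained on labelled imagery, but the output — a per-region label that a sprayer or picker can act on — is the same shape.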

Google owner Alphabet hopes its DeepMind Health AI will one day be able to use computer vision to diagnose diseases. The company recently demonstrated the ability to predict eye diseases by analysing retinal scans of patients from Moorfields Eye Hospital in London. DeepMind’s Dominic King told the Financial Times he believes they’re “going to make really tremendous progress in the next couple of years” in advancing the technology.

Minority Report

Security cameras have transformed urban areas, and it’s rare for people in a city center to ever be off camera. While facial recognition software has been around for some time, computer vision algorithms are being developed to detect crimes as they occur. New AI tools that rely heavily on computer vision can perform certain crime analysis tasks nearly instantly, letting investigators spend less time going through photographs. Artificial intelligence can’t match humans in terms of pattern recognition, but it can detect obscure connections people might miss. Computer sleuths won’t replace humans any time soon, but computer vision will provide invaluable tools to detectives on the front line of solving crimes.

Technical advances rarely occur in isolation, and computer vision has been aided greatly by deep learning and other newer artificial intelligence technologies. Meanwhile, the low cost of high-quality cameras and the ubiquity of mobile devices have spurred tremendous interest in the field, and technologies dependent on computer vision, including self-driving cars, have led to huge increases in investment. What role computer vision will play in our daily lives is still unknown, but expect to interact with computers and robots capable of seeing in a somewhat human-like manner in the near future.
