Machine learning and artificial intelligence have been kicking around the lab for close to two decades, but real-life implementations are still few and far between.
In large part, that’s because businesses still suffer from a chronic shortage of raw data, and of the skills to interpret it effectively.
This hasn’t stopped companies from making serious investments in AI and automation. A recent survey of close to 4,000 IT leaders across 84 countries found that investment in AI and automation technologies is growing, helped along by the increase in available data and by improvements in compute and learning models. What lessons can other businesses take from these advances to drive their own AI and ML initiatives?
Data & deep learning
In the last year we’ve seen important progress in the development of data sets, hardware and software tools, and a culture of sharing and openness through conferences and websites like arXiv. Novices and non-experts have also benefited from easy-to-use, open source libraries for machine learning.
These open source ML libraries have levelled the playing field and have made it possible for non-expert developers to build interesting applications. It’s little wonder, then, that more companies are seizing the opportunity to build ML and AI into their systems and products.
Models are only one side of the coin, however. Many of the models we rely on, including deep learning and reinforcement learning, are data hungry. Because their products can scale to enormous numbers of users, the largest companies in the largest markets accumulate far more data than the rest of us, and that gives them an advantage. It’s the reason why we’re seeing so much cutting-edge research coming out of the large U.S. and Chinese companies.
In a sense, AI is providing a solution to its own challenge by enabling organisations to generate labelled data sets. By augmenting human labellers with machine learning tools, organisations can help their human workers scale, improve their accuracy, and make training data more affordable. In certain domains, new tools like generative adversarial networks (GANs) and simulation platforms are able to provide realistic synthetic data that can be used to train machine learning models.
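One way to picture how machine learning can help human labellers scale is a simple routing loop: a model proposes a label with a confidence score, confident predictions are accepted automatically, and only uncertain items go to a person. The sketch below is illustrative only — the function names and the toy scorer are hypothetical stand-ins, not any particular labelling tool’s API.

```python
# Minimal sketch of model-assisted labelling. A (hypothetical) scoring model
# proposes a label and a confidence; confident items are auto-labelled and
# uncertain ones are routed to human labellers.

def route_for_labelling(items, score_fn, threshold=0.9):
    """Split items into an auto-labelled list and a human-review queue."""
    auto, human = [], []
    for item in items:
        label, confidence = score_fn(item)
        if confidence >= threshold:
            auto.append((item, label))   # machine label accepted as-is
        else:
            human.append(item)           # low confidence: send to a person
    return auto, human

# Toy scorer standing in for a trained model: "positive" if text contains "good".
def toy_scorer(text):
    if "good" in text:
        return "positive", 0.95
    return "negative", 0.6               # unsure about everything else

auto, human = route_for_labelling(["good movie", "meh", "good food"], toy_scorer)
# Two items are auto-labelled; one is routed to a human for review.
```

The economics follow directly: the higher the model’s confidence on routine items, the smaller the fraction of the data set humans need to touch.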
Machine learning researchers are constantly exploring new algorithms. In the case of deep learning, this usually means trying new neural network architectures, refining parameters, or exploring new optimisation techniques. The challenge is that experiments can take a long time to complete. The cost of computation means researchers cannot casually run such long and complex experiments, even if they have the time.
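To make the cost pressure concrete, here is a hedged sketch of one common experimentation strategy, random search over hyperparameters under a fixed evaluation budget. The objective function below is a toy stand-in; in real research, each call to it could mean hours or days of training, which is exactly why the budget matters.

```python
import random

# Random hyperparameter search under a fixed compute budget: try `budget`
# random configurations and keep the best. A simplified illustration, not a
# full experiment-management system.

def random_search(objective, space, budget, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(budget):
        cfg = {name: rng.choice(choices) for name, choices in space.items()}
        score = objective(cfg)           # in practice: a long training run
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy objective: pretend validation accuracy peaks at lr=0.1, depth=3.
def toy_objective(cfg):
    return 1.0 - abs(cfg["lr"] - 0.1) - 0.1 * abs(cfg["depth"] - 3)

space = {"lr": [0.001, 0.01, 0.1, 1.0], "depth": [1, 2, 3, 4]}
best_cfg, best_score = random_search(toy_objective, space, budget=50)
```

Every extra hyperparameter multiplies the search space, so budgets like the one above are quickly exhausted — which is why faster hardware changes what research is feasible.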
Our industry is well aware of these issues. That’s why hardware companies, including our partner Intel, continue to release suites of hardware products for AI (including compute, memory, host bandwidth, and I/O bandwidth). The demand is so great that other companies are beginning to jump into the fray.
Many new companies are working to develop specialised hardware, including data centre-specific machines, where the task of training large models using large data sets usually takes place. We are also entering an age where billions of edge devices will be expected to perform inference tasks, like image recognition. Hardware for these edge devices needs to be energy efficient and reasonably priced.
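One reason edge inference can be made cheap is that models tolerate reduced numeric precision. The sketch below shows a deliberately simplified symmetric weight-quantisation scheme — 32-bit floats mapped to 8-bit integers plus a scale factor, shrinking storage roughly 4x. It is an illustration of the idea, not a production recipe.

```python
# Post-training weight quantisation, simplified: map float weights to int8
# values plus a single per-tensor scale, then recover approximate floats at
# inference time. Real schemes add per-channel scales, zero points, etc.

def quantise(weights):
    """Map float weights to int8-range integers plus a scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantise(q, scale):
    """Recover approximate float weights."""
    return [v * scale for v in q]

weights = [0.50, -1.27, 0.03, 0.90]
q, scale = quantise(weights)
approx = dequantise(q, scale)
# Each recovered weight is within one quantisation step of the original.
assert all(abs(a - w) <= scale for a, w in zip(approx, weights))
```

Smaller integer weights mean less memory traffic and cheaper arithmetic, which is precisely the energy-and-price trade-off edge hardware is designed around.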
Taking a cautious approach
We’ve talked about data, models, and compute mainly in the context of traditional performance measures: namely, optimising machine learning or even business metrics. The reality is that there are many other considerations. For example, in certain domains (including health and finance) systems need to be easily explainable. Other aspects, including fairness, privacy and security, and reliability and safety, are also critical considerations as ML and AI are deployed more widely. Our research shows that this is a real concern for companies.
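To give a flavour of what explainability can look like in practice, here is a hedged sketch of one simple, widely used technique: permutation importance. Shuffle one input feature at a time and measure how much the model’s accuracy drops; the features whose shuffling hurts most are the ones the model actually relies on. The model and data below are toy stand-ins.

```python
import random

# Permutation importance, simplified: shuffle each feature column in turn
# and record the drop in accuracy. Larger drops mean the model depends more
# heavily on that feature.

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, seed=0):
    rng = random.Random(seed)
    base = accuracy(model, rows, labels)
    importances = []
    for i in range(len(rows[0])):
        col = [r[i] for r in rows]
        rng.shuffle(col)                               # break the feature's link to the label
        perturbed = [r[:i] + (v,) + r[i + 1:] for r, v in zip(rows, col)]
        importances.append(base - accuracy(model, perturbed, labels))
    return importances

# Toy model that only looks at feature 0; feature 1 is ignored.
model = lambda row: int(row[0] > 0)
rows = [(1, 5), (-1, 5), (2, -3), (-2, -3)] * 10
labels = [int(r[0] > 0) for r in rows]
imps = permutation_importance(model, rows, labels)
# Shuffling feature 0 hurts accuracy; shuffling feature 1 changes nothing.
```

A report like this is no substitute for domain review in health or finance, but it is the kind of artefact regulators and stakeholders can actually interrogate.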
When planning their machine learning and AI initiatives, organisations also need to consider reliability and safety. While we can start building computer vision applications today, we need to remember that they can be brittle. In certain domains, we will need to understand the safety implications and prioritise reliability over the efficiency gains automation provides. It’s understandable why a business would want to be first to market, but if an application isn’t ready for use in the real world, the consequences can be damaging.
Caution shouldn’t compromise ambition, though. What’s important is to take a structured, staged, and strategic approach to developing safe, explainable, fair, and secure AI applications. That’s the best way to move ML and AI out of the lab and into the world.
About the Author
Ben Lorica is the Chief Data Scientist at O’Reilly Media, Inc. and is the Program Director of both the Strata Data Conference and the Artificial Intelligence Conference. He has applied Business Intelligence, Data Mining, Machine Learning and Statistical Analysis in a variety of settings including Direct Marketing, Consumer and Market Research, Targeted Advertising, Text Mining, and Financial Engineering. His background includes stints with an investment management company, internet startups, and financial services.