Artificial intelligence (AI) is making its presence felt in every aspect of our daily lives—from the new breed of virtual assistants in our homes, to the spam filters that keep unwanted emails out of our inboxes.
As AI algorithms—and the computing power that drives them—improve year-on-year, their ability to positively transform the world in which we live is unquestionable. In fact, PwC predicts that AI could contribute up to $15.7 trillion to the global economy by 2030.
Indeed, as many as one in five (20 percent) of the 1,000 US organisations recently surveyed by PwC had plans to implement AI enterprise-wide in 2019. The PwC research also reveals how companies are increasingly initiating AI models at the very core of their production processes, in a bid to enhance operational decision-making and provide forward-looking intelligence to people in every function throughout the business.
To many, this move to AI is no surprise. After all, robots have been used for years in many manufacturing disciplines, so the progression to AI seems like a logical next step. Either way, there is no doubt that the future is one in which machines and humans will work alongside one another with increasing regularity.
AI is big business—US venture capital investment in the sector reached $6.6 billion in the first three quarters of 2018, compared to $3.9 billion in the same period the year before. Meanwhile, AI companies have become attractive takeover targets, with the number acquired outright reaching a record 35 companies, with a combined value of $3.8 billion.
Despite this positive outlook, some unanswered questions remain. Concerns continue to grow about the impact of AI on privacy, cybersecurity, employment, social inequality, and the environment. Customers, employees, boards, regulators and corporate partners are all asking the same question: can we trust AI?
The trust element
As AI in the marketplace increasingly becomes controlled by just a handful of big companies that own cloud-based AI platforms and APIs, the issue of trust is stimulating growing calls for the decentralisation of AI. The main fear for manufacturers is that a centralised model will lead to the monopolisation of the AI market. This in turn could cause unfair pricing and stifle innovation.
Decentralised AI—born at the intersection of blockchain, on-device AI, and the Internet of Things (IoT)—helps solve this challenge and promotes transparency. It also ensures interoperability and encourages innovation among an unlimited number of other AI companies. Ecosystems such as SingularityNET are already fostering wider collaboration among the global decentralised AI community—a case of safety in numbers, if you will. More than that, such marketplaces have been designed to ensure that—in the event of AI reaching mass market usage—contributors and users of the technology will be the ones to control it, rather than a few powerful entities. It is very much a meeting of minds for the common good that has close similarities to what Sir Tim Berners-Lee initially wanted the Internet to become, before it took a turn to the dark side.
Promoting interoperability and decentralised AI will ultimately lead to an era of AGI (artificial general intelligence) that will empower manufacturers—for example, by helping them detect anomalies and generate predictions that can feed enterprise resource planning (ERP) systems and improve their processes in the future.
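To make the anomaly-detection idea concrete, here is a minimal, hypothetical sketch of the kind of check a manufacturer might run over machine-sensor readings. The function name, data, and threshold are illustrative assumptions, not any specific vendor's API.

```python
# Hypothetical sketch: flag anomalous sensor readings with a simple
# z-score test against the series mean -- an illustration only, not a
# production-grade or vendor-specific anomaly detector.
from statistics import mean, stdev

def find_anomalies(readings, threshold=2.0):
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    mu = mean(readings)
    sigma = stdev(readings)
    if sigma == 0:
        return []  # a perfectly flat series has no outliers
    return [(i, x) for i, x in enumerate(readings)
            if abs(x - mu) / sigma > threshold]

# Example: a stable temperature series with one suspicious spike.
temps = [70.1, 70.3, 69.9, 70.2, 70.0, 95.0, 70.1, 69.8]
print(find_anomalies(temps))  # flags the 95.0 reading
```

In practice, flagged readings like these could be fed into an ERP system to trigger maintenance orders or adjust production schedules before a fault escalates.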
Emerging regulatory issues
Decentralised AI may become even more necessary for another reason. Stringent regulations around data privacy are already impacting AI and may limit its growth, due to the restrictions they place on cross-border data movement.
For example, last year’s General Data Protection Regulation (GDPR) in Europe and the imminent California Consumer Privacy Act (CCPA) give individuals the right to see and control how organisations collect and use their personal data. Both regulatory frameworks also impose heavy fines on organisations, should this data become compromised in any way. AI is not just an intelligence problem, it’s also a data problem. A decentralised AI ecosystem would help companies to keep siloed data repositories within geographic borders to ensure compliance, and respond quickly and easily to changes in regulations across territories.
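As a loose illustration of the geographic-silo idea, the sketch below routes each personal record to an in-memory store for its region of origin, so data never leaves the territory it was collected in. The region names, record shape, and stores are hypothetical assumptions, not a real compliance framework.

```python
# Hypothetical sketch: keep personal data in a region-local silo so it
# never crosses the border it was collected in. Illustrative only.
REGION_STORES = {"EU": [], "US": [], "APAC": []}

def store_record(record):
    """Persist a record only in the silo for its region of origin."""
    region = record.get("region")
    if region not in REGION_STORES:
        raise ValueError(f"No compliant silo configured for region: {region}")
    REGION_STORES[region].append(record)

store_record({"region": "EU", "user_id": "u-123", "consent": True})
store_record({"region": "US", "user_id": "u-456", "consent": True})
# EU records stay in the EU silo; US records stay in the US silo.
```

Keeping the routing rule in one place also makes it easier to respond to regulatory change: adding a new territory, or tightening an existing one, touches the configuration rather than every data pipeline.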
Maintaining growth and a competitive edge
Many businesses are already using AI to improve their operations and enhance the customer experience they deliver. The rate of adoption is set to accelerate in 2019, as business leaders realise the benefits of deploying AI throughout their organisations to create maximum value. This is particularly pertinent, since the power of AI is further amplified when integrated with other technologies, such as analytics, ERP, IoT, and blockchain.
The use of AI has already enabled companies to eliminate many historically repetitive and manual tasks across the supply chain. However, the notion of building centralised AI inhibits a potentially more organic approach that supports the natural processes of variation, competition, adaptation, and selection.
Decentralising AI will help address this challenging issue, fostering an environment in which the developer community can build innovative algorithms and solutions that enable manufacturers to grow and maintain—or even gain—a competitive edge.
About the Author
Andy Coussins is senior vice president and head of international at Epicor Software. Epicor Software Corporation provides industry-specific business software designed around the needs of manufacturing, distribution, retail, and services organizations. More than 45 years of experience with our customers’ unique business processes and operational requirements is built into every solution―in the cloud or on premises.