We believe that artificial intelligence (AI) is truly destined to augment and enhance the human experience. AI can bring the best thinking of the best teachers to distant schools; it can automate routine medical tasks, freeing clinicians to focus on complex diagnoses and patient care; it can ensure consistency and rationality in hiring decisions, bringing opportunity to those who might otherwise be overlooked.
To businesses, AI may understandably appear chaotic right now; developing, deploying and managing a responsible and helpful AI program looks daunting. But while the technology is new and complex, it is not so different from other technologies that started as limited research projects and are now foundational elements of modern life: electronic computing itself and the World Wide Web.
By examining the maturation of computing and the web, organizations can identify obstacles to AI's development and devise ways to overcome them. We have identified four phases that describe how technology breakthroughs reach their end goal: standardization, usability, consumerization and foundationalization.
Figure 1 defines each phase and notes actions that encourage them. To learn more about these phases, read our white paper, “The AI Path: Past, Present and Future.” We believe that if AI is developed in the intentional, guided manner we propose, then the technology will assume a foundational and beneficial position — similar to the way that computing and the web play an integrated role in our personal and professional lives today.
While AI as a concept and as a set of computing technologies dates to 1950, when pioneer Alan Turing devised his famous Turing test, the field is still in its early days, similar to computing in the late 1970s or the web in the mid-1990s. Examples of AI success exist, but they tend to be somewhat disjointed and opaque, understandable only to experts. With careful stewardship, that won't be the case forever. The following lays out how we believe AI will progress.
As with open architecture for PCs and HTML for the web, open standards for AI systems are an absolute requirement. Such standards will enable interoperable systems that build on the successes of others. For instance, it will be possible to connect a language generation system to a vision system, a translation system, and finally a speech generation system to output the result. Standardization will make it possible to swap modules of a system in and out, such as replacing one speech recognition tool with another.
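The module-swapping idea can be sketched in code. In this hypothetical Python sketch (the interface, class and function names are illustrative, not part of any real standard), two speech recognition tools conform to one shared interface, so either can feed the rest of a pipeline without any other changes:

```python
from typing import Protocol


class SpeechRecognizer(Protocol):
    """Hypothetical standard interface: any conforming tool can be swapped in."""
    def transcribe(self, audio: bytes) -> str: ...


class VendorARecognizer:
    def transcribe(self, audio: bytes) -> str:
        # Placeholder for vendor A's actual recognition logic.
        return "hello world"


class VendorBRecognizer:
    def transcribe(self, audio: bytes) -> str:
        # Placeholder for vendor B's actual recognition logic.
        return "hello world"


def build_pipeline(recognizer: SpeechRecognizer):
    """Downstream stages depend only on the shared interface,
    never on a particular vendor's implementation."""
    def run(audio: bytes) -> str:
        text = recognizer.transcribe(audio)
        return text.upper()  # stand-in for later stages (translation, speech output)
    return run


# Swapping one recognizer for another requires no other changes:
pipeline = build_pipeline(VendorARecognizer())
print(pipeline(b"raw audio"))  # prints "HELLO WORLD"
pipeline = build_pipeline(VendorBRecognizer())
```

The point of the sketch is that standardization moves the coupling from a specific product to a published interface, which is what makes interoperability and substitution cheap.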
Another important aspect of standardization is the ability to know when an AI decision or the data feeding the system is trustworthy. That is, what standards define whether AI is performing intelligently? Today, AI lacks markers of transparency and trust; there is no AI equivalent of the U.S. Consumer Credit Protection Act of 1968 (or similar legislation in other countries). In the absence of such standards, we should not halt AI’s development or deployment. Rather, we should seek to establish standards of trust and transparency, working toward a common standard certifying that the behavior of a given AI is fair and unbiased.
Just as the web made computing and information accessible to nonexperts through style sheets and website authoring tools, AI needs interfaces that make the technology usable by laypeople. One important lesson comes from the browser wars (remember them?) of the late 1990s. Rapid development and innovation characterized the initial phase of web browsers, but Microsoft then gained a dominant position by bundling Internet Explorer into Windows, in essence eliminating the need to seek alternatives. As a result, innovation slowed until the antitrust case against Microsoft was resolved, the browser wars reignited, and mobile computing provided new impetus for progress and new channels for user interfaces.
For AI to grow and prosper in a larger market context, open competition and innovation must be ensured. It should not be possible for one player to force adoption of its AI technology simply because that organization dominates a portion of the IT space. Standards help in this regard as well, making it possible for emerging technological approaches to interoperate with existing ones, instead of making various forms of AI incompatible. The result will be open innovation in creating AI that will be useful to the general population.
While most AI development is now in the usability phase, there are some clear successes in consumerization. Just as iPhone and Android smartphones made computing nearly ubiquitous and Web 2.0 made it possible for anyone to contribute to the web, AI consumerization will allow non-specialists to build applications for their specific needs and for general consumption. This will lead to mass production of AI-based systems by the general public as people routinely produce, configure, teach and engage systems for different purposes and domains. Typical examples might be intelligent assistants that manage an individual’s everyday activities, finances and health — but we also envision AI systems that design interiors, gardens and clothing; maintain buildings, appliances and vehicles; and interact with other people and their AIs.
To push toward consumerization, we advocate the creation of AI guidelines, and we note once more that a lack of standards in conveying trust will be a drag on AI’s adoption in business and personal applications. Industry and government should jointly engage consumers in building guidelines that companies can use to signal their compliance with reasonable and customary behavior.
Computing has become so ubiquitous as to be nearly invisible. The web has emerged as humankind’s primary means of interaction. Consequently, the amount of data and the complexity of our digital lives and business interactions are exceeding our ability to process them in a timely, economical way. AI can and, we believe, will fill this gap by bringing the interpretation and experience layer to the foundational status of computing and the web. AI will routinely augment or run business operations and will optimize government policies, transportation, agriculture and healthcare.
All this does not mean that human decision-making will be replaced by machines. Rather, it means human decision-making will be augmented and empowered by machines. AI will not be limited to prediction; it will also prescribe the decisions needed to achieve given objectives.
This foundationalization implies a deep integration of AI into daily business and personal life. AI will begin prescribing courses of action to achieve a given outcome. To do so, it will factor in historical examples, cognitive reasoning, and real-time and near-real-time assessment of the current environment. For instance, an enterprise may decide to maximize productivity and growth, but at the same time to minimize cost; cut its environmental impact; and promote equal access and diversity. AI can then be directed to balance these apparently conflicting objectives. Decisions on the desired outcome and the weight placed on different components (price vs. quality, for example) must be made directly by business and society leaders.
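The balancing of conflicting objectives described above can be illustrated with a minimal sketch. The objective names, scores and weights below are entirely hypothetical; the key point, as in the text, is that people choose the weights and the system merely applies them:

```python
def weighted_score(scores: dict, weights: dict) -> float:
    """Combine normalized objective scores (0..1, higher is better)
    into a single figure of merit using leader-chosen weights."""
    total = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total


# Two hypothetical courses of action, scored against four objectives
# (cost and environment are expressed so that higher means better,
# i.e. lower spending and lower impact score higher):
plan_a = {"productivity": 0.9, "cost": 0.4, "environment": 0.5, "diversity": 0.6}
plan_b = {"productivity": 0.7, "cost": 0.8, "environment": 0.8, "diversity": 0.7}

# Leadership, not the AI, decides to weight cost and environment heavily:
weights = {"productivity": 1.0, "cost": 2.0, "environment": 2.0, "diversity": 1.0}

best = max([("plan_a", plan_a), ("plan_b", plan_b)],
           key=lambda p: weighted_score(p[1], weights))
print(best[0])  # prints "plan_b"
```

Changing the weights (say, prioritizing productivity over cost) can flip the recommendation, which is exactly why the text insists that the desired outcome and the weighting remain decisions for business and society leaders.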
Given the progression described here, we believe that AI can and will be developed by humans in service of humans. It can eventually power much of society’s infrastructure, but it will only get there through the phases outlined above.
No winning idea ever came to fruition instantly and completely, without social and business support. AI will be no different. The breakthroughs that came with distributed computing and the web share common attributes, and AI is following that path. We recommend embracing the technology in personal and business decisions, recognizing that only AI can bring the computing power and insight needed to seize the nearly limitless opportunities ahead.