This supplement is written from a contemporary perspective, emphasizing data-intensive technologies and big data. However important this focus may be, it has not yet been shown to address every problem in the field. A complete and balanced history of the field is beyond the scope of this document.
The history of artificial intelligence (AI) begins in antiquity, with myths and stories of artificial beings endowed with intelligence and consciousness by master craftspeople. The field of AI was formally born and named in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence, a workshop organized by John McCarthy. Its aim was to explore ways to build machines that simulate aspects of intelligence, the basic idea that has driven the field ever since. McCarthy is credited with first using the term "artificial intelligence" in the proposal he co-wrote with Marvin Minsky, Nathaniel Rochester, and Claude Shannon for the workshop. Many of the attendees went on to lead major AI research efforts, including Arthur Samuel, Oliver Selfridge, Ray Solomonoff, Allen Newell, and Herbert Simon.
Although the Dartmouth workshop created a unified identity and a dedicated research community for the field, many of the technical ideas that define AI are much older. In the 18th century, Thomas Bayes provided a framework for reasoning about the probability of events. In the 19th century, George Boole showed that logical reasoning, dating back to Aristotle, could be carried out systematically in much the same way as solving a system of equations. By the early 20th century, advances in experimental science had given rise to the field of statistics, which makes it possible to draw reliable conclusions from data. The idea of physically engineering a machine to execute a sequence of commands, which had captivated pioneers such as Charles Babbage, matured by the 1950s and led to the construction of the first electronic computers. The first robots capable of sensing and acting autonomously, however primitive, were also built during this period.
Among the most influential ideas underpinning computer science are those of Alan Turing, who proposed a formal model of computing. Turing's classic paper ``Computing Machinery and Intelligence'' imagined the possibility of computers that simulate intelligence, and explored ideas still current today, such as how intelligence might be tested and how machines might learn automatically. These ideas inspired artificial intelligence, but Turing lacked access to the computing resources needed to put them into practice.
Several distinct research foci emerged in the pursuit of AI from the 1950s through the 1970s. Newell and Simon pioneered heuristic search, an efficient procedure for finding solutions in large combinatorial spaces. In particular, they applied this idea to constructing proofs of mathematical theorems, first with the Logic Theorist program and then with the General Problem Solver. In computer vision, early work on character recognition by Selfridge and colleagues laid the foundation for more complex applications such as face recognition. Research on natural language processing also began in the late 1960s. The wheeled robot "Shakey", built at SRI International, pioneered the field of mobile robotics. Samuel's checkers-playing program, which improved itself through self-play, was one of the first working instances of a machine learning system. Rosenblatt's perceptron, a computational model based on biological neurons, became the foundation of the field of artificial neural networks. Feigenbaum and others advocated building expert systems, repositories of knowledge tailored to specialized domains such as chemistry and medical diagnosis.
These early advances were primarily conceptual, presupposing symbolic systems that could be reasoned about and built upon. Despite such promise across many aspects of artificial intelligence, however, the field had not achieved any major practical success by the 1980s. This gap between theory and practice arose in part because the AI community placed too little emphasis on physically grounded systems with direct access to environmental signals and data. There was also an overemphasis on Boolean (true/false) logic, which neglected the need to quantify uncertainty. By the mid-1980s these shortcomings were widely recognized, interest in AI waned, and funding dried up; Nilsson calls this period the "AI winter."
The field's resurgence in the 1990s was grounded in the view that "good old-fashioned AI" was insufficient as an end-to-end approach to building intelligent systems. Rather, intelligent systems needed to be built from the ground up, able at all times to solve the task at hand, albeit with varying levels of performance. Technological advances also facilitated the development of systems grounded in real-world data. Inexpensive and reliable sensing and actuation hardware made robots easier to build. Moreover, the Internet's capacity for gathering large amounts of data, together with the computing power and storage available to process it, enabled statistical techniques that, by design, derive solutions from data. As a result of these developments, AI has had a significant impact on our daily lives over the past two decades, as discussed in Section II.
The remainder of this section surveys the traditional subfields of AI. As explained in Section II, some of them are currently "hotter" than others for various reasons. This is neither to downplay the historical importance of the others, nor to suggest that they cannot become focal points again in the future.
Search and planning concern reasoning about goal-directed behavior. For example, in a chess program such as Deep Blue, search plays a key role in determining which moves (actions) are most likely to ultimately lead to victory (the goal). A minimal sketch of this idea follows.
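The sketch below illustrates minimax game-tree search, the family of techniques behind programs like Deep Blue. For brevity it uses a tiny invented two-player subtraction game rather than chess: players alternately take 1 or 2 stones from a pile, and whoever takes the last stone wins.

    # Minimax search over the subtraction game: explore all action sequences
    # and assume each player picks the move best for themselves.
    def minimax(stones, maximizing):
        """Value of the position for the maximizing player: +1 win, -1 loss."""
        if stones == 0:
            # The previous player took the last stone and won the game.
            return -1 if maximizing else 1
        values = [minimax(stones - take, not maximizing)
                  for take in (1, 2) if take <= stones]
        return max(values) if maximizing else min(values)

    def best_move(stones):
        """Choose the number of stones to take with the best search value."""
        return max((take for take in (1, 2) if take <= stones),
                   key=lambda take: minimax(stones - take, False))

    print(best_move(7))  # 1: leaving 6 stones is a lost position for the opponent

A real chess engine replaces the exhaustive recursion with a depth limit, a heuristic evaluation of unfinished positions, and pruning, but the goal-directed structure of the search is the same.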
The domain of knowledge representation and reasoning involves processing information (usually in large amounts) into a structured form that can be queried more reliably and efficiently. IBM's Watson program, which defeated human champions in the 2011 Jeopardy! contest, was based in large part on an effective scheme for organizing, indexing, and retrieving large amounts of information drawn from a wide variety of sources.
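As an illustration of the underlying idea (not of Watson's actual architecture), the following sketch stores facts as subject-predicate-object triples and indexes them, so that a query becomes a lookup rather than a scan of raw text; the example facts and class names are invented for this sketch.

    # Knowledge stored as indexed subject-predicate-object triples.
    from collections import defaultdict

    class TripleStore:
        def __init__(self):
            # Index facts by subject for fast retrieval.
            self.by_subject = defaultdict(list)

        def add(self, subject, predicate, obj):
            self.by_subject[subject].append((predicate, obj))

        def query(self, subject, predicate):
            """Return every object related to `subject` by `predicate`."""
            return [o for p, o in self.by_subject[subject] if p == predicate]

    kb = TripleStore()
    kb.add("Watson", "developed_by", "IBM")
    kb.add("Watson", "competed_on", "Jeopardy!")
    print(kb.query("Watson", "developed_by"))  # ['IBM']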
Machine learning is a paradigm in which a system improves its performance on a task automatically by observing relevant data. Indeed, machine learning has powered systems ranging from search engines and product recommendation engines to speech recognition, fraud detection, and image understanding, many of them activities that previously depended on human judgment, and it has contributed significantly to the rise of AI over the past decade. Automating these tasks has made it possible to scale up services such as e-commerce.
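As a concrete, minimal example of improving performance by observing data, the following sketch implements the perceptron update rule mentioned earlier in this section; the toy dataset (points labeled by which side of a line they fall on) is invented for illustration.

    # Perceptron learning: each misclassified example nudges the decision
    # boundary toward it, so performance improves as data is observed.
    def train_perceptron(samples, labels, epochs=20, lr=0.1):
        """Fit weights w and bias b so sign(w.x + b) matches each +1/-1 label."""
        w = [0.0] * len(samples[0])
        b = 0.0
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
                if pred != y:  # misclassified: adjust toward this example
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                    b += lr * y
        return w, b

    # Toy data: points above the line x2 = x1 are labeled +1, points below -1.
    samples = [(0.0, 1.0), (1.0, 2.0), (1.0, 0.0), (2.0, 1.0)]
    labels = [1, 1, -1, -1]
    w, b = train_perceptron(samples, labels)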
As more and more intelligent systems are built, questions naturally arise about how such systems will interact with one another. The field of multi-agent systems addresses these questions, which are becoming increasingly important in online marketplaces and transportation systems.
Since its inception, AI has been concerned with designing and building systems embodied in the real world. The field of robotics studies fundamental aspects of sensing and acting, and especially their integration, that allow a robot to act effectively. Since robots and other computing systems increasingly share the everyday world with humans, the subfield of human-robot interaction has also grown in importance in recent decades.
Machine perception has always played a central role in AI, partly in connection with the development of robotics but also as a completely independent field of study. The most commonly studied perception modalities are computer vision and natural language processing, each supported by a large and vibrant research community.
Several other focus areas within AI today are consequences of the growth of the Internet. Social network analysis investigates the effect of neighborhood relations on the behavior of individuals and communities. Crowdsourcing is another innovative problem-solving technique that harnesses human intelligence, typically that of thousands of humans, to solve hard computational problems.
While dividing AI into subfields has enabled deep technical progress on many fronts, synthesizing intelligence at any reasonable scale invariably requires integrating many of these ideas. For example, the AlphaGo program, which recently defeated the reigning human Go champion, used multiple machine learning algorithms to train itself, as well as sophisticated search during play.