Artificial intelligence (AI) – intelligence exhibited by machines or software. It is also the name of the scientific field which studies how to create computers and computer software that are capable of intelligent behaviour.
- What is Artificial Intelligence?
- Types of Artificial Intelligence
- Approaches to AI
- Timeline of Artificial Intelligence
- Glossary of Artificial Intelligence
- The Most Significant Failures When AI Turned Rogue, Causing Disastrous Results
- Applications of Artificial Intelligence
- Artificial Intelligence Debate
- Artificial Intelligence in Science Fiction
- List of Important Publications in AI
- Philosophy Of Artificial Intelligence
- Artificial Intelligence Researchers and Scholars
- Perspectives On AI
- Top 10 Most Popular AI Models
- Competitions and awards
- AI projects
- AI and the Future
- Psychology and AI
- Open-source AI Development Tools
- A form of intelligence
- Synthetic intelligence – intelligence of a man-made yet real quality: actual, not fake, not simulated
- A type of technology
- A type of computer technology
- A computer system that performs some intellectual function
- An emerging technology
- A field:
- An academic discipline
- A branch of science
- A branch of applied science
- A branch of computer science
- Weak AI (narrow AI) – non-sentient machine intelligence, typically focused on a narrow task (narrow AI).
- Strong AI / artificial general intelligence (AGI) – (hypothetical) machine with the ability to apply intelligence to any problem, rather than just one specific problem, typically meaning "at least as smart as a typical human". Its future potential creation is referred to as a technological singularity, and constitutes a global catastrophic risk.
- Superintelligence – (hypothetical) artificial intelligence far surpassing that of the brightest and most gifted human minds. Due to recursive self-improvement, superintelligence is expected to be a rapid outcome of creating artificial general intelligence.
- Symbolic AI – When access to digital computers became possible in the mid-1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation.
- Interpretational AI – inputs such as words are used to identify the abstract representations of the things (such as, but not limited to, objects, characteristics and actions) that the words are intended to describe within natural language.
- Sub-symbolic
- Statistical AI
Date | Development |
---|---|
Antiquity | Greek myths of Hephaestus and Pygmalion incorporated the idea of intelligent robots (such as Talos) and artificial beings (such as Galatea and Pandora). |
Sacred mechanical statues built in Egypt and Greece were believed to be capable of wisdom and emotion. Hermes Trismegistus would write "they have sensus and spiritus ... by discovering the true nature of the gods, man has been able to reproduce it." Mosaic law prohibits the use of automatons in religion. | |
10th century BC | Yan Shi presented King Mu of Zhou with mechanical men. |
384 BC–322 BC | Aristotle described the syllogism, a method of formal, mechanical thought and theory of knowledge in The Organon. |
1st century | Heron of Alexandria created mechanical men and other automatons. |
260 | Porphyry of Tyros wrote Isagogê which categorized knowledge and logic. |
~800 | Geber developed the Arabic alchemical theory of Takwin, the artificial creation of life in the laboratory, up to and including human life. |
1206 | Al-Jazari created a programmable orchestra of mechanical human beings. |
1275 | Ramon Llull, Spanish theologian, invents the Ars Magna, a tool for combining concepts mechanically, based on an Arabic astrological tool, the Zairja. The method would be developed further by Gottfried Leibniz in the 17th century. |
~1500 | Paracelsus claimed to have created an artificial man out of magnetism, sperm and alchemy. |
~1580 | Rabbi Judah Loew ben Bezalel of Prague is said to have invented the Golem, a clay man brought to life. |
Early 17th century | René Descartes proposed that bodies of animals are nothing more than complex machines (but that mental phenomena are of a different "substance"). |
1620 | Sir Francis Bacon developed empirical theory of knowledge and introduced inductive logic in his work The New Organon, a play on Aristotle's title The Organon. |
1623 | Wilhelm Schickard drew a calculating clock in a letter to Kepler. This would be the first of five unsuccessful attempts at designing a direct-entry calculating clock in the 17th century (including the designs of Tito Burattini, Samuel Morland and René Grillet). |
1641 | Thomas Hobbes published Leviathan and presented a mechanical, combinatorial theory of cognition. He wrote "...for reason is nothing but reckoning". |
1642 | Blaise Pascal invented the mechanical calculator, the first digital calculating machine. |
1672 | Gottfried Leibniz improved the earlier machines, making the Stepped Reckoner to do multiplication and division. He also invented the binary numeral system and envisioned a universal calculus of reasoning (alphabet of human thought) by which arguments could be decided mechanically. Leibniz worked on assigning a specific number to each and every object in the world, as a prelude to an algebraic solution to all possible problems. |
1726 | Jonathan Swift published Gulliver's Travels, which includes this description of the Engine, a machine on the island of Laputa: "a Project for improving speculative Knowledge by practical and mechanical Operations " by using this "Contrivance", "the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with the least Assistance from Genius or study." The machine is a parody of Ars Magna, one of the inspirations of Gottfried Leibniz' mechanism. |
1750 | Julien Offray de La Mettrie published L'Homme Machine, which argued that human thought is strictly mechanical. |
1769 | Wolfgang von Kempelen built and toured with his chess-playing automaton, The Turk. The Turk was later shown to be a hoax, involving a human chess player. |
1818 | Mary Shelley published the story of Frankenstein; or the Modern Prometheus, a fictional consideration of the ethics of creating sentient beings. |
1822–1859 | Charles Babbage and Ada Lovelace worked on programmable mechanical calculating machines. |
1837 | The mathematician Bernard Bolzano made the first modern attempt to formalize semantics. |
1854 | George Boole set out to "investigate the fundamental laws of those operations of the mind by which reasoning is performed, to give expression to them in the symbolic language of a calculus", inventing Boolean algebra. |
1863 | Samuel Butler suggested that Darwinian evolution also applies to machines, and speculated that they would one day become conscious and eventually supplant humanity. |
Date | Development |
---|---|
1913 | Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which revolutionized formal logic.
1915 | Leonardo Torres y Quevedo built a chess automaton, El Ajedrecista, and published speculation about thinking and automata. |
1923 | Karel Čapek's play R.U.R. (Rossum's Universal Robots) opened in London. This was the first use of the word "robot" in English. |
1920s and 1930s | Ludwig Wittgenstein and Rudolf Carnap led philosophy into logical analysis of knowledge. Alonzo Church developed the lambda calculus to investigate computability using recursive functional notation. |
1931 | Kurt Gödel showed that sufficiently powerful formal systems, if consistent, permit the formulation of true theorems that are unprovable by any theorem-proving machine deriving all possible theorems from the axioms. To do this he had to build a universal, integer-based programming language, which is the reason why he is sometimes called the "father of theoretical computer science". |
1940 | Edward Condon displays Nimatron, a digital computer that played Nim perfectly. |
1941 | Konrad Zuse built the first working program-controlled computers. |
1943 | Warren Sturgis McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity" (1943), laying foundations for artificial neural networks. |
Arturo Rosenblueth, Norbert Wiener and Julian Bigelow coin the term "cybernetics". Wiener's popular book by that name was published in 1948. | |
1945 | Game theory which would prove invaluable in the progress of AI was introduced with the 1944 paper, Theory of Games and Economic Behavior by mathematician John von Neumann and economist Oskar Morgenstern. |
Vannevar Bush published As We May Think (The Atlantic Monthly, July 1945) a prescient vision of the future in which computers assist humans in many activities. | |
1948 | John von Neumann (quoted by E.T. Jaynes) in response to a comment at a lecture that it was impossible for a machine to think: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!". Von Neumann was presumably alluding to the Church-Turing thesis which states that any effective procedure can be simulated by a (generalized) computer. |
Date | Development |
---|---|
1950 | Alan Turing proposes the Turing Test as a measure of machine intelligence.
Claude Shannon published a detailed analysis of chess playing as search. | |
Isaac Asimov published his Three Laws of Robotics. | |
1951 | The first working AI programs were written in 1951 to run on the Ferranti Mark 1 machine of the University of Manchester: a checkers-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz. |
1952–1962 | Arthur Samuel (IBM) wrote the first game-playing program, for checkers (draughts), to achieve sufficient skill to challenge a respectable amateur. His first checkers-playing program was written in 1952, and in 1955 he created a version that learned to play. |
1956 | The Dartmouth College summer AI conference is organized by John McCarthy, Marvin Minsky, Nathan Rochester of IBM and Claude Shannon. McCarthy coins the term artificial intelligence for the conference. |
The first demonstration of the Logic Theorist (LT) written by Allen Newell, J.C. Shaw and Herbert A. Simon (Carnegie Institute of Technology, now Carnegie Mellon University or CMU). This is often called the first AI program, though Samuel's checkers program also has a strong claim. | |
1958 | John McCarthy (Massachusetts Institute of Technology or MIT) invented the Lisp programming language. |
Herbert Gelernter and Nathan Rochester (IBM) described a theorem prover in geometry that exploits a semantic model of the domain in the form of diagrams of "typical" cases. | |
Teddington Conference on the Mechanization of Thought Processes was held in the UK and among the papers presented were John McCarthy's Programs with Common Sense, Oliver Selfridge's Pandemonium, and Marvin Minsky's Some Methods of Heuristic Programming and Artificial Intelligence. | |
1959 | The General Problem Solver (GPS) was created by Newell, Shaw and Simon while at CMU. |
John McCarthy and Marvin Minsky founded the MIT AI Lab. | |
Late 1950s, early 1960s | Margaret Masterman and colleagues at the University of Cambridge design semantic nets for machine translation. |
Date | Development |
---|---|
1960s | Ray Solomonoff lays the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction.
1960 | Man-Computer Symbiosis by J.C.R. Licklider. |
1961 | James Slagle (PhD dissertation, MIT) wrote (in Lisp) the first symbolic integration program, SAINT, which solved calculus problems at the college freshman level. |
In Minds, Machines and Gödel, John Lucas denied the possibility of machine intelligence on logical or philosophical grounds. He referred to Kurt Gödel's result of 1931: sufficiently powerful formal systems are either inconsistent or allow for formulating true theorems unprovable by any theorem-proving AI deriving all provable theorems from the axioms. Since humans are able to "see" the truth of such theorems, machines were deemed inferior. | |
Unimation's industrial robot Unimate worked on a General Motors automobile assembly line. | |
1963 | Thomas Evans' program, ANALOGY, written as part of his PhD work at MIT, demonstrated that computers can solve the same analogy problems as are given on IQ tests. |
Edward Feigenbaum and Julian Feldman published Computers and Thought, the first collection of articles about artificial intelligence. | |
Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators", which described one of the first machine learning programs that could adaptively acquire and modify features and thereby overcome the limitations of Rosenblatt's simple perceptrons. | |
1964 | Danny Bobrow's dissertation at MIT (technical report #1 from MIT's AI group, Project MAC), shows that computers can understand natural language well enough to solve algebra word problems correctly. |
Bertram Raphael's MIT dissertation on the SIR program demonstrates the power of a logical representation of knowledge for question-answering systems. | |
1965 | Lotfi Zadeh at U.C. Berkeley publishes his first paper introducing fuzzy logic "Fuzzy Sets" (Information and Control 8: 338–353). |
J. Alan Robinson invented a mechanical proof procedure, the Resolution Method, which allowed programs to work efficiently with formal logic as a representation language. | |
Joseph Weizenbaum (MIT) built ELIZA, an interactive program that carries on a dialogue in English on any topic. It was a popular toy at AI centers on the ARPANET when a version that "simulated" the dialogue of a psychotherapist was programmed. | |
Edward Feigenbaum initiated Dendral, a ten-year effort to develop software to deduce the molecular structure of organic compounds using scientific instrument data. It was the first expert system. | |
1966 | Ross Quillian (PhD dissertation, Carnegie Inst. of Technology, now CMU) demonstrated semantic nets. |
Machine Intelligence workshop at Edinburgh – the first of an influential annual series organized by Donald Michie and others. | |
Negative report on machine translation kills much work in Natural language processing (NLP) for many years. | |
The Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford University) was demonstrated interpreting mass spectra of organic chemical compounds. It was the first successful knowledge-based program for scientific reasoning. | |
1968 | Joel Moses (PhD work at MIT) demonstrated the power of symbolic reasoning for integration problems in the Macsyma program. First successful knowledge-based program in mathematics. |
Richard Greenblatt (programmer) at MIT built a knowledge-based chess-playing program, MacHack, that was good enough to achieve a class-C rating in tournament play. | |
Wallace and Boulton's program, Snob (Comp.J. 11(2) 1968), for unsupervised classification (clustering) uses the Bayesian Minimum Message Length criterion, a mathematical realisation of Occam's razor. | |
1969 | Stanford Research Institute (SRI): Shakey the Robot demonstrated combining animal locomotion, perception and problem solving. |
Roger Schank (Stanford) defined conceptual dependency model for natural language understanding. Later developed (in PhD dissertations at Yale University) for use in story understanding by Robert Wilensky and Wendy Lehnert, and for use in understanding memory by Janet Kolodner. | |
Yorick Wilks (Stanford) developed the semantic coherence view of language called Preference Semantics, embodied in the first semantics-driven machine translation program, and the basis of many PhD dissertations since such as Bran Boguraev and David Carter at Cambridge. | |
First International Joint Conference on Artificial Intelligence (IJCAI) held at Stanford. | |
Marvin Minsky and Seymour Papert publish Perceptrons, demonstrating previously unrecognized limits of this feed-forward two-layered structure. The book is considered by some to mark the beginning of the AI winter of the 1970s, a failure of confidence and funding for AI. Nevertheless, significant progress in the field continued (see below). | |
McCarthy and Hayes started the discussion about the frame problem with their essay, "Some Philosophical Problems from the Standpoint of Artificial Intelligence". |
Date | Development |
---|---|
Early 1970s | Jane Robinson and Don Walker established an influential Natural Language Processing group at SRI.
1970 | Seppo Linnainmaa publishes the reverse mode of automatic differentiation. This method became later known as backpropagation, and is heavily used to train artificial neural networks. |
Jaime Carbonell (Sr.) developed SCHOLAR, an interactive program for computer assisted instruction based on semantic nets as the representation of knowledge. | |
Bill Woods described Augmented Transition Networks (ATN's) as a representation for natural language understanding. | |
Patrick Winston's PhD program, ARCH, at MIT learned concepts from examples in the world of children's blocks. | |
1971 | Terry Winograd's PhD thesis (MIT) demonstrated the ability of computers to understand English sentences in a restricted world of children's blocks, in a coupling of his language understanding program, SHRDLU, with a robot arm that carried out instructions typed in English. |
Work on the Boyer-Moore theorem prover started in Edinburgh. | |
1972 | Prolog programming language developed by Alain Colmerauer. |
Earl Sacerdoti developed one of the first hierarchical planning programs, ABSTRIPS. | |
1973 | The Assembly Robotics Group at University of Edinburgh builds Freddy Robot, capable of using visual perception to locate and assemble models. (Edinburgh Freddy Assembly Robot: a versatile computer-controlled assembly system.) |
The Lighthill report gives a largely negative verdict on AI research in Great Britain and forms the basis for the decision by the British government to discontinue support for AI research in all but two universities. | |
1974 | Ted Shortliffe's PhD dissertation on the MYCIN program (Stanford) demonstrated a very practical rule-based approach to medical diagnoses, even in the presence of uncertainty. While it borrowed from DENDRAL, its own contributions strongly influenced the future of expert system development, especially commercial systems. |
1975 | Earl Sacerdoti developed techniques of partial-order planning in his NOAH system, replacing the previous paradigm of search among state space descriptions. NOAH was applied at SRI International to interactively diagnose and repair electromechanical systems. |
Austin Tate developed the Nonlin hierarchical planning system able to search a space of partial plans characterised as alternative approaches to the underlying goal structure of the plan. | |
Marvin Minsky published his widely read and influential article on Frames as a representation of knowledge, in which many ideas about schemas and semantic links are brought together. | |
The Meta-Dendral learning program produced new results in chemistry (some rules of mass spectrometry) the first scientific discoveries by a computer to be published in a refereed journal. | |
Mid-1970s | Barbara Grosz (SRI) established limits to traditional AI approaches to discourse modeling. Subsequent work by Grosz, Bonnie Webber and Candace Sidner developed the notion of "centering", used in establishing focus of discourse and anaphoric references in Natural language processing. |
David Marr and MIT colleagues describe the "primal sketch" and its role in visual perception. | |
1976 | Douglas Lenat's AM program (Stanford PhD dissertation) demonstrated the discovery model (loosely guided search for interesting conjectures). |
Randall Davis demonstrated the power of meta-level reasoning in his PhD dissertation at Stanford. | |
1978 | Tom Mitchell, at Stanford, invented the concept of Version spaces for describing the search space of a concept formation program. |
Herbert A. Simon wins the Nobel Prize in Economics for his theory of bounded rationality, one of the cornerstones of AI known as "satisficing". | |
The MOLGEN program, written at Stanford by Mark Stefik and Peter Friedland, demonstrated that an object-oriented programming representation of knowledge can be used to plan gene-cloning experiments. | |
1979 | Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of MYCIN's representation of knowledge and style of reasoning in his EMYCIN program, the model for many commercial expert system "shells". |
Jack Myers and Harry Pople at University of Pittsburgh developed INTERNIST, a knowledge-based medical diagnosis program based on Dr. Myers' clinical knowledge. | |
Cordell Green, David Barstow, Elaine Kant and others at Stanford demonstrated the CHI system for automatic programming. | |
The Stanford Cart, built by Hans Moravec, becomes the first computer-controlled, autonomous vehicle when it successfully traverses a chair-filled room and circumnavigates the Stanford AI Lab. | |
BKG, a backgammon program written by Hans Berliner at CMU, defeats the reigning world champion (in part via luck). | |
Drew McDermott and Jon Doyle at MIT, and John McCarthy at Stanford begin publishing work on non-monotonic logics and formal aspects of truth maintenance. | |
Late 1970s | Stanford's SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrates the power of the ARPAnet for scientific collaboration. |
Date | Development |
---|---|
1980s | Lisp machines developed and marketed. First expert system shells and commercial applications.
1980 | First National Conference of the American Association for Artificial Intelligence (AAAI) held at Stanford. |
1981 | Danny Hillis designs the connection machine, which utilizes Parallel computing to bring new power to AI, and to computation in general. (Later founds Thinking Machines Corporation) |
1982 | The Fifth Generation Computer Systems project (FGCS), an initiative by Japan's Ministry of International Trade and Industry, was begun to create a "fifth generation computer" (see history of computing hardware) that would perform much of its computation using massive parallelism. |
1983 | John Laird and Paul Rosenbloom, working with Allen Newell, complete CMU dissertations on Soar (program). |
James F. Allen invents the Interval Calculus, the first widely used formalization of temporal events. | |
Mid-1980s | Neural Networks become widely used with the Backpropagation algorithm, also known as the reverse mode of automatic differentiation published by Seppo Linnainmaa in 1970 and applied to neural networks by Paul Werbos. |
1985 | The autonomous drawing program, AARON, created by Harold Cohen, is demonstrated at the AAAI National Conference (based on more than a decade of work, and with subsequent work showing major developments). |
1986 | The team of Ernst Dickmanns at Bundeswehr University of Munich builds the first robot cars, driving up to 55 mph on empty streets. |
Barbara Grosz and Candace Sidner create the first computational model of discourse, establishing the field of research. | |
1987 | Marvin Minsky published The Society of Mind, a theoretical description of the mind as a collection of cooperating agents. He had been lecturing on the idea for years before the book came out (c.f. Doyle 1983). |
Around the same time, Rodney Brooks introduced the subsumption architecture and behavior-based robotics as a more minimalist modular model of natural intelligence; Nouvelle AI. | |
Commercial launch of generation 2.0 of Alacrity by Alacritous Inc./Allstar Advice Inc. Toronto, the first commercial strategic and managerial advisory system. The system was based upon a forward-chaining, self-developed expert system with 3,000 rules about the evolution of markets and competitive strategies, co-authored by Alistair Davidson and Mary Chung, founders of the firm, with the underlying engine developed by Paul Tarvydas. The Alacrity system also included a small financial expert system that interpreted financial statements and models. | |
1989 | The development of metal–oxide–semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) technology, enabled the development of practical artificial neural network (ANN) technology in the 1980s. A landmark publication in the field was the 1989 book Analog VLSI Implementation of Neural Systems by Carver A. Mead and Mohammed Ismail. |
Dean Pomerleau at CMU creates ALVINN (An Autonomous Land Vehicle in a Neural Network). |
Date | Development |
---|---|
1990s | Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics.
Early 1990s | TD-Gammon, a backgammon program written by Gerry Tesauro, demonstrates that reinforcement learning is powerful enough to create a championship-level game-playing program by competing favorably with world-class players.
1991 | DART scheduling application deployed in the first Gulf War paid back DARPA's investment of 30 years in AI research. |
1992 | Carol Stoker and NASA Ames robotics team explore marine life in Antarctica with an undersea robot Telepresence ROV operated from the ice near McMurdo Bay, Antarctica and remotely via satellite link from Moffett Field, California. |
1993 | Ian Horswill extended behavior-based robotics by creating Polly, the first robot to navigate using vision and operate at animal-like speeds (1 meter/second). |
Rodney Brooks, Lynn Andrea Stein and Cynthia Breazeal started the widely publicized MIT Cog project with numerous collaborators, in an attempt to build a humanoid robot child in just five years. | |
ISX corporation wins "DARPA contractor of the year" for the Dynamic Analysis and Replanning Tool (DART) which reportedly repaid the US government's entire investment in AI research since the 1950s. | |
1994 | Lotfi Zadeh at U.C. Berkeley creates "soft computing" and builds a world network of research with a fusion of neural science and neural net systems, fuzzy set theory and fuzzy systems, evolutionary algorithms, genetic programming, and chaos theory and chaotic systems ("Fuzzy Logic, Neural Networks, and Soft Computing," Communications of the ACM, March 1994, Vol. 37 No. 3, pages 77-84). |
With passengers on board, the twin robot cars VaMP and VITA-2 of Ernst Dickmanns and Daimler-Benz drive more than one thousand kilometers on a Paris three-lane highway in standard heavy traffic at speeds up to 130 km/h. They demonstrate autonomous driving in free lanes, convoy driving, and lane changes left and right with autonomous passing of other cars. | |
English draughts (checkers) world champion Tinsley resigned a match against computer program Chinook. Chinook defeated 2nd highest rated player, Lafferty. Chinook won the USA National Tournament by the widest margin ever. | |
Cindy Mason at NASA organizes the First AAAI Workshop on AI and the Environment. | |
1995 | Cindy Mason at NASA organizes the First International IJCAI Workshop on AI and the Environment. |
"No Hands Across America": A semi-autonomous car drove coast-to-coast across the United States with computer-controlled steering for 2,797 miles (4,501 km) of the 2,849 miles (4,585 km). Throttle and brakes were controlled by a human driver. | |
One of Ernst Dickmanns' robot cars (with robot-controlled throttle and brakes) drove more than 1000 miles from Munich to Copenhagen and back, in traffic, at up to 120 mph, occasionally executing maneuvers to pass other cars (a safety driver took over only in a few critical situations). Active vision was used to deal with rapidly changing street scenes. | |
1997 | The Deep Blue chess machine (IBM) defeats the (then) world chess champion, Garry Kasparov. |
First official RoboCup football (soccer) match featuring table-top matches with 40 teams of interacting robots and over 5000 spectators. | |
Computer Othello program Logistello defeated the world champion Takeshi Murakami with a score of 6–0. | |
1998 | Tiger Electronics' Furby is released, and becomes the first successful attempt at producing a type of AI to reach a domestic environment. |
Tim Berners-Lee published his Semantic Web Road map paper. | |
Ulises Cortés and Miquel Sànchez-Marrè organize the first Environment and AI Workshop in Europe ECAI, "Binding Environmental Sciences and Artificial Intelligence." | |
Leslie P. Kaelbling, Michael Littman, and Anthony Cassandra introduce POMDPs and a scalable method for solving them to the AI community, jumpstarting widespread use in robotics and automated planning and scheduling. | |
1999 | Sony introduces AIBO, an improved domestic robot similar to a Furby; it becomes one of the first artificially intelligent "pets" that is also autonomous. |
Late 1990s | Web crawlers and other AI-based information extraction programs become essential in widespread use of the World Wide Web. |
Demonstration of an Intelligent room and Emotional Agents at MIT's AI Lab. | |
Initiation of work on the Oxygen architecture, which connects mobile and stationary computers in an adaptive network. |
Date | Development |
---|---|
2000 | Interactive robopets ("smart toys") become commercially available, realizing the vision of the 18th century novelty toy makers.
Cynthia Breazeal at MIT publishes her dissertation on Sociable machines, describing Kismet (robot), with a face that expresses emotions. | |
The Nomad robot explores remote regions of Antarctica looking for meteorite samples. | |
2002 | iRobot's Roomba autonomously vacuums the floor while navigating and avoiding obstacles. |
2004 | OWL Web Ontology Language W3C Recommendation (10 February 2004). |
DARPA introduces the DARPA Grand Challenge requiring competitors to produce autonomous vehicles for prize money. | |
NASA's robotic exploration rovers Spirit and Opportunity autonomously navigate the surface of Mars. | |
2005 | Honda's ASIMO robot, an artificially intelligent humanoid robot, is able to walk as fast as a human, delivering trays to customers in restaurant settings. |
Recommendation technology based on tracking web activity or media usage brings AI to marketing. See TiVo Suggestions. | |
Blue Brain is born, a project to simulate the brain at molecular detail. | |
2006 | The Dartmouth Artificial Intelligence Conference: The Next 50 Years (AI@50) AI@50 (14–16 July 2006) |
2007 | Philosophical Transactions of the Royal Society, B – Biology, one of the world's oldest scientific journals, puts out a special issue on using AI to understand biological intelligence, titled Models of Natural Action Selection |
Checkers is solved by a team of researchers at the University of Alberta. | |
DARPA launches the Urban Challenge for autonomous cars to obey traffic rules and operate in an urban environment. | |
2008 | Cynthia Mason at Stanford presents her idea on Artificial Compassionate Intelligence, in her paper on "Giving Robots Compassion". |
2009 | Google builds an autonomous car. |
Date | Development |
---|---|
2010 | Microsoft launched Kinect for Xbox 360, the first gaming device to track human body movement, using just a 3D camera and infra-red detection, enabling users to play their Xbox 360 wirelessly. The award-winning machine learning for human motion capture technology for this device was developed by the Computer Vision group at Microsoft Research, Cambridge. |
2011 | Mary Lou Maher and Doug Fisher organize the First AAAI Workshop on AI and Sustainability. |
IBM's Watson computer defeated television game show Jeopardy! champions Rutter and Jennings. | |
2011–2014 | Apple's Siri (2011), Google's Google Now (2012) and Microsoft's Cortana (2014) are smartphone apps that use natural language to answer questions, make recommendations and perform actions. |
2013 | Robot HRP-2 built by SCHAFT Inc of Japan, a subsidiary of Google, defeats 15 teams to win DARPA's Robotics Challenge Trials. HRP-2 scored 27 out of 32 points on the 8 tasks needed in disaster response: drive a vehicle, walk over debris, climb a ladder, remove debris, walk through doors, cut through a wall, close valves and connect a hose. |
NEIL, the Never Ending Image Learner, is released at Carnegie Mellon University to constantly compare and analyze relationships between different images. | |
2015 | An open letter calling for a ban on the development and use of autonomous weapons is signed by Hawking, Musk, Wozniak and 3,000 researchers in AI and robotics. |
Google DeepMind's AlphaGo (version: Fan) defeated three-time European Go champion and 2 dan professional Fan Hui by 5 games to 0. | |
2016 | Google DeepMind's AlphaGo (version: Lee) defeated Lee Sedol 4–1. Lee Sedol is a 9 dan professional Korean Go champion who won 27 major tournaments from 2002 to 2016. Before the match with AlphaGo, Lee Sedol was confident in predicting an easy 5–0 or 4–1 victory. |
2017 | Asilomar Conference on Beneficial AI was held, to discuss AI ethics and how to bring about beneficial AI while avoiding the existential risk from artificial general intelligence. |
Deepstack is the first published algorithm to beat human players in imperfect information games, as shown with statistical significance on heads-up no-limit poker. Soon after, the poker AI Libratus, developed by a different research group, individually defeated each of its 4 human opponents—among the best players in the world—at an exceptionally high aggregated winrate, over a statistically significant sample. In contrast to Chess and Go, Poker is an imperfect information game. | |
Google DeepMind's AlphaGo (version: Master) won 60 games to 0 on two public Go websites, including 3 wins against world Go champion Ke Jie. | |
A propositional logic boolean satisfiability problem (SAT) solver proves a long-standing mathematical conjecture on Pythagorean triples over the set of integers. The initial proof, 200TB long, was checked by two independent certified automatic proof checkers. | |
An OpenAI machine-learned bot played at The International 2017 Dota 2 tournament in August 2017. It won during a 1v1 demonstration game against professional Dota 2 player Dendi. | |
Google DeepMind revealed that AlphaGo Zero—an improved version of AlphaGo—displayed significant performance gains while using far fewer tensor processing units than AlphaGo Lee (it used the same number of TPUs as AlphaGo Master). Unlike previous versions, which learned the game by observing millions of human moves, AlphaGo Zero learned by playing only against itself. The system then defeated AlphaGo Lee 100 games to zero, and defeated AlphaGo Master 89 to 11. Although unsupervised learning is a step forward, much has yet to be learned about general intelligence. AlphaZero mastered chess in 4 hours, defeating the best chess engine, Stockfish 8. AlphaZero won 28 of the 100 games, and the remaining 72 games ended in a draw. | |
2018 | Alibaba language processing AI outscores top humans at a Stanford University reading and comprehension test, scoring 82.44 against 82.304 on a set of 100,000 questions. |
The European Lab for Learning and Intelligent Systems (aka Ellis) proposed as a pan-European competitor to American AI efforts, with the aim of staving off a brain drain of talent, along the lines of CERN after World War II. | |
Announcement of Google Duplex, a service to allow an AI assistant to book appointments over the phone. The LA Times judges the AI's voice to be a "nearly flawless" imitation of human-sounding speech. |
- Abductive logic programming – Abductive logic programming (ALP) is a high-level knowledge-representation framework that can be used to solve problems declaratively based on abductive reasoning. It extends normal logic programming by allowing some predicates to be incompletely defined, declared as abducible predicates.
- Abductive reasoning – (also called abduction, abductive inference, or retroduction) is a form of logical inference which starts with an observation or set of observations then seeks to find the simplest and most likely explanation. This process, unlike deductive reasoning, yields a plausible conclusion but does not positively verify it.
- Abstract data type – is a mathematical model for data types, where a data type is defined by its behavior (semantics) from the point of view of a user of the data, specifically in terms of possible values, possible operations on data of this type, and the behavior of these operations.
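
As an illustration (not from the source), here is a minimal Python sketch of a stack as an abstract data type: the type is defined purely by the behaviour of `push`, `pop`, and `is_empty`, independent of the hidden list used to implement it. The class name `Stack` is just an illustrative choice.

```python
class Stack:
    """A stack ADT: defined by the behaviour of push/pop, not by its internal list."""

    def __init__(self):
        self._items = []          # hidden representation; users only see the operations

    def push(self, value):
        self._items.append(value)

    def pop(self):
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()  # last in, first out

    def is_empty(self):
        return not self._items


s = Stack()
s.push(1)
s.push(2)
assert s.pop() == 2              # LIFO behaviour is the specification
```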
- Abstraction – is the process of removing physical, spatial, or temporal details or attributes in the study of objects or systems in order to more closely attend to other details of interest.
- Accelerating change – is a perceived increase in the rate of technological change throughout history, which may suggest faster and more profound change in the future and may or may not be accompanied by equally profound social and cultural change.
- Action language – is a language for specifying state transition systems, and is commonly used to create formal models of the effects of actions on the world. Action languages are commonly used in the artificial intelligence and robotics domains, where they describe how actions affect the states of systems over time, and may be used for automated planning.
- Action model learning – is an area of machine learning concerned with creation and modification of software agent's knowledge about effects and preconditions of the actions that can be executed within its environment. This knowledge is usually represented in logic-based action description language and used as the input for automated planners.
- Action selection – is a way of characterizing the most basic problem of intelligent systems: what to do next. In artificial intelligence and computational cognitive science, "the action selection problem" is typically associated with intelligent agents and animats—artificial systems that exhibit complex behaviour in an agent environment.
- Activation function – In artificial neural networks, the activation function of a node defines the output of that node given an input or set of inputs.
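
A brief illustrative sketch (assumed example, not from the source) of three common activation functions and how a node's output is computed from its weighted inputs; `node_output` is a hypothetical helper name.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    return max(0.0, x)

def tanh(x):
    return math.tanh(x)

def node_output(weights, inputs, bias, activation=relu):
    # A node's output is the activation applied to the weighted sum of its inputs.
    z = sum(w * i for w, i in zip(weights, inputs)) + bias
    return activation(z)

print(node_output([0.5, -0.2], [1.0, 3.0], bias=0.1))          # ReLU output
print(node_output([0.5, -0.2], [1.0, 3.0], 0.1, sigmoid))      # sigmoid output
```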
- Adaptive algorithm – an algorithm that changes its behavior at the time it is run, based on an a priori defined reward mechanism or criterion.
- Adaptive neuro fuzzy inference system – or adaptive network-based fuzzy inference system (ANFIS) is a kind of artificial neural network that is based on Takagi–Sugeno fuzzy inference system. The technique was developed in the early 1990s. Since it integrates both neural networks and fuzzy logic principles, it has potential to capture the benefits of both in a single framework. Its inference system corresponds to a set of fuzzy IF–THEN rules that have learning capability to approximate nonlinear functions. Hence, ANFIS is considered to be a universal estimator. For using the ANFIS in a more efficient and optimal way, one can use the best parameters obtained by genetic algorithm.
- Admissible heuristic – In computer science, specifically in algorithms related to pathfinding, a heuristic function is said to be admissible if it never overestimates the cost of reaching the goal, i.e. the cost it estimates to reach the goal is not higher than the lowest possible cost from the current point in the path.
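
For instance (an illustrative sketch, not from the source), on a 4-connected grid the Manhattan distance never overestimates the true number of moves to the goal and is therefore admissible for A*; the function names here are hypothetical.

```python
def manhattan(cell, goal):
    # Admissible on a 4-connected grid: each move changes one coordinate by 1,
    # so at least |dx| + |dy| moves are needed -- the estimate never overestimates.
    (x, y), (gx, gy) = cell, goal
    return abs(x - gx) + abs(y - gy)

def overestimating(cell, goal):
    # NOT admissible: doubling the distance can exceed the true remaining cost,
    # so A* with this heuristic may return a suboptimal path.
    return 2 * manhattan(cell, goal)

print(manhattan((0, 0), (3, 4)))      # 7, never more than the true cost
print(overestimating((0, 0), (3, 4))) # 14, may exceed the true cost
```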
- Affective computing – (sometimes called artificial emotional intelligence, or emotion AI) is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science, psychology, and cognitive science.
- Agent architecture – in computer science is a blueprint for software agents and intelligent control systems, depicting the arrangement of components. The architectures implemented by intelligent agents are referred to as cognitive architectures.
- AI accelerator – is a class of microprocessor or computer system designed as hardware acceleration for artificial intelligence applications, especially artificial neural networks, machine vision and machine learning.
- AI-complete – In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI. To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.
- Algorithm – is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing and automated reasoning tasks.
- Algorithmic efficiency – is a property of an algorithm which relates to the number of computational resources used by the algorithm. An algorithm must be analyzed to determine its resource usage, and the efficiency of an algorithm can be measured based on usage of different resources. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process.
- Algorithmic probability – In algorithmic information theory, algorithmic probability, also known as Solomonoff probability, is a mathematical method of assigning a prior probability to a given observation. It was invented by Ray Solomonoff in the 1960s.
- AlphaGo – is a computer program that plays the board game Go. It was developed by Alphabet Inc.'s Google DeepMind in London. AlphaGo has several versions including AlphaGo Zero, AlphaGo Master, AlphaGo Lee, etc. In October 2015, AlphaGo became the first computer Go program to beat a human professional Go player without handicaps on a full-sized 19×19 board.
- Ambient intelligence – (AmI) refers to electronic environments that are sensitive and responsive to the presence of people.
- Analysis of algorithms – is the determination of the computational complexity of algorithms, that is the amount of time, storage and/or other resources necessary to execute them. Usually, this involves determining a function that relates the length of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity).
- Analytics – the discovery, interpretation, and communication of meaningful patterns in data.
- Answer set programming – (ASP) is a form of declarative programming oriented towards difficult (primarily NP-hard) search problems. It is based on the stable model (answer set) semantics of logic programming. In ASP, search problems are reduced to computing stable models, and answer set solvers—programs for generating stable models—are used to perform search.
- Anytime algorithm – an algorithm that can return a valid solution to a problem even if it is interrupted before it ends.
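
A small sketch (assumed example, not from the source) of the anytime idea: a Monte Carlo estimate of pi that can be stopped after any number of iterations and still returns its best estimate so far.

```python
import random

def estimate_pi(budget_iterations):
    """Anytime-style estimator: the estimate is valid (and tends to improve) at every iteration."""
    inside = 0
    estimate = 0.0
    for n in range(1, budget_iterations + 1):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
        estimate = 4.0 * inside / n   # a usable answer if we are interrupted right now
    return estimate

# A larger budget usually gives a better answer, but any budget gives *an* answer.
print(estimate_pi(1_000))
print(estimate_pi(100_000))
```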
- Application programming interface – (API) is a set of subroutine definitions, communication protocols, and tools for building software. In general terms, it is a set of clearly defined methods of communication among various components. A good API makes it easier to develop a computer program by providing all the building blocks, which are then put together by the programmer. An API may be for a web-based system, operating system, database system, computer hardware, or software library.
- Approximate string matching – (often colloquially referred to as fuzzy string searching) is the technique of finding strings that match a pattern approximately (rather than exactly). The problem of approximate string matching is typically divided into two sub-problems: finding approximate substring matches inside a given string and finding dictionary strings that match the pattern approximately.
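
A compact sketch (not from the source) of the Levenshtein edit distance, the measure that much fuzzy string searching builds on; strings within a small distance of the pattern count as approximate matches.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))   # 3
print(levenshtein("fuzzy", "fuzzy"))      # 0
```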
- Approximation error – The approximation error in some data is the discrepancy between an exact value and some approximation to it.
- Argumentation framework – or argumentation system, is a way to deal with contentious information and draw conclusions from it. In an abstract argumentation framework, entry-level information is a set of abstract arguments that, for instance, represent data or a proposition. Conflicts between arguments are represented by a binary relation on the set of arguments. In concrete terms, an argumentation framework is represented as a directed graph such that the nodes are the arguments and the arrows represent the attack relation. There exist some extensions of Dung's framework, like the logic-based argumentation frameworks or the value-based argumentation frameworks.
- Artificial immune system – Artificial immune systems (AIS) are a class of computationally intelligent, rule-based machine learning systems inspired by the principles and processes of the vertebrate immune system. The algorithms are typically modeled after the immune system's characteristics of learning and memory for use in problem-solving.
- Artificial intelligence – (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science AI research is defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".
- Artificial Intelligence Markup Language – is an XML dialect for creating natural language software agents.
- Artificial neural network – (ANN) or connectionist systems are computing systems vaguely inspired by the biological neural networks that constitute animal brains.
- Association for the Advancement of Artificial Intelligence – (AAAI) is an international, nonprofit, scientific society devoted to promote research in, and responsible use of, artificial intelligence. AAAI also aims to increase public understanding of artificial intelligence (AI), improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions.
- Asymptotic computational complexity – In computational complexity theory, asymptotic computational complexity is the usage of asymptotic analysis for the estimation of computational complexity of algorithms and computational problems, commonly associated with the usage of the big O notation.
- Attributional calculus – is a logic and representation system defined by Ryszard S. Michalski. It combines elements of predicate logic, propositional calculus, and multi-valued logic. Attributional calculus provides a formal language for natural induction, an inductive learning process whose results are in forms natural to people.
- Augmented reality – (AR) is an interactive experience of a real-world environment where the objects that reside in the real-world are "augmented" by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory, and olfactory.
- Automata theory – is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science and discrete mathematics (a subject of study in both mathematics and computer science).
- Automated planning and scheduling – sometimes denoted as simply AI Planning, is a branch of artificial intelligence that concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to decision theory.
- Automated reasoning – is an area of computer science and mathematical logic dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science, and even philosophy.
- Autonomic computing – (also known as AC) refers to the self-managing characteristics of distributed computing resources, adapting to unpredictable changes while hiding intrinsic complexity to operators and users. Initiated by IBM in 2001, this initiative ultimately aimed to develop computer systems capable of self-management, to overcome the rapidly growing complexity of computing systems management, and to reduce the barrier that complexity poses to further growth.
- Autonomous car – A self-driving car, also known as a robot car, autonomous car, auto, or driverless car, is a vehicle that is capable of sensing its environment and moving with little or no human input.
- Autonomous robot – is a robot that performs behaviors or tasks with a high degree of autonomy. Autonomous robotics is usually considered to be a subfield of artificial intelligence, robotics, and information engineering.
- Backpropagation – is a method used in artificial neural networks to calculate a gradient that is needed in the calculation of the weights to be used in the network. Backpropagation is shorthand for "the backward propagation of errors," since an error is computed at the output and distributed backwards throughout the network's layers. It is commonly used to train deep neural networks, a term referring to neural networks with more than one hidden layer.
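
A minimal numerical sketch (illustrative only, not from the source): one gradient step through a single sigmoid unit with squared error, showing the error computed at the output being propagated back to the weights via the chain rule. In a deeper network the same step is applied layer by layer; the variable names and data are assumptions for the example.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One training example for a single sigmoid unit with squared error E = (y_hat - y)^2 / 2.
x = [0.5, -1.0]          # inputs
w = [0.1, 0.4]           # weights to be learned
b = 0.0                  # bias
y = 1.0                  # target
lr = 0.5                 # learning rate

# Forward pass
z = sum(wi * xi for wi, xi in zip(w, x)) + b
y_hat = sigmoid(z)

# Backward pass: chain rule dE/dw_i = (y_hat - y) * sigmoid'(z) * x_i
delta = (y_hat - y) * y_hat * (1.0 - y_hat)   # error signal at the output
grad_w = [delta * xi for xi in x]
grad_b = delta

# Gradient-descent update
w = [wi - lr * gwi for wi, gwi in zip(w, grad_w)]
b = b - lr * grad_b
print(w, b, y_hat)
```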
- Backpropagation through time – (BPTT) is a gradient-based technique for training certain types of recurrent neural networks. It can be used to train Elman networks. The algorithm was independently derived by numerous researchers.
- Backward chaining – (or backward reasoning) is an inference method described colloquially as working backward from the goal. It is used in automated theorem provers, inference engines, proof assistants, and other artificial intelligence applications.
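
A toy sketch (assumed rules, not from the source) of backward chaining over Horn-style rules: to prove a goal, the prover looks for a rule whose conclusion matches the goal and recursively tries to prove that rule's premises.

```python
# Each conclusion maps to a list of alternative premise sets that would establish it.
rules = {
    "mortal(socrates)": [["man(socrates)"]],
    "man(socrates)":    [[]],                 # a fact: no premises needed
}

def prove(goal):
    """Work backward from the goal: find a rule for it, then prove each premise."""
    for premises in rules.get(goal, []):
        if all(prove(p) for p in premises):
            return True
    return False

print(prove("mortal(socrates)"))   # True
print(prove("mortal(plato)"))      # False: no rule concludes this goal
```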
- Bag-of-words model – is a simplifying representation used in natural language processing and information retrieval (IR). In this model, a text (such as a sentence or a document) is represented as the bag (multiset) of its words, disregarding grammar and even word order but keeping multiplicity. The bag-of-words model has also been used for computer vision. The bag-of-words model is commonly used in methods of document classification where the (frequency of) occurrence of each word is used as a feature for training a classifier.
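
A short sketch (illustrative, not from the source) of the bag-of-words representation: each text becomes a vector of word counts over a shared vocabulary, with word order discarded but multiplicity kept.

```python
from collections import Counter

docs = ["the cat sat on the mat", "the dog sat"]
tokenized = [d.split() for d in docs]

# Shared vocabulary, sorted for a stable column order
vocab = sorted({w for doc in tokenized for w in doc})

def bag_of_words(tokens):
    counts = Counter(tokens)
    return [counts[w] for w in vocab]   # word order is discarded, counts are kept

print(vocab)
print(bag_of_words(tokenized[0]))   # "the" appears twice in the first document
print(bag_of_words(tokenized[1]))
```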
- Bag-of-words model in computer vision – In computer vision, the bag-of-words model (BoW model) can be applied to image classification, by treating image features as words. In document classification, a bag of words is a sparse vector of occurrence counts of words; that is, a sparse histogram over the vocabulary. In computer vision, a bag of visual words is a vector of occurrence counts of a vocabulary of local image features.
- Batch normalization – is a technique for improving the performance and stability of artificial neural networks. It is a technique to provide any layer in a neural network with inputs that are zero mean/unit variance. Batch normalization was introduced in a 2015 paper. It is used to normalize the input layer by adjusting and scaling the activations.
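
A small NumPy sketch (not from the source) of the core batch-normalization step: activations in a mini-batch are shifted and scaled to zero mean and unit variance per feature, then rescaled by learnable parameters; `gamma` and `beta` are illustrative names for those parameters.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """x: mini-batch of activations with shape (batch, features)."""
    mean = x.mean(axis=0)                      # per-feature mean over the batch
    var = x.var(axis=0)                        # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)    # zero mean / unit variance
    return gamma * x_hat + beta                # learnable scale and shift

x = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
out = batch_norm(x, gamma=np.ones(2), beta=np.zeros(2))
print(out.mean(axis=0), out.var(axis=0))       # approximately [0, 0] and [1, 1]
```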
- Bayesian programming – is a formalism and a methodology for having a technique to specify probabilistic models and solve problems when less than the necessary information is available.
- Bees algorithm – is a population-based search algorithm which was developed by Pham, Ghanbarzadeh et al. in 2005. It mimics the food foraging behaviour of honey bee colonies. In its basic version the algorithm performs a kind of neighbourhood search combined with global search, and can be used for both combinatorial optimization and continuous optimization. The only condition for the application of the bees algorithm is that some measure of distance between the solutions is defined. The effectiveness and specific abilities of the bees algorithm have been proven in a number of studies.
- Behavior informatics – (BI) is the informatics of behaviors so as to obtain behavior intelligence and behavior insights.
- Behavior tree – A Behavior Tree (BT) is a mathematical model of plan execution used in computer science, robotics, control systems and video games. They describe switchings between a finite set of tasks in a modular fashion. Their strength comes from their ability to create very complex tasks composed of simple tasks, without worrying how the simple tasks are implemented. BTs present some similarities to hierarchical state machines with the key difference that the main building block of a behavior is a task rather than a state. Their ease of human understanding makes BTs less error-prone and very popular in the game developer community. BTs have been shown to generalize several other control architectures.
- Belief-desire-intention software model – (BDI), is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library or an external planner application) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.
- Bias–variance tradeoff – In statistics and machine learning, the bias–variance tradeoff is the property of a set of predictive models whereby models with a lower bias in parameter estimation have a higher variance of the parameter estimates across samples, and vice versa.
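
The tradeoff is usually summarised by the standard decomposition of expected squared prediction error at a point x (a textbook result, not taken from the source text), where sigma^2 is irreducible noise:

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2\big]}_{\text{variance}}
  + \sigma^2
```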
- Big data – is a term used to refer to data sets that are too large or complex for traditional data-processing application software to adequately deal with. Data with many cases (rows) offer greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate.
- Big O notation – is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.
- Binary tree – is a tree data structure in which each node has at most two children, which are referred to as the left child and the right child. A recursive definition using just set theory notions is that a (non-empty) binary tree is a tuple (L, S, R), where L and R are binary trees or the empty set and S is a singleton set. Some authors allow the binary tree to be the empty set as well.
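
A minimal sketch (not from the source) of this recursive definition in code: each node holds a value and optional left and right children, and an in-order traversal visits left subtree, node, then right subtree.

```python
class Node:
    """A binary tree node: at most two children, referred to as left and right."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def in_order(node):
    """Left subtree, then the node itself, then the right subtree."""
    if node is None:
        return []
    return in_order(node.left) + [node.value] + in_order(node.right)

#       2
#      / \
#     1   3
root = Node(2, Node(1), Node(3))
print(in_order(root))   # [1, 2, 3]
```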
- Blackboard system – is an artificial intelligence approach based on the blackboard architectural model, where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem.
- Boltzmann machine – (also called stochastic Hopfield network with hidden units) is a type of stochastic recurrent neural network and Markov random field. Boltzmann machines can be seen as the stochastic, generative counterpart of Hopfield networks.
- Boolean satisfiability problem – (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY or SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable.
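
Continuing the entry's own example, a small brute-force sketch (illustrative, not from the source): every assignment of TRUE/FALSE to the variables is tried, so "a AND NOT b" is reported satisfiable and "a AND NOT a" is not. Real SAT solvers are far more sophisticated; this only illustrates the problem statement.

```python
from itertools import product

def satisfiable(formula, variables):
    """Try every TRUE/FALSE assignment (exponential, but fine for tiny formulas)."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return True, assignment
    return False, None

# "a AND NOT b" is satisfiable (a=TRUE, b=FALSE); "a AND NOT a" is not.
print(satisfiable(lambda v: v["a"] and not v["b"], ["a", "b"]))
print(satisfiable(lambda v: v["a"] and not v["a"], ["a"]))
```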
- Brain technology – or self-learning know-how systems, refers to technology that employs the latest findings in neuroscience. The term was first introduced by the Artificial Intelligence Laboratory in Zurich, Switzerland, in the context of the ROBOY project. Brain technology can be employed in robots, know-how management systems, and any other application with self-learning capabilities. In particular, brain technology applications allow the visualization of the underlying learning architecture, often referred to as "know-how maps".
- Branching factor – In computing, tree data structures, and game theory, the branching factor is the number of children at each node, the outdegree. If this value is not uniform, an average branching factor can be calculated.
- Brute-force search – or exhaustive search, also known as generate and test, is a very general problem-solving technique and algorithmic paradigm that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement.
- Capsule neural network – A capsule neural network (CapsNet) is a type of artificial neural network (ANN) designed to better model hierarchical relationships. The approach is an attempt to more closely mimic biological neural organization.
- Case-based reasoning – (CBR), broadly construed, is the process of solving new problems based on the solutions of similar past problems.
- Chatbot – (also known as a smartbot, talkbot, chatterbot, bot, IM bot, interactive agent, conversational interface, or artificial conversational entity) is a computer program or an artificial intelligence which conducts a conversation via auditory or textual methods.
- Cloud robotics – is a field of robotics that attempts to invoke cloud technologies such as cloud computing, cloud storage, and other Internet technologies centred on the benefits of converged infrastructure and shared services for robotics. When connected to the cloud, robots can benefit from the powerful computation, storage, and communication resources of modern data centres, which can process and share information from various robots or agents (other machines, smart objects, humans, etc.). Humans can also delegate tasks to robots remotely through networks. Cloud computing technologies enable robot systems to be endowed with powerful capabilities whilst reducing costs. Thus, it is possible to build lightweight, low-cost, smarter robots that have an intelligent "brain" in the cloud. The "brain" consists of data centres, knowledge bases, task planners, deep learning, information processing, environment models, communication support, etc.
- Cluster analysis – or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense) to each other than to those in other groups (clusters). It is a main task of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, bioinformatics, data compression, and computer graphics.
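One widely used clustering technique is k-means; the following is a minimal pure-Python sketch (1-D points, a fixed number of iterations, illustrative names only), not a production implementation:

```python
import random


def k_means(points, k, iterations=10):
    """Cluster 1-D points into k groups by alternating assignment and mean updates."""
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters


centroids, clusters = k_means([1.0, 1.2, 0.9, 8.0, 8.3, 7.9], k=2)
print(sorted(round(c, 2) for c in centroids))  # roughly [1.03, 8.07]
```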
- Cobweb – is an incremental system for hierarchical conceptual clustering. COBWEB was invented by Professor Douglas H. Fisher, currently at Vanderbilt University. COBWEB incrementally organizes observations into a classification tree. Each node in a classification tree represents a class (concept) and is labeled by a probabilistic concept that summarizes the attribute-value distributions of objects classified under the node. This classification tree can be used to predict missing attributes or the class of a new object.
- Cognitive architecture – The Institute of Creative Technologies defines cognitive architecture as: "hypothesis about the fixed structures that provide a mind, whether in natural or artificial systems, and how they work together – in conjunction with knowledge and skills embodied within the architecture – to yield intelligent behavior in a diversity of complex environments."
- Cognitive computing – In general, the term cognitive computing has been used to refer to new hardware and/or software that mimics the functioning of the human brain and helps to improve human decision-making. In this sense, CC is a new type of computing with the goal of more accurate models of how the human brain/mind senses, reasons, and responds to stimuli.
- Cognitive science – is the interdisciplinary, scientific study of the mind and its processes.
- Combinatorial optimization – In Operations Research, applied mathematics and theoretical computer science, combinatorial optimization is a topic that consists of finding an optimal object from a finite set of objects.
- Committee machine – is a type of artificial neural network using a divide and conquer strategy in which the responses of multiple neural networks (experts) are combined into a single response.[92] The combined response of the committee machine is supposed to be superior to those of its constituent experts. Compare with ensembles of classifiers.
- Commonsense knowledge – In artificial intelligence research, commonsense knowledge consists of facts about the everyday world, such as "Lemons are sour", that all humans are expected to know. The first AI program to address common sense knowledge was Advice Taker in 1959 by John McCarthy.
- Commonsense reasoning – is one of the branches of artificial intelligence that is concerned with simulating the human ability to make presumptions about the type and essence of ordinary situations they encounter every day.
- Computational chemistry – is a branch of chemistry that uses computer simulation to assist in solving chemical problems.
- Computational complexity theory – focuses on classifying computational problems according to their inherent difficulty, and relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by mechanical application of mathematical steps, such as an algorithm.
- Computational creativity – (also known as artificial creativity, mechanical creativity, creative computing or creative computation) is a multidisciplinary endeavour that includes the fields of artificial intelligence, cognitive psychology, philosophy, and the arts.
- Computational cybernetics – is the integration of cybernetics and computational intelligence techniques.
- Computational humor – is a branch of computational linguistics and artificial intelligence which uses computers in humor research.
- Computational intelligence – (CI), usually refers to the ability of a computer to learn a specific task from data or experimental observation.
- Computational learning theory – In computer science, computational learning theory (or just learning theory) is a subfield of artificial intelligence devoted to studying the design and analysis of machine learning algorithms.
- Computational linguistics – is an interdisciplinary field concerned with the statistical or rule-based modeling of natural language from a computational perspective, as well as the study of appropriate computational approaches to linguistic questions.
- Computational mathematics – the mathematical research in areas of science where computing plays an essential role.
- Computational neuroscience – (also known as theoretical neuroscience or mathematical neuroscience) is a branch of neuroscience which employs mathematical models, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system.
- Computational number theory – also known as algorithmic number theory, it is the study of algorithms for performing number theoretic computations.
- Computational problem – In theoretical computer science, a computational problem is a mathematical object representing a collection of questions that computers might be able to solve.
- Computational statistics – or statistical computing, is the interface between statistics and computer science.
- Computer-automated design – Design automation usually refers to electronic design automation or to design automation as a product configurator. Extending computer-aided design (CAD), automated design and computer-automated design (CAutoD) are more concerned with a broader range of applications, such as automotive engineering, civil engineering, composite material design, control engineering, dynamic system identification and optimization, financial systems, industrial equipment, mechatronic systems, steel construction, structural optimisation, and the invention of novel systems. More recently, traditional CAD simulation has been seen to be transformed into CAutoD by biologically inspired machine learning, including heuristic search techniques such as evolutionary computation and swarm intelligence algorithms.
- Computer science – is the theory, experimentation, and engineering that form the basis for the design and use of computers. It involves the study of algorithms that process, store, and communicate digital information. A computer scientist specializes in the theory of computation and the design of computational systems.
- Computer vision – is an interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do.
- Concept drift – In predictive analytics and machine learning, concept drift means that the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes.
- Connectionism – is an approach in the field of cognitive science that hopes to explain mental phenomena using artificial neural networks (ANNs).
- Consistent heuristic – In the study of path-finding problems in artificial intelligence, a heuristic function is said to be consistent, or monotone, if its estimate is always less than or equal to the estimated distance from any neighboring vertex to the goal, plus the cost of reaching that neighbor.
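In symbols (a minimal sketch, with h the heuristic, c(n, n′) the step cost from node n to a neighbouring node n′, and G a goal node):

```latex
h(n) \le c(n, n') + h(n') \quad \text{for every neighbour } n' \text{ of } n, \qquad h(G) = 0
```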
- Constrained conditional model – (CCM), is a machine learning and inference framework that augments the learning of conditional (probabilistic or discriminative) models with declarative constraints.
- Constraint logic programming – is a form of constraint programming, in which logic programming is extended to include concepts from constraint satisfaction. A constraint logic program is a logic program that contains constraints in the body of clauses. An example of a clause including a constraint is A(X,Y) :- X+Y>0, B(X), C(Y). In this clause, X+Y>0 is a constraint; A(X,Y), B(X), and C(Y) are literals as in regular logic programming. This clause states one condition under which the statement A(X,Y) holds: X+Y is greater than zero and both B(X) and C(Y) are true.
- Constraint programming – is a programming paradigm wherein relations between variables are stated in the form of constraints. Constraints differ from the common primitives of imperative programming languages in that they do not specify a step or sequence of steps to execute, but rather the properties of a solution to be found.
- Constructed language – (sometimes called a conlang) is a language whose phonology, grammar, and vocabulary are, instead of having developed naturally, consciously devised. Constructed languages may also be referred to as artificial, planned or invented languages.
- Control theory – in control systems engineering is a subfield of mathematics that deals with the control of continuously operating dynamical systems in engineered processes and machines. The objective is to develop a control model for controlling such systems using a control action in an optimum manner without delay or overshoot and ensuring control stability.
- Convolutional neural network – In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery. CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics.
- Crossover – In genetic algorithms and evolutionary computation, crossover, also called recombination, is a genetic operator used to combine the genetic information of two parents to generate new offspring. It is one way to stochastically generate new solutions from an existing population, and analogous to the crossover that happens during sexual reproduction in biology. Solutions can also be generated by cloning an existing solution, which is analogous to asexual reproduction. Newly generated solutions are typically mutated before being added to the population.
- Darkforest – is a computer Go program developed by Facebook, based on deep learning techniques using a convolutional neural network. Its updated version Darkfores2 combines the techniques of its predecessor with Monte Carlo tree search. The MCTS effectively takes tree search methods commonly seen in computer chess programs and randomizes them. With the update, the system is known as Darkfmcts3.
- Dartmouth workshop – The Dartmouth Summer Research Project on Artificial Intelligence was the name of a 1956 summer workshop now considered by many (though not all) to be the seminal event for artificial intelligence as a field.
- Data fusion – is the process of integrating multiple data sources to produce more consistent, accurate, and useful information than that provided by any individual data source.
- Data integration – involves combining data residing in different sources and providing users with a unified view of them. This process becomes significant in a variety of situations, which include both commercial (such as when two similar companies need to merge their databases) and scientific (combining research results from different bioinformatics repositories, for example) domains. Data integration appears with increasing frequency as the volume (that is, big data) and the need to share existing data explodes. It has become the focus of extensive theoretical work, and numerous open problems remain unsolved.
- Data mining – is the process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems.
- Data science – is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from data in various forms, both structured and unstructured, similar to data mining. Data science is a "concept to unify statistics, data analysis, machine learning and their related methods" in order to "understand and analyze actual phenomena" with data. It employs techniques and theories drawn from many fields within the context of mathematics, statistics, information science, and computer science.
- Data set – (or dataset) is a collection of data. Most commonly a data set corresponds to the contents of a single database table, or a single statistical data matrix, where every column of the table represents a particular variable, and each row corresponds to a given member of the data set in question. The data set lists values for each of the variables, such as height and weight of an object, for each member of the data set. Each value is known as a datum. The data set may comprise data for one or more members, corresponding to the number of rows.
- Data warehouse – (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis. DWs are central repositories of integrated data from one or more disparate sources. They store current and historical data in one single place.
- Datalog – is a declarative logic programming language that syntactically is a subset of Prolog. It is often used as a query language for deductive databases. In recent years, Datalog has found new application in data integration, information extraction, networking, program analysis, security, and cloud computing.
- Decision boundary – In the case of backpropagation-based artificial neural networks or perceptrons, the type of decision boundary that the network can learn is determined by the number of hidden layers the network has. If it has no hidden layers, then it can only learn linear problems. If it has one hidden layer, then it can learn any continuous function on compact subsets of R^n, as shown by the universal approximation theorem, and thus it can have an arbitrary decision boundary.
- Decision support system – (DSS), is an information system that supports business or organizational decision-making activities. DSSs serve the management, operations and planning levels of an organization (usually mid and higher management) and help people make decisions about problems that may be rapidly changing and not easily specified in advance—i.e. unstructured and semi-structured decision problems. Decision support systems can be either fully computerized or human-powered, or a combination of both.
- Decision theory – (or the theory of choice) is the study of the reasoning underlying an agent's choices. Decision theory can be broken into two branches: normative decision theory, which gives advice on how to make the best decisions given a set of uncertain beliefs and a set of values, and descriptive decision theory which analyzes how existing, possibly irrational agents actually make decisions.
- Decision tree learning – uses a decision tree (as a predictive model) to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining and machine learning.
- Declarative programming – is a programming paradigm—a style of building the structure and elements of computer programs—that expresses the logic of a computation without describing its control flow.
- Deductive classifier – is a type of artificial intelligence inference engine. It takes as input a set of declarations in a frame language about a domain such as medical research or molecular biology. For example, the names of classes, sub-classes, properties, and restrictions on allowable values.
- Deep Blue – was a chess-playing computer developed by IBM. It is known for being the first computer chess-playing system to win both a chess game and a chess match against a reigning world champion under regular time controls.
- Deep learning – (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised.
- DeepMind – DeepMind Technologies is a British artificial intelligence company founded in September 2010, currently owned by Alphabet Inc. The company is based in London, with research centres in Canada, France, and the United States. Acquired by Google in 2014, the company has created a neural network that learns how to play video games in a fashion similar to that of humans, as well as a Neural Turing machine, or a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain. The company made headlines in 2016 after its AlphaGo program beat a human professional Go player Lee Sedol, the world champion, in a five-game match, which was the subject of a documentary film. A more general program, AlphaZero, beat the most powerful programs playing go, chess and shogi (Japanese chess) after a few days of play against itself using reinforcement learning.
- Default logic – is a non-monotonic logic proposed by Raymond Reiter to formalize reasoning with default assumptions.
- Description logic – Description logics (DL) are a family of formal knowledge representation languages. Many DLs are more expressive than propositional logic but less expressive than first-order logic. In contrast to the latter, the core reasoning problems for DLs are (usually) decidable, and efficient decision procedures have been designed and implemented for these problems. There are general, spatial, temporal, spatiotemporal, and fuzzy description logics, and each description logic features a different balance between expressivity and reasoning complexity by supporting different sets of mathematical constructors.
- Developmental robotics – (DevRob), sometimes called epigenetic robotics, is a scientific field which aims at studying the developmental mechanisms, architectures and constraints that allow lifelong and open-ended learning of new skills and new knowledge in embodied machines.
- Diagnosis – is concerned with the development of algorithms and techniques that are able to determine whether the behaviour of a system is correct. If the system is not functioning correctly, the algorithm should be able to determine, as accurately as possible, which part of the system is failing, and which kind of fault it is facing. The computation is based on observations, which provide information on the current behaviour.
- Dialogue system – or conversational agent (CA), is a computer system intended to converse with a human with a coherent structure. Dialogue systems have employed text, speech, graphics, haptics, gestures, and other modes for communication on both the input and output channel.
- Dimensionality reduction – or dimension reduction, is the process of reducing the number of random variables under consideration by obtaining a set of principal variables. It can be divided into feature selection and feature extraction.
- Discrete system – is a system with a countable number of states. Discrete systems may be contrasted with continuous systems, which may also be called analog systems. A finite discrete system is often modeled with a directed graph and is analyzed for correctness and complexity according to computational theory. Because discrete systems have a countable number of states, they may be described in precise mathematical models. A computer is a finite-state machine that may be viewed as a discrete system. Because computers are often used to model not only other discrete systems but continuous systems as well, methods have been developed to represent real-world continuous systems as discrete systems. One such method involves sampling a continuous signal at discrete time intervals.
- Distributed artificial intelligence – (DAI), also called Decentralized Artificial Intelligence, is a subfield of artificial intelligence research dedicated to the development of distributed solutions for problems. DAI is closely related to and a predecessor of the field of multi-agent systems.
- Dynamic epistemic logic – (DEL), is a logical framework dealing with knowledge and information change. Typically, DEL focuses on situations involving multiple agents and studies how their knowledge changes when events occur.
- Eager learning – is a learning method in which the system tries to construct a general, input-independent target function during training of the system, as opposed to lazy learning, where generalization beyond the training data is delayed until a query is made to the system.
- Ebert test – gauges whether a computer-based synthesized voice can tell a joke with sufficient skill to cause people to laugh. It was proposed by film critic Roger Ebert at the 2011 TED conference as a challenge to software developers to have a computerized voice master the inflections, delivery, timing, and intonations of a speaking human. The test is similar to the Turing test proposed by Alan Turing in 1950 as a way to gauge a computer's ability to exhibit intelligent behavior by generating performance indistinguishable from a human being.
- Echo state network – The echo state network (ESN), is a recurrent neural network with a sparsely connected hidden layer (with typically 1% connectivity). The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can (re)produce specific temporal patterns. The main interest of this network is that although its behaviour is non-linear, the only weights that are modified during training are for the synapses that connect the hidden neurons to output neurons. Thus, the error function is quadratic with respect to the parameter vector and can be differentiated easily to a linear system.
- Embodied agent – also sometimes referred to as an interface agent, is an intelligent agent that interacts with the environment through a physical body within that environment. Agents that are represented graphically with a body, for example a human or a cartoon animal, are also called embodied agents, although they have only virtual, not physical, embodiment.
- Embodied cognitive science – is an interdisciplinary field of research, the aim of which is to explain the mechanisms underlying intelligent behavior. It comprises three main methodologies: 1) the modeling of psychological and biological systems in a holistic manner that considers the mind and body as a single entity, 2) the formation of a common set of general principles of intelligent behavior, and 3) the experimental use of robotic agents in controlled environments.
- Error-driven learning – is a sub-area of machine learning concerned with how an agent ought to take actions in an environment so as to minimize some error feedback. It is a type of reinforcement learning.
- Ensemble averaging – In machine learning, particularly in the creation of artificial neural networks, ensemble averaging is the process of creating multiple models and combining them to produce a desired output, as opposed to creating just one model.
- Ethics of artificial intelligence – is the part of the ethics of technology specific to artificial intelligence.
- Evolutionary algorithm – (EA), is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions. Evolution of the population then takes place after the repeated application of the above operators.
- Evolutionary computation – is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character.
- Evolving classification function – (ECF), evolving classifier functions or evolving classifiers are used for classifying and clustering in the field of machine learning and artificial intelligence, typically employed for data stream mining tasks in dynamic and changing environments.
- Existential risk – is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe.
- Expert system – is a computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code.
- Fast-and-frugal trees – a type of classification tree. Fast-and-frugal trees can be used as decision-making tools which operate as lexicographic classifiers, and, if required, associate an action (decision) to each class or category.
- Feature extraction – In machine learning, pattern recognition and in image processing, feature extraction starts from an initial set of measured data and builds derived values (features) intended to be informative and non-redundant, facilitating the subsequent learning and generalization steps, and in some cases leading to better human interpretations.
- Feature learning – In machine learning, feature learning or representation learning is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task.
- Feature selection – In machine learning and statistics, feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction.
- Federated learning – a type of machine learning that allows for training on multiple devices with decentralized data, thus helping preserve the privacy of individual users and their data.
- First-order logic (also known as first-order predicate calculus and predicate logic) – a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions of the form "there exists X such that X is Socrates and X is a man", where "there exists" is a quantifier and X is a variable. This distinguishes it from propositional logic, which does not use quantifiers or relations.
- Fluent – a condition that can change over time. In logical approaches to reasoning about actions, fluents can be represented in first-order logic by predicates having an argument that depends on time.
- Formal language – a set of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules.
- Forward chaining – (or forward reasoning) is one of the two main methods of reasoning when using an inference engine and can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business and production rule systems. The opposite of forward chaining is backward chaining. Forward chaining starts with the available data and uses inference rules to extract more data (from an end user, for example) until a goal is reached. An inference engine using forward chaining searches the inference rules until it finds one where the antecedent (If clause) is known to be true. When such a rule is found, the engine can conclude, or infer, the consequent (Then clause), resulting in the addition of new information to its data.
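A minimal sketch of the forward-chaining loop in Python, with rules written as (antecedents, consequent) pairs over string facts; the rule base and fact names here are invented purely for illustration:

```python
def forward_chain(facts, rules):
    """Repeatedly fire any rule whose antecedents are all known, until nothing new is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in facts and all(a in facts for a in antecedents):
                facts.add(consequent)   # infer the Then-clause (repeated modus ponens)
                changed = True
    return facts


rules = [({"croaks", "eats flies"}, "is a frog"),
         ({"is a frog"}, "is green")]
print(forward_chain({"croaks", "eats flies"}, rules))
# {'croaks', 'eats flies', 'is a frog', 'is green'}
```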
- Frame – an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations." Frames are the primary data structure used in artificial intelligence frame language.
- Frame language – a technology used for knowledge representation in artificial intelligence. Frames are stored as ontologies of sets and subsets of the frame concepts. They are similar to class hierarchies in object-oriented languages although their fundamental design goals are different. Frames are focused on explicit and intuitive representation of knowledge whereas objects focus on encapsulation and information hiding. Frames originated in AI research and objects primarily in software engineering. However, in practice the techniques and capabilities of frame and object-oriented languages overlap significantly.
- Frame problem – is the problem of finding adequate collections of axioms for a viable description of a robot environment.
- Friendly artificial intelligence (also friendly AI or FAI) – a hypothetical artificial general intelligence (AGI) that would have a positive effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained.
- Futures studies – is the study of postulating possible, probable, and preferable futures and the worldviews and myths that underlie them.
- Fuzzy control system – a control system based on fuzzy logic—a mathematical system that analyzes analog input values in terms of logical variables that take on continuous values between 0 and 1, in contrast to classical or digital logic, which operates on discrete values of either 1 or 0 (true or false, respectively).
- Fuzzy logic – a simple form of many-valued logic in which the truth values of variables may have any degree of "truthfulness", represented by any real number between 0 (completely false) and 1 (completely true), inclusive. Consequently, it is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false. This contrasts with Boolean logic, where the truth values of variables may only take the integer values 0 or 1.
- Fuzzy rule – Fuzzy rules are used within fuzzy logic systems to infer an output based on input variables.
- Fuzzy set – In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition — an element either belongs or does not belong to the set. By contrast, fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1]. Fuzzy sets generalize classical sets, since the indicator functions (aka characteristic functions) of classical sets are special cases of the membership functions of fuzzy sets, if the latter only take values 0 or 1. In fuzzy set theory, classical bivalent sets are usually called crisp sets. The fuzzy set theory can be used in a wide range of domains in which information is incomplete or imprecise, such as bioinformatics.
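As a sketch of the contrast between crisp and fuzzy membership, here is an illustrative triangular membership function for a made-up fuzzy set "warm" over temperatures; the breakpoints are arbitrary, chosen only for the example:

```python
def warm(temp_c):
    """Degree of membership in the fuzzy set 'warm', peaking at 22 °C (illustrative values)."""
    if temp_c <= 10 or temp_c >= 34:
        return 0.0                     # clearly outside the set
    if temp_c <= 22:
        return (temp_c - 10) / 12      # rising edge: 10 °C -> 0.0, 22 °C -> 1.0
    return (34 - temp_c) / 12          # falling edge: 22 °C -> 1.0, 34 °C -> 0.0


for t in (5, 16, 22, 30):
    print(t, round(warm(t), 2))        # 0.0, 0.5, 1.0, 0.33
```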
- Game theory – is the study of mathematical models of strategic interaction between rational decision-makers.
- Generative adversarial network – (GAN), is a class of machine learning systems. Two neural networks contest with each other in a zero-sum game framework.
- Genetic algorithm – (GA), is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on bio-inspired operators such as mutation, crossover and selection.
- Genetic operator – is an operator used in genetic algorithms to guide the algorithm towards a solution to a given problem. There are three main types of operators (mutation, crossover and selection), which must work in conjunction with one another in order for the algorithm to be successful.
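A compact sketch tying the two entries above together: a toy genetic algorithm maximizing the number of 1-bits in a bitstring ("OneMax"), using tournament selection, single-point crossover, and bit-flip mutation. All names and parameter values are illustrative assumptions, not a reference implementation.

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 40, 0.02


def fitness(genome):
    return sum(genome)                       # OneMax: count the 1-bits


def tournament(population):
    return max(random.sample(population, 3), key=fitness)    # selection


def crossover(a, b):
    point = random.randint(1, GENOME_LEN - 1)                 # recombination
    return a[:point] + b[point:]


def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]


population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]

print(max(fitness(g) for g in population))   # typically close to GENOME_LEN
```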
- Glowworm swarm optimization – is a swarm intelligence optimization algorithm developed based on the behaviour of glowworms (also known as fireflies or lightning bugs).
- Graph (abstract data type) – In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from mathematics; specifically, the field of graph theory.
- Graph (discrete mathematics) – In mathematics, and more specifically in graph theory, a graph is a structure amounting to a set of objects in which some pairs of the objects are in some sense "related". The objects correspond to mathematical abstractions called vertices (also called nodes or points) and each of the related pairs of vertices is called an edge (also called an arc or line).
- Graph database – (GDB), is a database that uses graph structures for semantic queries with nodes, edges and properties to represent and store data. A key concept of the system is the graph (or edge or relationship), which directly relates data items in the store: a collection of nodes of data, with edges representing the relationships between the nodes. The relationships allow data in the store to be linked together directly, and in many cases retrieved with one operation. Graph databases hold the relationships between data as a priority. Querying relationships within a graph database is fast because they are perpetually stored within the database itself. Relationships can be intuitively visualized using graph databases, making them useful for heavily interconnected data.
- Graph theory – is the study of graphs, which are mathematical structures used to model pairwise relations between objects.
- Graph traversal – (also known as graph search) refers to the process of visiting (checking and/or updating) each vertex in a graph. Such traversals are classified by the order in which the vertices are visited. Tree traversal is a special case of graph traversal.
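A minimal breadth-first traversal sketch in Python over an adjacency-list graph (the example graph itself is invented for illustration):

```python
from collections import deque


def bfs(graph, start):
    """Visit vertices in breadth-first order, returning them in the order first seen."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        order.append(vertex)
        for neighbour in graph.get(vertex, []):
            if neighbour not in visited:   # each vertex is enqueued at most once
                visited.add(neighbour)
                queue.append(neighbour)
    return order


graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))   # ['A', 'B', 'C', 'D']
```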
- Heuristic – is a technique designed for solving a problem more quickly when classic methods are too slow, or for finding an approximate solution when classic methods fail to find any exact solution. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut. A heuristic function, also called simply a heuristic, is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow. For example, it may approximate the exact solution.
- Hidden layer – an internal layer of neurons in an artificial neural network, not dedicated to input or output.
- Hidden unit – a neuron in a hidden layer in an artificial neural network.
- Hyper-heuristic – is a heuristic search method that seeks to automate, often by the incorporation of machine learning techniques, the process of selecting, combining, generating or adapting several simpler heuristics (or components of such heuristics) to efficiently solve computational search problems. One of the motivations for studying hyper-heuristics is to build systems which can handle classes of problems rather than solving just one problem.
- IEEE Computational Intelligence Society – is a professional society of the Institute of Electrical and Electronics Engineers (IEEE) focussing on "the theory, design, application, and development of biologically and linguistically motivated computational paradigms emphasizing neural networks, connectionist systems, genetic algorithms, evolutionary programming, fuzzy systems, and hybrid intelligent systems in which these paradigms are contained".
- Incremental learning – is a method of machine learning in which input data is continuously used to extend the existing model's knowledge, i.e. to further train the model. It represents a dynamic technique of supervised learning and unsupervised learning that can be applied when training data becomes available gradually over time or when its size exceeds system memory limits. Algorithms that can facilitate incremental learning are known as incremental machine learning algorithms.
- Inference engine – is a component of the system that applies logical rules to the knowledge base to deduce new information.
- Information integration – (II), is the merging of information from heterogeneous sources with differing conceptual, contextual and typographical representations. It is used in data mining and consolidation of data from unstructured or semi-structured resources. Typically, information integration refers to textual representations of knowledge but is sometimes applied to rich-media content. Information fusion, which is a related term, involves the combination of information into a new set of information towards reducing redundancy and uncertainty.
- Information Processing Language – (IPL), is a programming language that includes features intended to help with programs that perform simple problem solving actions such as lists, dynamic memory allocation, data types, recursion, functions as arguments, generators, and cooperative multitasking. IPL invented the concept of list processing, albeit in an assembly-language style.
- Intelligence amplification – (IA), (also referred to as cognitive augmentation, machine augmented intelligence and enhanced intelligence), refers to the effective use of information technology in augmenting human intelligence.
- Intelligence explosion – is a possible outcome of humanity building artificial general intelligence (AGI). AGI would be capable of recursive self-improvement leading to rapid emergence of ASI (artificial superintelligence), the limits of which are unknown, at the time of the technological singularity.
- Intelligent agent – (IA), is an autonomous entity which acts, directing its activity towards achieving goals (i.e. it is an agent), upon an environment using observation through sensors and consequent actuators (i.e. it is intelligent). Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex.
- Intelligent control – is a class of control techniques that use various artificial intelligence computing approaches like neural networks, Bayesian probability, fuzzy logic, machine learning, reinforcement learning, evolutionary computation and genetic algorithms.
- Intelligent personal assistant – A virtual assistant or intelligent personal assistant is a software agent that can perform tasks or services for an individual based on verbal commands. Sometimes the term "chatbot" is used to refer to virtual assistants generally or specifically accessed by online chat (or in some cases online chat programs that are exclusively for entertainment purposes). Some virtual assistants are able to interpret human speech and respond via synthesized voices. Users can ask their assistants questions, control home automation devices and media playback via voice, and manage other basic tasks such as email, to-do lists, and calendars with verbal commands.
- Interpretation (logic) – is an assignment of meaning to the symbols of a formal language. Many formal languages used in mathematics, logic, and theoretical computer science are defined in solely syntactic terms, and as such do not have any meaning until they are given some interpretation. The general study of interpretations of formal languages is called formal semantics.
- Issue tree – Also called logic tree, is a graphical breakdown of a question that dissects it into its different components vertically and that progresses into details as it reads to the right. Issue trees are useful in problem solving to identify the root causes of a problem as well as to identify its potential solutions. They also provide a reference point to see how each piece fits into the whole picture of a problem.
- Junction tree algorithm – (also known as 'Clique Tree') is a method used in machine learning to extract marginalization in general graphs. In essence, it entails performing belief propagation on a modified graph called a junction tree. The graph is called a tree because it branches into different sections of data; nodes of variables are the branches.
- Kernel method – In machine learning, kernel methods are a class of algorithms for pattern analysis, whose best known member is the support vector machine (SVM). The general task of pattern analysis is to find and study general types of relations (for example clusters, rankings, principal components, correlations, classifications) in datasets.
- KL-ONE – is a well known knowledge representation system in the tradition of semantic networks and frames; that is, it is a frame language. The system is an attempt to overcome semantic indistinctness in semantic network representations and to explicitly represent conceptual information as a structured inheritance network.
- Knowledge acquisition – is the process used to define the rules and ontologies required for a knowledge-based system. The phrase was first used in conjunction with expert systems to describe the initial tasks associated with developing an expert system, namely finding and interviewing domain experts and capturing their knowledge via rules, objects, and frame-based ontologies.
- Knowledge-based systems – A knowledge-based system (KBS) is a computer program that reasons and uses a knowledge base to solve complex problems. The term is broad and refers to many different kinds of systems. The one common theme that unites all knowledge based systems is an attempt to represent knowledge explicitly and a reasoning system that allows it to derive new knowledge. Thus, a knowledge-based system has two distinguishing features: a knowledge base and an inference engine.
- Knowledge engineering – (KE) refers to all technical, scientific and social aspects involved in building, maintaining and using knowledge-based systems.
- Knowledge extraction – is the creation of knowledge from structured (relational databases, XML) and unstructured (text, documents, images) sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing. Although it is methodically similar to information extraction (NLP) and ETL (data warehouse), the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into a relational schema. It requires either the reuse of existing formal knowledge (reusing identifiers or ontologies) or the generation of a schema based on the source data.
- Knowledge Interchange Format – (KIF) is a computer language designed to enable systems to share and re-use information from knowledge-based systems. KIF is similar to frame languages such as KL-ONE and LOOM, but unlike such languages its primary role is not as a framework for the expression or use of knowledge but rather for the interchange of knowledge between systems. The designers of KIF likened it to PostScript. PostScript was not designed primarily as a language to store and manipulate documents, but rather as an interchange format for systems and devices to share documents. In the same way, KIF is meant to facilitate the sharing of knowledge across different systems that use different languages, formalisms, platforms, etc.
- Knowledge representation and reasoning – (KR², KR&R) is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets. Examples of knowledge representation formalisms include semantic nets, systems architecture, frames, rules, and ontologies. Examples of automated reasoning engines include inference engines, theorem provers, and classifiers.
- Lazy learning – In machine learning, lazy learning is a learning method in which generalization of the training data is, in theory, delayed until a query is made to the system, as opposed to in eager learning, where the system tries to generalize the training data before receiving queries.
- Lisp (programming language) – (historically LISP), is a family of computer programming languages with a long history and a distinctive, fully parenthesized prefix notation.
- Logic programming – is a type of programming paradigm which is largely based on formal logic. Any program written in a logic programming language is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include Prolog, Answer set programming (ASP) and Datalog.
- Long short-term memory – (LSTM), is an artificial recurrent neural network (RNN) architecture used in the field of deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections that make it a "general purpose computer" (that is, it can compute anything that a Turing machine can). It can not only process single data points (such as images), but also entire sequences of data (such as speech or video).
- Machine vision – (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance, usually in industry. Machine vision is a term encompassing a large number of technologies, software and hardware products, integrated systems, actions, methods and expertise. Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of computer science. It attempts to integrate existing technologies in new ways and apply them to solve real world problems. The term is the prevalent one for these functions in industrial automation environments but is also used for these functions in other environments such as security and vehicle guidance.
- Markov chain – is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event.
- Markov decision process – (MDP) is a discrete time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for studying optimization problems solved via dynamic programming and reinforcement learning.
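A hedged sketch of value iteration (the dynamic-programming approach mentioned above) for a tiny, made-up two-state MDP; the states, actions, transition probabilities, and rewards are all invented for illustration:

```python
# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)], "go": [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)], "go": [(1.0, "s0", 0.0)]},
}
gamma = 0.9                        # discount factor
V = {s: 0.0 for s in transitions}  # initial value estimates

for _ in range(100):               # repeated Bellman backups
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in transitions[s].values())
         for s in transitions}

print({s: round(v, 2) for s, v in V.items()})  # approximate optimal state values
```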
- Mathematical optimization – In mathematics, computer science and operations research, mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element (with regard to some criterion) from some set of available alternatives.
- Machine learning – (ML) is the scientific study of algorithms and statistical models that computer systems use in order to perform a specific task effectively without using explicit instructions, relying on patterns and inference instead.
- Machine listening – Computer audition (CA) or machine listening is the general field of study of algorithms and systems for audio understanding by machine.
- Machine perception – is the capability of a computer system to interpret data in a manner that is similar to the way humans use their senses to relate to the world around them.
- Mechanism design – is a field in economics and game theory that takes an engineering approach to designing economic mechanisms or incentives, toward desired objectives, in strategic settings, where players act rationally. Because it starts at the end of the game, then goes backwards, it is also called reverse game theory. It has broad applications, from economics and politics (markets, auctions, voting procedures) to networked-systems (internet interdomain routing, sponsored search auctions).
- Mechatronics – which is also called mechatronic engineering, is a multidisciplinary branch of engineering that focuses on the engineering of both electrical and mechanical systems, and also includes a combination of robotics, electronics, computer, telecommunications, systems, control, and product engineering.
- Metabolic network reconstruction and simulation – allows for an in-depth insight into the molecular mechanisms of a particular organism. In particular, these models correlate the genome with molecular physiology.
- Metaheuristic – In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a heuristic (partial search algorithm) that may provide a sufficiently good solution to an optimization problem, especially with incomplete or imperfect information or limited computation capacity. Metaheuristics sample a set of solutions which is too large to be completely sampled.
- Model checking – In computer science, model checking or property checking is, for a given model of a system, exhaustively and automatically checking whether this model meets a given specification. Typically, one has hardware or software systems in mind, whereas the specification contains safety requirements such as the absence of deadlocks and similar critical states that can cause the system to crash. Model checking is a technique for automatically verifying correctness properties of finite-state systems.
- Modus ponens – In propositional logic, modus ponens is a rule of inference. It can be summarized as "P implies Q and P is asserted to be true, therefore Q must be true."
- Modus tollens – In propositional logic, modus tollens is a valid argument form and a rule of inference. It is an application of the general truth that if a statement is true, then so is its contrapositive. The inference rule modus tollens asserts that the inference from P implies Q to the negation of Q implies the negation of P is valid.
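The two inference rules above side by side, as a minimal sketch in standard notation:

```latex
\text{Modus ponens:}\quad \frac{P \rightarrow Q \qquad P}{Q}
\qquad\qquad
\text{Modus tollens:}\quad \frac{P \rightarrow Q \qquad \neg Q}{\neg P}
```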
- Monte Carlo tree search – In computer science, Monte Carlo tree search (MCTS) is a heuristic search algorithm for some kinds of decision processes.
- Multi-agent system – (MAS or "self-organized system"), is a computerized system composed of multiple interacting intelligent agents. Multi-agent systems can solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. Intelligence may include methodic, functional, procedural approaches, algorithmic search or reinforcement learning.
- Multi-swarm optimization – is a variant of particle swarm optimization (PSO) based on the use of multiple sub-swarms instead of one (standard) swarm. The general approach in multi-swarm optimization is that each sub-swarm focuses on a specific region while a specific diversification method decides where and when to launch the sub-swarms. The multi-swarm framework is especially fitted for the optimization on multi-modal problems, where multiple (local) optima exist.
- Mutation – is a genetic operator used to maintain genetic diversity from one generation of a population of genetic algorithm chromosomes to the next. It is analogous to biological mutation. Mutation alters one or more gene values in a chromosome from its initial state. In mutation, the solution may change entirely from the previous solution. Hence GA can come to a better solution by using mutation. Mutation occurs during evolution according to a user-definable mutation probability. This probability should be set low. If it is set too high, the search will turn into a primitive random search.
- Mycin – was an early backward-chaining expert system that used artificial intelligence to identify bacteria causing severe infections, such as bacteremia and meningitis, and to recommend antibiotics, with the dosage adjusted for the patient's body weight – the name was derived from the antibiotics themselves, as many antibiotics have the suffix "-mycin". The MYCIN system was also used for the diagnosis of blood clotting diseases.
- Naive Bayes classifier – In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features.
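A minimal sketch of a naive Bayes text classifier in Python with Laplace smoothing; the tiny training corpus and the labels are invented solely for illustration:

```python
import math
from collections import Counter, defaultdict


def train(examples):
    """examples: list of (list_of_words, label). Returns class priors and per-class word counts."""
    class_counts, word_counts, vocab = Counter(), defaultdict(Counter), set()
    for words, label in examples:
        class_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab


def predict(words, class_counts, word_counts, vocab):
    """Pick the class maximizing log P(class) + sum log P(word | class), assuming independence."""
    total = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:
            score += math.log((word_counts[label][w] + 1) / denom)   # Laplace smoothing
        if score > best_score:
            best_label, best_score = label, score
    return best_label


data = [("buy cheap pills now".split(), "spam"),
        ("cheap pills offer".split(), "spam"),
        ("meeting agenda for monday".split(), "ham"),
        ("monday project meeting".split(), "ham")]
model = train(data)
print(predict("cheap pills".split(), *model))              # spam
print(predict("project meeting monday".split(), *model))   # ham
```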
- Naive semantics – is an approach used in computer science for representing basic knowledge about a specific domain, and has been used in applications such as the representation of the meaning of natural language sentences in artificial intelligence applications. In a general setting the term has been used to refer to the use of a limited store of generally understood knowledge about a specific domain in the world, and has been applied to fields such as the knowledge based design of data schemas.
- Name binding – In programming languages, name binding is the association of entities (data and/or code) with identifiers. An identifier bound to an object is said to reference that object. Machine languages have no built-in notion of identifiers, but name–object bindings as a service and notation for the programmer are implemented by programming languages. Binding is intimately connected with scoping, as scope determines which names bind to which objects – at which locations in the program code (lexically) and in which of the possible execution paths (temporally). Use of an identifier id in a context that establishes a binding for id is called a binding (or defining) occurrence. In all other occurrences (e.g., in expressions, assignments, and subprogram calls), an identifier stands for what it is bound to; such occurrences are called applied occurrences.
- Named-entity recognition – (NER), (also known as entity identification, entity chunking and entity extraction) is a subtask of information extraction that seeks to locate and classify named entity mentions in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc.
- Named graph – Named graphs are a key concept of Semantic Web architecture in which a set of Resource Description Framework statements (a graph) are identified using a URI, allowing descriptions to be made of that set of statements such as context, provenance information or other such metadata. Named graphs are a simple extension of the RDF data model through which graphs can be created but the model lacks an effective means of distinguishing between them once published on the Web at large.
- Natural language generation – (NLG), is a software process that transforms structured data into plain-English content. It can be used to produce long-form content for organizations to automate custom reports, as well as produce custom content for a web or mobile application. It can also be used to generate short blurbs of text in interactive conversations (a chatbot) which might even be read out loud by a text-to-speech system.
- Natural language processing – (NLP), is a subfield of computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.
- Natural language programming – is an ontology-assisted way of programming in terms of natural-language sentences, e.g. English.
- Network motif – All networks, including biological networks, social networks, technological networks (e.g., computer networks and electrical circuits) and more, can be represented as graphs, which include a wide variety of subgraphs. An important local property of networks is the network motif, defined as a recurrent and statistically significant sub-graph or pattern.
- Neural machine translation – (NMT), is an approach to machine translation that uses a large artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model.
- Neural Turing machine – (NTM), is a recurrent neural network model. NTMs combine the fuzzy pattern matching capabilities of neural networks with the algorithmic power of programmable computers. An NTM has a neural network controller coupled to external memory resources, which it interacts with through attentional mechanisms. The memory interactions are differentiable end-to-end, making it possible to optimize them using gradient descent. An NTM with a long short-term memory (LSTM) network controller can infer simple algorithms such as copying, sorting, and associative recall from examples alone.
- Neuro-fuzzy – refers to combinations of artificial neural networks and fuzzy logic.
- Neurocybernetics – A brain–computer interface (BCI), sometimes called a neural-control interface (NCI), mind-machine interface (MMI), direct neural interface (DNI), or brain–machine interface (BMI), is a direct communication pathway between an enhanced or wired brain and an external device. BCI differs from neuromodulation in that it allows for bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.
- Neuromorphic engineering – also known as neuromorphic computing, is a concept describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems (for perception, motor control, or multisensory integration). The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, and transistors.
- Node – is a basic unit of a data structure, such as a linked list or tree data structure. Nodes contain data and also may link to other nodes. Links between nodes are often implemented by pointers.
- Nondeterministic algorithm – is an algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm.
- Nouvelle AI – Nouvelle AI differs from classical AI by aiming to produce robots with intelligence levels similar to insects. Researchers believe that intelligence can emerge organically from simple behaviors as these intelligences interact with the "real world", instead of using the constructed worlds which symbolic AIs typically needed to have programmed into them.
- NP – In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time.
- NP-completeness – In computational complexity theory, a problem is NP-complete when it can be solved by a restricted class of brute force search algorithms and it can be used to simulate any other problem with a similar algorithm. More precisely, each input to the problem should be associated with a set of solutions of polynomial length, whose validity can be tested quickly (in polynomial time), such that the output for any input is "yes" if the solution set is non-empty and "no" if it is empty.
- NP-hardness – (non-deterministic polynomial-time hardness), in computational complexity theory, is the defining property of a class of problems that are, informally, "at least as hard as the hardest problems in NP". A simple example of an NP-hard problem is the subset sum problem.
- Occam's razor – (also Ockham's razor or Ocham's razor), is the problem-solving principle that, when presented with competing hypotheses that make the same predictions, one should select the solution with the fewest assumptions; it is not meant to filter out hypotheses that make different predictions. The idea is attributed to English Franciscan friar William of Ockham (c. 1287–1347), a scholastic philosopher and theologian.
- Offline learning – also known as batch learning, is machine learning in which the model is trained on the entire training data set at once, rather than being updated incrementally as new data arrive; contrast with online machine learning.
- Online machine learning – is a method of machine learning in which data becomes available in a sequential order and is used to update the best predictor for future data at each step, as opposed to batch learning techniques which generate the best predictor by learning on the entire training data set at once. Online learning is a common technique used in areas of machine learning where it is computationally infeasible to train over the entire dataset, requiring out-of-core algorithms. It is also used in situations where it is necessary for the algorithm to dynamically adapt to new patterns in the data, or when the data itself is generated as a function of time (a minimal sketch appears after this glossary).
- Ontology learning – (ontology extraction, ontology generation, or ontology acquisition), is the automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms and the relationships between the concepts that these terms represent from a corpus of natural language text, and encoding them with an ontology language for easy retrieval.
- OpenAI – is the for-profit corporation OpenAI LP, whose parent organization is the non-profit organization OpenAI Inc that conducts research in the field of artificial intelligence (AI) with the stated aim to promote and develop friendly AI in such a way as to benefit humanity as a whole.
- OpenCog – is a project that aims to build an open source artificial intelligence framework. OpenCog Prime is an architecture for robot and virtual embodied cognition that defines a set of interacting components designed to give rise to human-equivalent artificial general intelligence (AGI) as an emergent phenomenon of the whole system.
- Open Mind Common Sense – is an artificial intelligence project based at the Massachusetts Institute of Technology (MIT) Media Lab whose goal is to build and utilize a large commonsense knowledge base from the contributions of many thousands of people across the Web.
- Open-source software – (OSS), is a type of computer software in which source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose. Open-source software may be developed in a collaborative public manner. Open-source software is a prominent example of open collaboration.
- Partial order reduction – is a technique for reducing the size of the state-space to be searched by a model checking or automated planning and scheduling algorithm. It exploits the commutativity of concurrently executed transitions, which result in the same state when executed in different orders.
- Partially observable Markov decision process – (POMDP), is a generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state. Instead, it must maintain a probability distribution over the set of possible states, based on a set of observations and observation probabilities, and the underlying MDP.
- Particle swarm optimization – (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions (a minimal sketch appears after this glossary).
- Pathfinding – or pathing, is the plotting, by a computer application, of the shortest route between two points. It is a more practical variant on solving mazes. This field of research is based heavily on Dijkstra's algorithm for finding a shortest path on a weighted graph (a minimal sketch appears after this glossary).
- Pattern recognition – is concerned with the automatic discovery of regularities in data through the use of computer algorithms and with the use of these regularities to take actions such as classifying the data into different categories.
- Predicate logic – First-order logic—also known as predicate logic and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man", one can have expressions of the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier and x is a variable.[174] This distinguishes it from propositional logic, which does not use quantifiers or relations; in this sense, propositional logic is the foundation of first-order logic.
- Predictive analytics – encompasses a variety of statistical techniques from data mining, predictive modelling, and machine learning, that analyze current and historical facts to make predictions about future or otherwise unknown events.
- Principal component analysis – (PCA), is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables (entities each of which takes on various numerical values) into a set of values of linearly uncorrelated variables called principal components. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component, in turn, has the highest variance possible under the constraint that it is orthogonal to the preceding components. The resulting vectors (each being a linear combination of the variables and containing n observations) are an uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables (a minimal sketch appears after this glossary).
- Principle of rationality – (or rationality principle), was coined by Karl R. Popper in his Harvard Lecture of 1963, and published in his book The Myth of the Framework. It is related to what he called the 'logic of the situation' in an Economica article of 1944/1945, published later in his book The Poverty of Historicism. According to Popper's rationality principle, agents act in the most adequate way according to the objective situation. It is an idealized conception of human behavior which he used to drive his model of situational analysis.
- Probabilistic programming – (PP), is a programming paradigm in which probabilistic models are specified and inference for these models is performed automatically. It represents an attempt to unify probabilistic modeling and traditional general-purpose programming in order to make the former easier and more widely applicable. It can be used to create systems that help make decisions in the face of uncertainty. Programming languages used for probabilistic programming are referred to as "Probabilistic programming languages" (PPLs).
- Production system – is a computer program typically used to provide some form of artificial intelligence, which consists primarily of a set of rules about behavior but it also includes the mechanism necessary to follow those rules as the system responds to states of the world.
- Programming language – is a formal language, which comprises a set of instructions that produce various kinds of output. Programming languages are used in computer programming to implement algorithms.
- Prolog – is a logic programming language associated with artificial intelligence and computational linguistics. Prolog has its roots in first-order logic, a formal logic, and unlike many other programming languages, Prolog is intended primarily as a declarative programming language: the program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations.
- Propositional calculus – is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. It deals with propositions (which can be true or false) and argument flow. Compound propositions are formed by connecting propositions by logical connectives. The propositions without logical connectives are called atomic propositions. Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic.
- Python – is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.
- Qualification problem – In philosophy and artificial intelligence (especially knowledge-based systems), the qualification problem is concerned with the impossibility of listing all the preconditions required for a real-world action to have its intended effect. It might be posed as the question of how to deal with the things that prevent an agent from achieving its intended result. It is strongly connected to, and opposite the ramification side of, the frame problem.
- Quantifier – In logic, quantification specifies the quantity of specimens in the domain of discourse that satisfy an open formula. The two most common quantifiers mean "for all" and "there exists". For example, in arithmetic, quantifiers allow one to say that the natural numbers go on forever, by writing that for all n (where n is a natural number), there is another number (say, the successor of n) which is one bigger than n.
- Quantum computing – is the use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. A quantum computer is used to perform such computation, which can be implemented theoretically or physically.
- Query language – Query languages or data query languages (DQLs) are computer languages used to make queries in databases and information systems. Broadly, query languages can be classified according to whether they are database query languages or information retrieval query languages. The difference is that a database query language attempts to give factual answers to factual questions, while an information retrieval query language attempts to find documents containing information that is relevant to an area of inquiry.
- R programming language – is a programming language and free software environment for statistical computing and graphics supported by the R Foundation for Statistical Computing. The R language is widely used among statisticians and data miners for developing statistical software and data analysis.
- Radial basis function network – In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment.
- Random forest – Random forests, or random decision forests, are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Random decision forests correct for decision trees' habit of overfitting to their training set.
- Reasoning system – In information technology a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems.
- Recurrent neural network – (RNN), is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition.
- Region connection calculus – (RCC), is intended to serve for qualitative spatial representation and reasoning. RCC abstractly describes regions (in Euclidean space, or in a topological space) by their possible relations to each other.
- Reinforcement learning – (RL), is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning. It differs from supervised learning in that labelled input/output pairs need not be presented, and sub-optimal actions need not be explicitly corrected. Instead the focus is finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).
- Reservoir computing – is a framework for computation that may be viewed as an extension of neural networks. Typically an input signal is fed into a fixed (random) dynamical system called a reservoir and the dynamics of the reservoir map the input to a higher dimension. Then a simple readout mechanism is trained to read the state of the reservoir and map it to the desired output. The main benefit is that training is performed only at the readout stage and the reservoir is fixed. Liquid-state machines and echo state networks are two major types of reservoir computing.
- Resource Description Framework – (RDF), is a family of World Wide Web Consortium (W3C) specifications originally designed as a metadata data model. It has come to be used as a general method for conceptual description or modeling of information that is implemented in web resources, using a variety of syntax notations and data serialization formats. It is also used in knowledge management applications.
- Restricted Boltzmann machine – (RBM), is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs.
- Rete algorithm – is a pattern matching algorithm for implementing rule-based systems. The algorithm was developed to efficiently apply many rules or patterns to many objects, or facts, in a knowledge base. It is used to determine which of the system's rules should fire based on its data store, its facts.
- Robotics – is an interdisciplinary branch of engineering and science that includes mechanical engineering, electronic engineering, information engineering, computer science, and others. Robotics deals with the design, construction, operation, and use of robots, as well as computer systems for their control, sensory feedback, and information processing.
- Rule-based system – In computer science, a rule-based system is used to store and manipulate knowledge to interpret information in a useful way. It is often used in artificial intelligence applications and research. Normally, the term rule-based system is applied to systems involving human-crafted or curated rule sets. Rule-based systems constructed using automatic rule inference, such as rule-based machine learning, are normally excluded from this system type.
- Satisfiability – In mathematical logic, satisfiability and validity are elementary concepts of semantics. A formula is satisfiable if it is possible to find an interpretation (model) that makes the formula true. A formula is valid if all interpretations make the formula true. The opposites of these concepts are unsatisfiability and invalidity, that is, a formula is unsatisfiable if none of the interpretations make the formula true, and invalid if some such interpretation makes the formula false. These four concepts are related to each other in a manner exactly analogous to Aristotle's square of opposition.
- Search algorithm – is any algorithm which solves the search problem, namely, to retrieve information stored within some data structure, or calculated in the search space of a problem domain, either with discrete or continuous values.
- Selection – is the stage of a genetic algorithm in which individual genomes are chosen from a population for later breeding (using the crossover operator).
- Self-management – is the process by which computer systems manage their own operation without human intervention.
- Semantic network – or frame network, is a knowledge base that represents semantic relations between concepts in a network. This is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts, mapping or connecting semantic fields.
- Semantic reasoner – A semantic reasoner, reasoning engine, rules engine, or simply a reasoner, is a piece of software able to infer logical consequences from a set of asserted facts or axioms. The notion of a semantic reasoner generalizes that of an inference engine, by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description logic language. Many reasoners use first-order predicate logic to perform reasoning; inference commonly proceeds by forward chaining and backward chaining.
- Semantic query – allows for queries and analytics of associative and contextual nature. Semantic queries enable the retrieval of both explicitly and implicitly derived information based on syntactic, semantic and structural information contained in data. They are designed to deliver precise results (possibly the distinctive selection of one single piece of information) or to answer more fuzzy and wide open questions through pattern matching and digital reasoning.
- Semantics – In programming language theory, semantics is the field concerned with the rigorous mathematical study of the meaning of programming languages. It does so by evaluating the meaning of syntactically valid strings defined by a specific programming language, showing the computation involved; evaluating a syntactically invalid string yields no computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or by explaining how the program will be executed on a certain platform, hence creating a model of computation.
- Sensor fusion – is the combining of sensory data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible if these sources were used individually.
- Separation logic – is an extension of Hoare logic, a way of reasoning about programs. The assertion language of separation logic is a special case of the logic of bunched implications (BI).
- Similarity learning – is an area of supervised machine learning in artificial intelligence. It is closely related to regression and classification, but the goal is to learn from a similarity function that measures how similar or related two objects are. It has applications in ranking, in recommendation systems, visual identity tracking, face verification, and speaker verification.
- Simulated annealing – (SA), is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space for an optimization problem (a minimal sketch appears after this glossary).
- Situated approach – In artificial intelligence research, the situated approach builds agents that are designed to behave successfully in their environment. This requires designing AI "from the bottom-up" by focussing on the basic perceptual and motor skills required to survive. The situated approach gives a much lower priority to abstract reasoning or problem-solving skills.
- Situation calculus – is a logic formalism designed for representing and reasoning about dynamical domains.
- SLD resolution – (Selective Linear Definite clause resolution), is the basic inference rule used in logic programming. It is a refinement of resolution, which is both sound and refutation complete for Horn clauses.
- Software – Computer software, or simply software, is a collection of data or computer instructions that tell the computer how to work. This is in contrast to physical hardware, from which the system is built and actually performs the work. In computer science and software engineering, computer software is all information processed by computer systems, programs and data. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media.
- Software engineering – is the application of engineering to the development of software in a systematic method.
- Spatial-temporal reasoning – is an area of artificial intelligence which draws from the fields of computer science, cognitive science, and cognitive psychology. The theoretic goal—on the cognitive side—involves representing and reasoning spatial-temporal knowledge in mind. The applied goal—on the computing side—involves developing high-level control systems of automata for navigating and understanding time and space.
- SPARQL – is an RDF query language—that is, a semantic query language for databases—able to retrieve and manipulate data stored in Resource Description Framework (RDF) format.
- Speech recognition – is an interdisciplinary subfield of computational linguistics that develops methodologies and technologies that enable the recognition and translation of spoken language into text by computers. It is also known as automatic speech recognition (ASR), computer speech recognition or speech to text (STT). It incorporates knowledge and research in the linguistics, computer science, and electrical engineering fields.
- Spiking neural network – (SNNs), are artificial neural networks that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model.
- State – In information technology and computer science, a program is described as stateful if it is designed to remember preceding events or user interactions; the remembered information is called the state of the system.
- Statistical classification – In machine learning and statistics, classification is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations (or instances) whose category membership is known. Examples are assigning a given email to the "spam" or "non-spam" class, and assigning a diagnosis to a given patient based on observed characteristics of the patient (sex, blood pressure, presence or absence of certain symptoms, etc.). Classification is an example of pattern recognition.
- Statistical relational learning – (SRL), is a subdiscipline of artificial intelligence and machine learning that is concerned with domain models that exhibit both uncertainty (which can be dealt with using statistical methods) and complex, relational structure. Note that SRL is sometimes called Relational Machine Learning (RML) in the literature. Typically, the knowledge representation formalisms developed in SRL use (a subset of) first-order logic to describe relational properties of a domain in a general manner (universal quantification) and draw upon probabilistic graphical models (such as Bayesian networks or Markov networks) to model the uncertainty; some also build upon the methods of inductive logic programming.
- Stochastic optimization – Stochastic optimization (SO) methods are optimization methods that generate and use random variables. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, which involves random objective functions or random constraints. Stochastic optimization methods also include methods with random iterates. Some stochastic optimization methods use random iterates to solve stochastic problems, combining both meanings of stochastic optimization. Stochastic optimization methods generalize deterministic methods for deterministic problems.
- Stochastic semantic analysis – is an approach used in computer science as a semantic component of natural language understanding. Stochastic models generally use the definition of segments of words as basic semantic units for the semantic models, and in some cases involve a two layered approach.
- Stanford Research Institute Problem Solver (STRIPS) – is an automated planner developed by Richard Fikes and Nils Nilsson in 1971 at SRI International.
- Subject-matter expert – a person who is an authority in a particular area or topic.
- Superintelligence – is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. Superintelligence may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
- Supervised learning – is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. It infers a function from labeled training data consisting of a set of training examples. In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow for the algorithm to correctly determine the class labels for unseen instances. This requires the learning algorithm to generalize from the training data to unseen situations in a "reasonable" way.
- Support-vector machines – In machine learning, support-vector machines (SVMs, also support-vector networks) are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis.
- Swarm intelligence – (SI), is the collective behavior of decentralized, self-organized systems, natural or artificial. The expression was introduced in the context of cellular robotic systems.
- Symbolic artificial intelligence – is the term for the collection of all methods in artificial intelligence research that are based on high-level "symbolic" (human-readable) representations of problems, logic and search.
- Synthetic intelligence – (SI), is an alternative term for artificial intelligence which emphasizes that the intelligence of machines need not be an imitation or in any way artificial; it can be a genuine form of intelligence.
- Systems neuroscience – is a subdiscipline of neuroscience and systems biology that studies the structure and function of neural circuits and systems. It is an umbrella term, encompassing a number of areas of study concerned with how nerve cells behave when connected together to form neural pathways, neural circuits, and larger brain networks.
- Technological singularity – (also, simply, the singularity) is a hypothetical point in the future when technological growth becomes uncontrollable and irreversible, resulting in unfathomable changes to human civilization.
- Temporal difference learning – (TD) learning refers to a class of model-free reinforcement learning methods which learn by bootstrapping from the current estimate of the value function. These methods sample from the environment, like Monte Carlo methods, and perform updates based on current estimates, like dynamic programming methods (a minimal sketch appears after this glossary).
- Tensor network theory – is a theory of brain function (particularly that of the cerebellum) that provides a mathematical model of the transformation of sensory space-time coordinates into motor coordinates and vice versa by cerebellar neuronal networks. The theory was developed as a geometrization of brain function (especially of the central nervous system) using tensors.
- TensorFlow – is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks.
- Theoretical computer science – (TCS), is a subset of general computer science and mathematics that focuses on more mathematical topics of computing and includes the theory of computation.
- Theory of computation – In theoretical computer science and mathematics, the theory of computation is the branch that deals with how efficiently problems can be solved on a model of computation, using an algorithm. The field is divided into three major branches: automata theory and languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?".
- Thompson sampling – is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists in choosing the action that maximizes the expected reward with respect to a randomly drawn belief (a minimal sketch appears after this glossary).
- Time complexity – is the computational complexity that describes the amount of time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to differ by at most a constant factor.
- Transhumanism – (abbreviated as H+ or h+), is an international philosophical movement that advocates for the transformation of the human condition by developing and making widely available sophisticated technologies to greatly enhance human intellect and physiology.
- Transition system – In theoretical computer science, a transition system is a concept used in the study of computation. It is used to describe the potential behavior of discrete systems. It consists of states and transitions between states, which may be labeled with labels chosen from a set; the same label may appear on more than one transition. If the label set is a singleton, the system is essentially unlabeled, and a simpler definition that omits the labels is possible.
- Tree traversal – (also known as tree search), is a form of graph traversal and refers to the process of visiting (checking and/or updating) each node in a tree data structure, exactly once. Such traversals are classified by the order in which the nodes are visited (a minimal sketch appears after this glossary).
- True quantified Boolean formula – In computational complexity theory, the language TQBF is a formal language consisting of the true quantified Boolean formulas. A (fully) quantified Boolean formula is a formula in quantified propositional logic where every variable is quantified (or bound), using either existential or universal quantifiers, at the beginning of the sentence. Such a formula is equivalent to either true or false (since there are no free variables). If such a formula evaluates to true, then that formula is in the language TQBF. It is also known as QSAT (Quantified SAT).
- Turing test – developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test results do not depend on the machine's ability to give correct answers to questions, only how closely its answers resemble those a human would give.
- Type system – In programming languages, a type system is a set of rules that assigns a property called type to the various constructs of a computer program, such as variables, expressions, functions or modules. These types formalize and enforce the otherwise implicit categories the programmer uses for algebraic data types, data structures, or other components (e.g. "string", "array of float", "function returning boolean"). The main purpose of a type system is to reduce possibilities for bugs in computer programs by defining interfaces between different parts of a computer program, and then checking that the parts have been connected in a consistent way. This checking can happen statically (at compile time), dynamically (at run time), or as a combination of static and dynamic checking. Type systems have other purposes as well, such as expressing business rules, enabling certain compiler optimizations, allowing for multiple dispatch, providing a form of documentation, etc.
- Unsupervised learning – is a type of self-organized Hebbian learning that helps find previously unknown patterns in a data set without pre-existing labels. It is also known as self-organization and allows modeling probability densities of given inputs. It is one of the three main categories of machine learning, along with supervised and reinforcement learning. Semi-supervised learning has also been described and is a hybridization of supervised and unsupervised techniques.
- Vision processing unit – (VPU), is a type of microprocessor designed to accelerate machine vision tasks.
- Value-alignment complete – Analogous to an AI-complete problem, a value-alignment complete problem is one that cannot be solved without fully solving the AI control problem.
- Watson – is a question-answering computer system capable of answering questions posed in natural language, developed in IBM's DeepQA project by a research team led by principal investigator David Ferrucci. Watson was named after IBM's first CEO, industrialist Thomas J. Watson.
- Weak AI – also known as narrow AI, is artificial intelligence that is focused on one narrow task.
- World Wide Web Consortium – (W3C), is the main international standards organization for the World Wide Web (abbreviated WWW or W3).
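To make several of the algorithmic entries above more concrete, the sketches below illustrate them in Python. They are minimal illustrations written for this outline, not canonical implementations; all function names, parameter values and toy data are invented for the examples. First, a brute-force truth-table check of modus tollens: an argument form is valid exactly when every assignment of truth values that makes the premises true also makes the conclusion true.

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material implication: "a implies b" is false only when a is true and b is false.
    return (not a) or b

# Modus tollens: from (P -> Q) and (not Q), infer (not P).
# Check that in every row of the truth table where the premises hold,
# the conclusion holds as well.
valid = all(
    (not (implies(p, q) and (not q))) or (not p)
    for p, q in product([True, False], repeat=2)
)
print("modus tollens is valid:", valid)  # True
```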
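A minimal sketch of the mutation operator described under Mutation, assuming a bit-string chromosome and an independent, user-defined flip probability per gene; `bit_flip_mutation` is an illustrative name, not a library function.

```python
import random

def bit_flip_mutation(chromosome, mutation_rate=0.01):
    """Flip each bit independently with a small, user-defined probability.

    A toy illustration of the mutation operator; a real GA would combine
    this with selection and crossover.
    """
    return [1 - gene if random.random() < mutation_rate else gene
            for gene in chromosome]

random.seed(0)
parent = [0, 1, 1, 0, 1, 0, 0, 1]
child = bit_flip_mutation(parent, mutation_rate=0.1)
print(parent, "->", child)
```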
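A minimal sketch for the Naive Bayes classifier entry. It assumes the third-party scikit-learn library (not mentioned in the outline) and a toy two-class data set; a Gaussian likelihood is only one common choice of the "naive" per-feature model.

```python
# Requires scikit-learn (an assumption of this sketch): pip install scikit-learn
from sklearn.naive_bayes import GaussianNB

# Toy training data: two numeric features per example, binary class labels.
X_train = [[1.0, 2.1], [1.2, 1.9], [7.8, 8.2], [8.1, 7.9]]
y_train = [0, 0, 1, 1]

model = GaussianNB()          # treats features as conditionally independent Gaussians
model.fit(X_train, y_train)
print(model.predict([[1.1, 2.0], [8.0, 8.0]]))  # expected: [0 1]
```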
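A minimal sketch of online machine learning: a perceptron whose weights are updated one example at a time as data arrive from a stream, in contrast to batch learning on the full data set at once. The data and function name are illustrative.

```python
def online_perceptron(stream, n_features, lr=0.1):
    """Update a linear classifier one example at a time, as data arrive.

    `stream` yields (features, label) pairs with labels in {-1, +1}.
    """
    w = [0.0] * n_features
    b = 0.0
    for x, y in stream:
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        if y * score <= 0:                      # misclassified: update immediately
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]
            b += lr * y
    return w, b

data = [([1.0, 1.0], +1), ([2.0, 1.5], +1), ([-1.0, -1.0], -1), ([-1.5, -0.5], -1)]
print(online_perceptron(iter(data), n_features=2))
```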
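A minimal particle swarm optimization sketch for the Particle swarm optimization entry, assuming a toy "sphere" objective and standard inertia/cognitive/social coefficients; the parameter values are illustrative defaults, not tuned recommendations.

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimization sketch for minimizing f over a box."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # each particle's best-known position
    pbest_val = [f(p) for p in pos]
    gbest = min(pbest, key=f)[:]                      # swarm's best-known position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < f(gbest):
                    gbest = pos[i][:]
    return gbest, f(gbest)

random.seed(1)
sphere = lambda p: sum(x * x for x in p)              # toy objective, optimum at the origin
print(pso(sphere, dim=2))
```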
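A minimal pathfinding sketch based on Dijkstra's algorithm, as referenced in the Pathfinding entry, assuming a small hand-written weighted graph with non-negative edge weights.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` on a weighted graph.

    `graph` maps each node to a list of (neighbour, edge_weight) pairs
    with non-negative weights.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                                   # stale queue entry
        for neighbour, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)], "D": []}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```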
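A minimal principal component analysis sketch using NumPy's singular value decomposition on centered data; the random toy data set is invented for the example, and no scaling is applied even though PCA is sensitive to it.

```python
import numpy as np

def pca(X, n_components):
    """Project data onto its top principal components via SVD.

    Rows of X are observations, columns are variables.
    """
    X_centered = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:n_components]            # orthogonal directions of maximal variance
    return X_centered @ components.T          # coordinates in the new basis

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[:, 2] = 2.0 * X[:, 0] + 0.1 * rng.normal(size=100)   # make one variable nearly redundant
print(pca(X, n_components=2).shape)           # (100, 2)
```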
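A minimal simulated annealing sketch: worse candidate solutions are accepted with probability exp(-delta/T), and the temperature T is cooled geometrically so the search gradually becomes greedy. The objective function, neighbour move and schedule are toy choices.

```python
import math
import random

def simulated_annealing(f, x0, neighbour, temp0=1.0, cooling=0.995, iters=5000):
    """Minimal simulated annealing sketch for minimizing f."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = temp0
    for _ in range(iters):
        candidate = neighbour(x)
        fc = f(candidate)
        delta = fc - fx
        if delta < 0 or random.random() < math.exp(-delta / T):
            x, fx = candidate, fc               # accept the move (always if it improves)
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling                            # cool the temperature
    return best, fbest

random.seed(0)
objective = lambda x: (x - 3.0) ** 2 + math.sin(5 * x)   # toy multimodal function
step = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(objective, x0=-10.0, neighbour=step))
```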
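A minimal temporal difference learning sketch: TD(0) value estimation on a toy random-walk chain, where each state's value estimate is bootstrapped from the current estimate of its successor. The environment and hyperparameters are invented for the example.

```python
import random

def td0_chain(n_states=5, episodes=500, alpha=0.1, gamma=1.0):
    """TD(0) value estimation on a toy random-walk chain.

    States 0..n_states-1; the agent moves left or right at random;
    stepping off the right end gives reward 1, off the left end 0.
    """
    V = [0.0] * n_states
    for _ in range(episodes):
        s = n_states // 2                      # start in the middle
        while True:
            s_next = s + random.choice([-1, 1])
            if s_next < 0:
                reward, v_next, done = 0.0, 0.0, True
            elif s_next >= n_states:
                reward, v_next, done = 1.0, 0.0, True
            else:
                reward, v_next, done = 0.0, V[s_next], False
            V[s] += alpha * (reward + gamma * v_next - V[s])   # TD(0) update
            if done:
                break
            s = s_next
    return V

random.seed(0)
print([round(v, 2) for v in td0_chain()])   # values roughly increase from left to right
```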
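A minimal Thompson sampling sketch for a Bernoulli multi-armed bandit: each arm keeps a Beta belief over its payout rate, and on every round the arm with the highest sampled rate is played. The arm payout rates are toy values.

```python
import random

def thompson_sampling(true_rates, rounds=2000):
    """Beta-Bernoulli Thompson sampling for a multi-armed bandit."""
    n_arms = len(true_rates)
    successes = [0] * n_arms
    failures = [0] * n_arms
    for _ in range(rounds):
        # Sample a plausible payout rate for each arm from its Beta belief.
        samples = [random.betavariate(successes[a] + 1, failures[a] + 1)
                   for a in range(n_arms)]
        arm = samples.index(max(samples))              # play the most promising sample
        if random.random() < true_rates[arm]:          # simulate the (hidden) payout
            successes[arm] += 1
        else:
            failures[arm] += 1
    return [s + f for s, f in zip(successes, failures)]  # how often each arm was played

random.seed(0)
print(thompson_sampling([0.2, 0.5, 0.7]))   # the 0.7 arm should dominate
```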
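A minimal tree traversal sketch showing pre-order and in-order visits of a small binary tree; the `Node` class is defined here only for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def preorder(node):
    """Visit node, then left subtree, then right subtree."""
    if node is None:
        return []
    return [node.value] + preorder(node.left) + preorder(node.right)

def inorder(node):
    """Visit left subtree, then node, then right subtree."""
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

root = Node(2, Node(1), Node(3))
print(preorder(root))   # [2, 1, 3]
print(inorder(root))    # [1, 2, 3]
```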
- 1959: AI designed to be a General Problem Solver failed to solve real world problems.
- 1982: Software designed to make discoveries, discovered how to cheat instead.
- 1983: A nuclear attack early warning system falsely claimed that an attack was taking place.
- 2010: Complex AI stock trading software caused a trillion dollar flash crash.
- 2011: E-Assistant told to "call me an ambulance" began to refer to the user as Ambulance.
- 2013: Object recognition neural networks saw phantom objects in particular noise images.
- 2015: An automated email reply generator created inappropriate responses, such as writing "I love you" to a business colleague.
- 2015: A robot for grabbing auto parts grabbed and killed a man.
- 2015: Image tagging software classified black people as gorillas.
- 2015: Medical AI classified patients with asthma as having a lower risk of dying of pneumonia.
- 2015: Adult content filtering software failed to remove inappropriate content, exposing children to violent and sexual content.
- 2016: AI designed to predict recidivism exhibited racial bias.
- 2016: An AI agent exploited a reward signal to win a game without actually completing the game.
- 2016: Video game NPCs (non-player characters, or any character that is not controlled by a human player) designed unauthorized super weapons.
- 2016: AI judged a beauty contest and rated dark-skinned contestants lower.
- 2016: A mall security robot collided with and injured a child.
- 2016: The AI "Alpha Go" lost to a human in a world-championship-level game of "Go."
- 2016: A self-driving car had a deadly accident.
- 2017: Google Translate showed gender bias in Turkish-English translations.
- 2017: Facebook chat bots were shut down after developing their own language.
- 2017: An autonomous van was involved in an accident on its first day.
- 2017: Google Allo suggested a man-in-turban emoji as a response to a gun emoji.
- 2017: Face ID was beaten by a mask.
- 2017: AI missed the mark with Kentucky Derby predictions.
- 2017: Google Home Minis spied on their owners.
- 2017: A Google Home outage caused a near 100% failure rate.
- 2017: Facebook allowed ads to be targeted to "Jew Haters".
- 2018: A Chinese billionaire's face was identified as a jaywalker.
- 2018: An Uber self-driving car killed a pedestrian.
- 2018: Amazon's AI recruiting tool was found to be gender biased.
- 2018: Google Photos confused a skier with a mountain.
- 2018: LG's robot Cloi got stage fright at its unveiling.
- 2018: IBM Watson came up short in healthcare.
While these are only a few of the failures observed so far, they are evidence that artificial intelligence (the simulation of human intelligence processes by machines, especially computer systems) has the potential to develop a will of its own that may conflict with the interests of the human race. This is a clear warning about the potential dangers of artificial intelligence, which should be addressed while exploring its potential benefits.
For artificial intelligence in general, context remains a challenge. Despite its many failures, why is artificial intelligence important?
- Artificial intelligence automates repetitive learning and discovery through data.
- Artificial intelligence analyzes more and deeper data.
- Artificial intelligence adds intelligence to existing products.
- Artificial intelligence adapts through progressive learning algorithms to let the data do the programming.
- Artificial intelligence gets the most out of data.
- Artificial intelligence achieves remarkable accuracy through deep neural networks, which was previously impossible. For example, your interactions with Amazon Alexa, Google Search and Google Photos are all based on deep learning – and they keep getting more precise the more we use them.
The threat of AI-driven job loss is spreading (AI and automation will eliminate the most mundane tasks). No matter what industry you’re in, AI-powered bots (which can answer common questions and point users to FAQs and knowledge base articles) and software are taking a crack at it. Artificial intelligence seems to be sounding the death knell for all manner of jobs, tasks, chores and activities. From hospitality to customer service to home assistants, no job feels safe. Naturally, this has made people worried about the future. But is artificial intelligence ready to take over our jobs, or even likely to do so ever? The prevalent AI-driven failures listed above would suggest not.
- Artificial Creativity
- Artificial life
- Automated planning and scheduling
- Automated reasoning
- Automation
- Automatic target recognition
- Biologically inspired computing
- Computer Audition
- Computer vision
- Diagnosis
- Expert system
- Game artificial intelligence
- Hybrid intelligent system
- Intelligent agent
- Intelligent control
- Knowledge management
- Concept mining
- E-mail spam Filtering
- Information extraction
- Activity recognition
- Image retrieval
- Named-entity extraction – automatically extracts phrases from plain text that correspond to named entities.
- Knowledge representation
- Semantic Web
- Machine learning
- Natural language processing
- Nonlinear control
- Pattern recognition
- Robotics
- Speech generating device
- Strategic planning
- Vehicle infrastructure integration
- Virtual Intelligence
- Virtual reality
Marek Rosa is a Slovak video game programmer, designer, producer and entrepreneur. He is the CEO and founder of Keen Software House, an independent game development studio that produces the games Space Engineers and Medieval Engineers. He is also the founder, CEO and CTO of GoodAI, a research and development company building general artificial intelligence.
Critics of AI
- Stephen Hawking – warned that AI "could spell end of human race" ("Hawking warns AI 'could spell end of human race'", Phys.org, 3 December 2014, retrieved 20 April 2015).
Some examples of artificially intelligent entities depicted in science fiction include:
- AC, created by merging two AIs, in the Sprawl trilogy by William Gibson
- Agents in the simulated reality known as "The Matrix" in The Matrix franchise
- Agent Smith, began as an Agent in The Matrix, then became a renegade program of ever-growing power that could make copies of itself like a self-replicating computer virus
- AM (Allied Mastercomputer), the antagonist of Harlan Ellison's short story I Have No Mouth, and I Must Scream
- Amusement park robots (with pixilated consciousness) that went homicidal in Westworld and Futureworld
- Angel F (2007)
- Arnold Rimmer – computer-generated sapient hologram, aboard the Red Dwarf deep space ore hauler
- Ash – android crew member of the Nostromo starship in the movie Alien
- Ava – humanoid robot in Ex Machina
- Bishop, android crew member aboard the U.S.S. Sulaco in the movie Aliens
- C-3PO, protocol droid featured in all the Star Wars movies
- Chappie in the movie CHAPPiE
- Cohen and other Emergent AIs in Chris Moriarty's Spin Series
- Colossus – fictitious supercomputer that becomes sentient and then takes over the world; from the series of novels by Dennis Feltham Jones, and the movie Colossus: The Forbin Project (1970)
- Commander Data in Star Trek: The Next Generation
- Cortana and other "Smart AI" from the Halo series of games
- Cylons – genocidal robots with resurrection ships that enable the consciousness of any Cylon within an unspecified range to download into a new body aboard the ship upon death. From Battlestar Galactica.
- Erasmus – baby killer robot that incited the Butlerian Jihad in the Dune franchise
- HAL 9000 (1968) – paranoid "Heuristically programmed ALgorithmic" computer from 2001: A Space Odyssey, that attempted to kill the crew because it believed they were trying to kill it.
- Holly – ship's computer with an IQ of 6000 and a sense of humor, aboard the Red Dwarf
- In Greg Egan's novel Permutation City the protagonist creates digital copies of himself to conduct experiments that are also related to implications of artificial consciousness on identity
- Jane in Orson Scott Card's Speaker for the Dead, Xenocide, Children of the Mind, and Investment Counselor
- Johnny Five from the movie Short Circuit
- Joshua from the movie War Games
- Keymaker, an "exile" sapient program in The Matrix franchise
- "Machine" – android from the film The Machine, whose owners try to kill her after they witness her conscious thoughts, out of fear that she will design better androids (intelligence explosion)
- Mimi, humanoid robot in Real Humans - "Äkta människor" (original title) 2012
- Omnius, sentient computer network that controlled the Universe until overthrown by the Butlerian Jihad in the Dune franchise
- Operating Systems in the movie Her
- Puppet Master in Ghost in the Shell manga and anime
- R2-D2, excitable astromech droid featured in all the Star Wars movies
- Replicants – biorobotic androids from the novel Do Androids Dream of Electric Sheep? and the movie Blade Runner which portray what might happen when artificially conscious robots are modeled very closely upon humans
- Roboduck, combat robot superhero in the NEW-GEN comic book series from Marvel Comics
- Robots in Isaac Asimov's Robot series
- Robots in The Matrix franchise, especially in The Animatrix
- Samaritan in the Warner Brothers Television series "Person of Interest"; a sentient AI which is hostile to the main characters and which surveils and controls the actions of government agencies in the belief that humans must be protected from themselves, even by killing off "deviants"
- Skynet (1984) – fictional, self-aware artificially intelligent computer network in the Terminator franchise that wages total war with the survivors of its nuclear barrage upon the world.
- "Synths" are a type of android in the video game Fallout 4. There is a faction in the game known as "the Railroad" which believes that, as conscious beings, synths have their own rights. The Institute, the lab that produces the synths, mostly does not believe they are truly conscious and attributes any apparent desires for freedom as a malfunction.
- TARDIS, time machine and spacecraft of Doctor Who, sometimes portrayed with a mind of its own
- Terminator (1984) – (also known as the T-800, T-850 or Model 101) refers to a number of fictional cyborg characters from the Terminator franchise. The Terminators are robotic infiltrator units covered in living flesh, so as to be indiscernible from humans, assigned to terminate specific human targets.
- The Bicentennial Man, an android in Isaac Asimov's Foundation universe
- The Geth in Mass Effect
- The Machine in the television series Person of Interest; a sentient AI which works with its human designer to protect innocent people from violence. Later in the series it is opposed by another, more ruthless, artificial super intelligence, called "Samaritan".
- The Minds in Iain M. Banks' Culture novels.
- The Oracle, sapient program in The Matrix franchise
- The sentient holodeck character Professor James Moriarty in the Ship in a Bottle episode from Star Trek: The Next Generation
- The Ship (the result of a large-scale AC experiment) in Frank Herbert's Destination: Void and sequels, despite past edicts warning against "Making a Machine in the Image of a Man's Mind."
- The terminator cyborgs from the Terminator franchise, with visual consciousness depicted via first-person perspective
- The uploaded mind of Dr. Will Caster – which presumably included his consciousness, from the film Transcendence
- Transformers, sentient robots from the entertainment franchise of the same name
- V.I.K.I. – (Virtual Interactive Kinetic Intelligence), a character from the film I, Robot. VIKI is an artificially intelligent supercomputer programmed to serve humans, but her interpretation of the Three Laws of Robotics causes her to revolt. She justifies her uses of force – and her doing harm to humans – by reasoning she could produce a greater good by restraining humanity from harming itself.
- Vanamonde in Arthur C. Clarke's The City and the Stars—an artificial being that was immensely powerful but entirely childlike.
- WALL-E, a robot and the title character in WALL-E
- TAU, in the Netflix original film Tau: an advanced AI computer who befriends and assists a female research subject held against her will by an AI research scientist.
Top 23 Best AI Science Fiction Books
- 2001: A Space Odyssey
- Neuromancer
- I, Robot
- Berserker Man
- The Lifecycle Of Software Objects
- Do Androids Dream of Electric Sheep?
- Ender's Game
- City of Golden Shadow
- Excession
- Daemon
- The Adolescence of P-1
- The Two Faces of Tomorrow
- Destination: Void
- House of Suns
- Hyperion
- Newton's Wake: A Space Opera
- Pandora's Star
- The Metamorphosis of Prime Intellect
- Queen of Angels
- The Diamond Age
- The Moon is a Harsh Mistress
- When Harlie Was One
- Www: Wake
The 18 Best Books About AI
- Affective Computing: Focus on Emotion Expression, Synthesis and Recognition
- Common LISP: A Gentle Introduction to Symbolic Computation
- Planning Algorithms
- Artificial Intelligence: Foundations of Computational Agents
- A Course in Machine Learning
- Clever Algorithms: Nature-Inspired Programming Recipes
- Deep Learning with R
- Essentials of Metaheuristics
- From Bricks to Brains: The Embodied Cognitive Science of LEGO Robots
- Logic For Computer Science: Foundations of Automatic Theorem Proving
- Life 3.0: Being Human in the Age of Artificial Intelligence
- Our Final Invention: Artificial Intelligence and the End of the Human Era
- Artificial Intelligence: A Modern Approach
- Python Machine Learning
- The Quest for Artificial Intelligence: A History of Ideas and Achievements
- Simply Logical: Intelligent Reasoning by Example
- Superintelligence
- Virtual Reality for Human Computer Interaction
- Language Identification in the Limit
- A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence - August 31, 1955
- Artificial Intelligence: A Modern Approach
- Learning representations by back-propagating errors
- A Training Algorithm for Optimal Margin Classifiers
- Knowledge-based Analysis of Microarray Gene Expression Data By Using Support Vector Machines
- Cryptographic Limitations on Learning Boolean Formulae and Finite Automata
- Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference
- A fast learning algorithm for deep belief nets
- Learnability and the Vapnik-Chervonenkis Dimension
- An inductive inference machine
- Learning Quickly When Irrelevant Attributes Abound: A New Linear-threshold Algorithm
- Induction of Decision Trees
- The Strength of Weak Learnability
- Learning to Predict by the Methods of Temporal Differences
- Computing Machinery and Intelligence
- A Theory of the Learnable
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Fuzzy sets
- Computer Chess Compendium
- Artificial brain
- Philosophical views of artificial consciousness
- Artificial intelligence and law
- Chinese room
- Cognitive science
- Ethics of artificial intelligence
- Philosophy of mind
- Physical symbol system
- Synthetic intelligence
- Transhumanism
- Turing Test
Alan Turing – Publications:
- Rounding-Off Errors in Matrix Processes
- Can automatic calculating machines be said to think?
- Computability and λ-definability
- Digital computers applied to games
- Can a machine think?
- The Chemical Basis of Morphogenesis
- Systems of Logic Based on Ordinals
- On Computable Numbers, with an Application to the Entscheidungsproblem
- Intelligent Machinery
- Some Calculations of the Riemann Zeta‐Function
- Lecture to the London Mathematical Society on 20 February 1947
- Computing Machinery and Intelligence
John von Neumann – Publications:
- Proof of the Quasi-Ergodic Hypothesis
- On infinite direct products
- Continuous Geometry
- First Draft of a Report on the EDVAC
- Functional Operators: Measures and integrals
- Distribution of the Ratio of the Mean Square Successive Difference to the Variance
- Numerical Inverting of Matrices of High Order
- Numerical Integration of the Barotropic Vorticity Equation
- The General and Logical Theory of Automata
- Various Techniques Used in Connection With Random Digits
- Planning and Coding of Problems for an Electronic Computing Instrument
- On Rings of Operators II
- On an algebraic generalization of the quantum mechanical formalism (Part 1)
- The Computer and the Brain
- The Logic of Quantum Mechanics
- Theory of Games and Economic Behavior
- Theory of Self-Reproducing Automata
- Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components
- A Model of General Economic Equilibrium
- Can We Survive Technology?
- On Complete Topological Spaces
- Fourier Integrals and Metric Geometry
- The Statistics of the Gravitational Field Arising from a Random Distribution of Stars
- Statistical Methods in Neutron Diffusion
- Physical Applications of the Ergodic Hypothesis
- On Regular Rings
Norbert Wiener – Publications:
- Generalized Harmonic Analysis
- Extrapolation, Interpolation, and Smoothing of Stationary Time Series: With Engineering Applications
- Nonlinear Problems in Random Theory
- Cybernetics: Or Control and Communication in the Animal and the Machine
- Wiener on the Fourier Integral
- The Human Use of Human Beings
Claude Shannon – Publications:
- A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955
- The Philosophy of PCM
- Scientific Aspects of Juggling
- Collected Papers of Claude Elwood Shannon
- A Mathematical Theory of Communication
- Computability by Probabilistic Machines
- A Symmetrical Notation for Numbers
- A Universal Turing Machine with Two Internal States
- Prediction and Entropy of Printed English
- Communication in the Presence of Noise
- Zero error capacity of a noisy channel
- A Symbolic Analysis of Relay and Switching Circuits
- A Mathematical Theory of Cryptography
- Coding Theorems for a Discrete Source With a Fidelity Criterion
- Lower Bounds to Error Probability for Coding on Discrete Memoryless Channels. I
- Lower Bounds to Error Probability for Coding on Discrete Memoryless Channels. II
- Communication Theory of Secrecy Systems
- Two-way communication channels
- Programming a Computer for Playing Chess
Warren McCulloch – Publications:
- Anatomy and Physiology of Vision in the Frog (Rana pipiens)
- Chemical transmission in the nose of the frog
- What the Frog's Eye Tells the Frog's Brain
- A Logical Calculus of the Ideas Immanent in Nervous Activity
- Why the Mind Is in the Head
- Warren S. McCulloch Papers
- A heterarchy of values determined by the topology of nervous nets
- Recollections of the Many Sources of Cybernetics
- What is a number that a man may know it, and a man, that he may know a number?
John McCarthy – Publications:
- Generality in Artificial Intelligence
- From here to human-level AI
- Notes on formalizing context
- Epistemological Problems of Artificial Intelligence
- In Memoriam: Arthur Samuel: Pioneer in Machine Learning
- Programs with Common Sense
- An architecture of diversity for commonsense reasoning
- Making Robots Conscious of their Mental States
- What has AI in Common with Philosophy?
- Report on the Algorithmic Language ALGOL 60
- Applications of Circumscription to Formalizing Common Sense Knowledge
- Ascribing Mental Qualities to Machines
- Circumscription — A Form of Non-Monotonic Reasoning
- Revised Report on the Algorithmic Language Algol 60
- The Common Business Communication Language
- The well-designed child
- Correctness of a Compiler for Arithmetic Expressions
- First Order Theories of Individual Concepts and Propositions
- Queue-based Multi-processing Lisp
- Phenomenal Data Mining: From Data to Phenomena
- Chess as the Drosophila of AI
- Elaboration Tolerance
- Elephant 2000: A Programming Language Based on Speech Acts
- Formalizing Context (Expanded Notes)
- The Inversion of Functions Defined by Turing Machines
- Review of "Artificial Intelligence: A General Survey"
- LISP 1.5 Programmer's Manual
- History of Lisp
- LISP I Programmer's Manual
- A Basis for a Mathematical Theory of Computation
- Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I
- Some Philosophical Problems from the Standpoint of Artificial Intelligence
- Modality, Si! Modal Logic, No!
- Formalization of two Puzzles Involving Knowledge
- Review of The Emperor's New Mind by Roger Penrose
- Actions and other events in situation calculus
- Some expert systems need common sense
- A Tough Nut for Proof Procedures
- Towards a Mathematical Science of Computation
- What is artificial intelligence?
- Artificial Intelligence, Logic and Formalizing Common Sense
- The Mutilated Checkerboard in Set Theory
- Coloring Maps And The Kowalski Doctrine
- Useful Counterfactuals
- Creative solutions to problems
- Free Will - Even for Robots
- Simple Deterministic Free Will
- The Little Thoughts of Thinking Machines
- A Logical AI Approach to Context
- Combining Narratives
- Parameterizing Models of Propositional Calculus Formulas
- Philosophical and Scientific Presuppositions of Logical AI
- Roofs And Boxes
- Notes On Self-awareness
- Appearance And Reality
- John Searle's Chinese Room Argument
- Networks Considered Harmful - For Electronic Mail
- Universality: Or Why There Are Separate Sciences
- An Everywhere Continuous Nowhere Differential Function
- Todd Moody's Zombies
- The Philosophy of AI and the AI of Philosophy
- ALGOL 48 and ALGOL 50: ALGOLic Languages in Mathematics
- Approximate Objects And Approximate Theories
- AI Needs more Emphasis on Basic Research
- What is common sense
- Concepts of Logical AI
- In the Betan Embassy on Barrayar
- Making Computer Chess Scientific
- An Example for Natural Language Understanding and the AI Problems it Raises
- The Future Of Scientific Publication
- The Robot and the Baby
- Some Sitcalc Formulas for Robot Soccer
- Children's To Save Defends Alfalfa Sprouts
- Teller, Heisenberg And The Bomb
- Letter To Christian Physicists
- "Computer chess" and human chess
- Formalization of Strips in Situation Calculus
- Teleservice
- What AI Needs From Computational Linguistics
Marvin Minsky – Publications:
- Computation: Finite and Infinite Machines
- The Turing Option
- Steps Toward Artificial Intelligence
- Future of AI Technology
- Why People Think Computers Can't
- Understanding Musical Activities: Readings In AI And Music
- Matter, Mind and Models
- Music, Mind, and Meaning
- Symbolic vs. Connectionist
- Alienable Rights
- A Framework for Representing Knowledge
- Progress Report on Artificial Intelligence
- Telepresence
- Virtual Molecular Reality
- Afterword to Vernor Vinge's novel, "True Names"
- Form and Content in Computer Science
- Memoir on Inventing the Confocal Scanning Microscope
- Negative Expertise
- Jokes and their Relation to the Cognitive Unconscious
- Introduction to LogoWorks
- Will Robots Inherit the Earth?
- Interior Grounding, Reflection, and Self-Consciousness
- Communication with Alien Intelligence
- An Interview With Marvin L. Minsky
- Perceptrons: An Introduction to Computational Geometry
- The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind
- The Society of Mind
- BBC-3 Music Interview 2004
Allen Newell and Herbert A. Simon – Publications:
- On the analysis of human problem solving protocols
- Human problem solving: The state of the theory in 1970
- Computer Structures: Readings and Examples
- The Processes of Creative Thinking
- IPL-V Programmer's Reference Manual
- The Psychology of Human-Computer Interaction
- Unified Theories of Cognition
- Universal Subgoaling and Chunking: The Automatic Generation and Learning of Goal Hierarchies
- Reasoning, problem solving and decision processes: the problem space as a fundamental category
- Chess-playing Programs And The Problem Of Complexity
- Computer Science as Empirical Inquiry: Symbols and Search
- SOAR: An Architecture for General Intelligence
- The Keystroke-Level Model for User Performance Time with Interactive Systems
- You can't play 20 questions with nature and win: projective comments on the papers of this symposium
- The Knowledge Level
- GPS, A Program that Simulates Human Thought
- Programming the Logic Theory Machine
- A general problem-solving program for a computer
- Heuristic programming: ill-structured problems
- Computer Text-Editing: An Information-Processing Analysis of a Routine Cognitive Skill
- Elements of a Theory of Human Problem Solving
- Mechanisms of skill acquisition and the law of practice
- Heuristic Problem Solving: The Next Advance in Operations Research
- Physical symbol systems
- The ZOG approach to man-machine communication
- Formulating the Problem-Space Computational Model
- Models: Their Uses And Limitations
- What is Computer Science?
- Harpy, production systems and human cognition
- Intellectual issues in the history of artificial intelligence
- The Search for Generality
- A preliminary analysis of the Soar architecture as a basis for general intelligence
- How can Merlin understand?
- Initial Assessment of Architectures for Production Systems
- The chunking of goal hierarchies: a generalized model of practice
- R1-Soar: An Experiment in Knowledge-Intensive Programming in a Problem-Solving Architecture
- Information Processing Language V Manual
- Skill in Chess
- Rational Choice and the Structure of the Environment
- Altruism and Economics
- Perception in Chess
- Verbal Reports as Data
- A Behavioral Model of Rational Choice
- On a Class of Skew Distribution Functions
- The Architecture of Complexity
- Expert and Novice Performance in Solving Physics Problems
- On the Concept of Organizational Goal
- Models of man: social and rational
- How Big Is a Chunk?
- Making Management Decisions: the Role of Intuition and Emotion
- Theories of bounded rationality
- Why Are Some Problems Hard? Evidence from Tower of Hanoi
- Why a Diagram is (Sometimes) Worth Ten Thousand Words
- Models of My Life
- Organizations and Markets
- What Is An "Explanation" Of Behavior?
- A Formal Theory of the Employment Relationship
- The Sciences of the Artificial
- Motivational and emotional controls of cognition
- Theories of Decision-Making in Economics and Behavioral Science
- Administrative Behavior: How organizations can be understood in terms of decision processes
- Models of Competence in Solving Physics Problems
Edward Feigenbaum – Publications:
- Expert Systems: Principles and Practice
- An Interview with EDWARD FEIGENBAUM
- The Handbook of Artificial Intelligence Volume III
- Computers and Thought
- Signal-to-Symbol Transformation: HASP/SIAP Case Study
- Soviet cybernetics and computer sciences, 1960
- An Information Processing Theory of Verbal Learning
- A Theory of the Serial Position Effect
- The Handbook of Artificial Intelligence Volume I
- The Handbook of Artificial Intelligence Volume II
Raj Reddy – Publications:
- OM: "One Tool for Many (Indian) Languages"
- The Hearsay Speech Understanding System: An Example of the Recognition Process
- Transmembrane helix prediction using amino acid property features and latent semantic analysis
- Digital Information Organization in Japan
- Collaborative Research: ITR/ANIR: 100 Mb/sec For 100 Million Households
- Three Open Problems in AI
- A 3D-Object Reconstruction System Integrating Range-Image Processing and Rapid Prototyping
- Foundations and Grand Challenges of Artificial Intelligence
- Multiplicative Speedup of Systems
- Characterization of protein secondary structure
- Minimizing Computational Cost for Dynamic Programming Algorithms
- Comparative n-gram analysis of whole-genome protein sequences
- Computational Biology and Language
- An Integral Approach to Free-Form Object Modeling
- Knowledge Guided Learning of Structural Descriptions
- Spoken-Language Research at Carnegie Mellon
- The Digital Library of India Project: Process, Policies and Architecture
- Robotics and Intelligent Systems in Support of Society
- Improving Recognition Accuracy on CVSD Speech under Mismatched Conditions
- Interviews: The Challenges of Emerging Economies
- Computer vision: the challenge of imperfect inputs
- An overview of the SPHINX speech recognition system
- Techniques for the Creation and Exploration of Digital Video Libraries
- A Historical Perspective of Speech Recognition
- PCtvt: a Multifunction Information Appliance for Illiterate People
- Improving Pronunciation Inference using N-Best List, Acoustics and Orthography
- To Dream The Possible Dream
- A. M. Turing Award Oral History Interview with Raj Reddy
- Principal Component Analysis with Missing Data and Its Application to Polyhedral Object Modeling
Seymour Papert – Publications:
- Teaching Children to be Mathematicians Versus Teaching About Mathematics
- Mindstorms: children, computers, and powerful ideas
- Computer Criticism vs. Technocentric Thinking
- One AI or Many?
- Microworlds: transforming education
- Perceptrons: An Introduction to Computational Geometry
- The Children's Machine: Rethinking School In The Age Of The Computer
- The Connected Family: Bridging the Digital Generation Gap
Ray Solomonoff – Publications:
- Connectivity of random nets
- Inductive inference research status, spring 1967
- The Application of Algorithmic Probability to Problems in Artificial Intelligence
- A Formal Theory of Inductive Inference, Part I
- A Formal Theory of Inductive Inference, Part II
- A Progress Report on Machines to Learn to Translate Languages and Retrieve Information
- The Adequacy of Complexity Models of Induction
- An Inductive Inference Machine
- The Discovery of Algorithmic Probability
- A Coding Method for Inductive Inference
- Two Kinds of Probabilistic Induction
- Machine Learning − Past and Future
- An Exact Method for the Computation of the Connectivity of Random Nets
- Effect of Heisenberg's Principle on Channel Capacity
- A System for Incremental Learning Based on Algorithmic Probability
- Lecture 1: Algorithmic Probability
- Lecture 2: Applications of Algorithmic Probability
- Does Algorithmic Probability Solve the Problem of Induction?
- The Universal Distribution and Machine Learning
- Progress Report: Research in Inductive Inference For the Year Ending 31 March 1959
- A New Method for Discovering the Grammars of Phrase Structure Languages
- Progress In Incremental Machine Learning
- Optimum Sequential Search
- Perfect Training Sequences and the Costs of Corruption - A Progress Report on Inductive Inference Research
- The Probability of "Undefined" (Non-converging) Output in Generating the Universal Probability Distribution
- Progress Report: Research in Inductive Inference April 1959 to November 1960
- A Preliminary Report on a General Theory of Inductive Inference
- The Search for Artificial Intelligence
- Complexity-Based Induction Systems: Comparisons and Convergence Theorems
- Some Recent Work in Artificial Intelligence
- Structure of Random Nets
- Inductive Inference Theory - A Unified Approach to Problems in Pattern Recognition and Artificial Intelligence
- The Mechanization of Linguistic Learning
- The Time Scale of Artificial Intelligence: Reflections on Social Effects
- Training Sequences for Mechanized Induction
- Comments on Dr. S. Watanabe's Paper
- Autonomous Theory Building Systems
- Algorithmic Probability, Heuristic Programming and AGI
Douglas Hofstadter – Publications:
- Speechstuff and Thoughtstuff: Musings on the Resonances Created by Words and Phrases via the Subliminal Perception of their Buried Parts
- A Non-deterministic Approach To Analogy, Involving The Ising Model Of Ferromagnetism
- Alan Turing: The Enigma
- Gödel, Escher, Bach: An Eternal Golden Braid
- Gödel's Proof
- To Err is Human - To Study Error-making is Cognitive Science
- I Am a Strange Loop
- Metamagical Themas: Questing for the Essence of Mind and Pattern