In October 1984, a sci-fi film called The Terminator premiered in movie theaters across the United States and grossed roughly $40 million, making it number one at the box office at the time. The premise of the movie? Artificial Intelligence. In 2029, a hostile computer program named Skynet is seeking to destroy humankind, but there’s one threat to this plan: Sarah Connor’s son, who is predicted to destroy Skynet and save humanity. So Skynet sends a cyborg named The Terminator back in time to find and kill Sarah Connor before her son can ever be born, securing Skynet’s ultimate takeover and the extinction of humans. (You’ll have to watch the movie to find out whether The Terminator succeeds!)
A lot has changed in the world of artificial intelligence since the release of The Terminator almost 40 years ago. In this first article of our mini-series, we’ll provide a brief overview of the actual history of AI; in the next article, we’ll discuss how it’s being used today, with a focus on the fintech sector; and in the final article, we’ll make some educated predictions about how AI will realistically be used in fintech in the years ahead.
According to Wikipedia, AI, or artificial intelligence, is “intelligence demonstrated by computers, as opposed to human or animal intelligence.” McKinsey & Company’s April 2023 publication describes AI as “a machine’s ability to perform the cognitive functions we usually associate with human minds.” The word “artificial” in AI doesn’t mean the intelligence is fake; rather, it means the intelligence comes from a non-human source.
Artificial intelligence involves several related sub-fields and technologies, including big data and machine learning (more on those in our next two articles).
AI has been around for a long time but in much simpler forms than the AI we’re talking about in 2023. Let’s look back at the last 75 years to see how AI started and has progressed to get us to where we are today.
Artificial intelligence is not possible without a highly complex computing machine, so the true beginning of AI is directly tied to the invention of the computer. If we trace our modern idea of the computer back in time, we land in the 1930s with Alan Turing, a British mathematician, computer scientist, cryptanalyst, and much more. Toward the end of his life, in his 1950 paper titled Computing Machinery and Intelligence, Turing suggested that it was not unreasonable to believe that machines could learn to use available information to solve problems and make decisions, just as humans do. Theories like Turing’s became the foundation for the development of AI in the years to come.
In 1956, Turing’s ideas moved from theory to reality with a computer program called The Logic Theorist, developed by Herbert Simon and Allen Newell (with Cliff Shaw handling much of the programming) to mimic the problem-solving abilities of a human mathematician. The project was a success – The Logic Theorist was able to prove 38 of 52 theorems from Principia Mathematica, Whitehead and Russell’s well-known multi-volume work on the foundations of mathematics from the 1910s.
The Logic Theorist was presented to the academic world later that year at the first-ever Dartmouth Summer Research Project on Artificial Intelligence. It was at that conference that the term “artificial intelligence” was born and AI was established as a field of research.
For the next two decades, computers continued to become more capable and accessible, enabling further development of AI. In the late 1950s, Simon, Newell, and Shaw developed another computer program called the General Problem Solver, which was designed to tackle nearly any problem rather than a single type of problem like previous programs (The Logic Theorist, for example, could only prove theorems in formal logic). In the 1960s, computer scientist and MIT professor Joseph Weizenbaum developed a famous language-processing program called ELIZA that, through pattern matching and substitution techniques, could simulate human conversation in a limited context. With these and other successes, the Defense Advanced Research Projects Agency (DARPA) and other U.S. government agencies began sponsoring AI initiatives at several institutions across the country. Many people had ambitious intentions for AI, but computers were not developed enough to make many of these initiatives possible – yet.
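To give a flavor of what “pattern matching and substitution” looked like, here is a minimal modern-Python sketch in the spirit of ELIZA. The patterns and canned responses are invented for illustration; they are not Weizenbaum’s original DOCTOR script, which also reflected pronouns (turning “my” into “your”) and did much more.

```python
import re

# Illustrative rules only (hypothetical, not Weizenbaum's original script):
# each pattern is paired with a response template that reuses the captured text.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback when nothing more specific matches
]

def respond(sentence: str) -> str:
    """Match the input against each pattern in turn and substitute the
    captured text into the paired response template."""
    cleaned = sentence.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I am worried about the future"))
# -> "How long have you been worried about the future?"
```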
The 1980s brought significant advancements in computer capabilities, particularly in a technology called “deep learning,” that reignited interest in AI development. Deep learning involves techniques through which computers can learn from previous experience and therefore develop greater capabilities over time. While deep learning was advancing, computer programs known as Expert Systems – designed to solve complex problems by reasoning through bodies of knowledge and applying them to specific situations – were also being developed. As more and more businesses started utilizing deep learning and Expert Systems, AI became a hot topic with a wide range of audiences (in other words, the stage was set for a blockbuster film like The Terminator). Outside of Hollywood, actual AI continued to expand steadily.
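For a sense of how an Expert System “reasons through” a body of knowledge, here is a minimal sketch of forward chaining, the rule-application loop at the heart of many such programs. The facts and rules below are made up (and loosely fintech-flavored) purely for illustration; real systems of the era encoded hundreds or thousands of rules elicited from human experts.

```python
# Hypothetical facts and rules, invented for illustration.
facts = {"transaction_amount_high", "new_merchant"}

rules = [
    # (conditions that must all be known, fact to conclude)
    ({"transaction_amount_high", "new_merchant"}, "flag_for_review"),
    ({"flag_for_review", "customer_disputes_charge"}, "escalate_to_analyst"),
]

def forward_chain(facts: set, rules: list) -> set:
    """Repeatedly apply any rule whose conditions are already known facts,
    adding its conclusion, until no new facts can be derived."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain(facts, rules))
# -> includes "flag_for_review"; the second rule never fires because
#    "customer_disputes_charge" is not among the known facts
```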
The 1990s and 2000s brought more modern uses of AI, with countless applications across a wide range of environments as the field continued to grow. For the first time, computer programs were able to defeat human champions at checkers and chess. Robots were successfully built for a diverse range of purposes, from deep-sea and outer-space exploration to robotic sports competitions, house cleaning, and even children’s toys. Autonomous and semi-autonomous cars were built and successfully demonstrated. And the Dynamic Analysis and Replanning Tool (DART), an AI program used by the U.S. military to coordinate supply and personnel transportation and to solve other logistical problems, was first utilized during the Gulf War.
Since about 2010, the collection and application of data has catapulted AI even further forward. Some of the AI achievements of the last decade include Oculus virtual reality headsets, Google Glass, Google DeepMind, Facebook’s DeepFace facial recognition software, Google Assistant, and robotic surgery technology. In the last fifteen years, tech companies in every category have exploded, utilizing AI capabilities to develop their products and services. The list could go on and on.
If Alan Turing were alive today, would he recognize the world we’re in and the possibilities AI has opened up for us? Without a doubt, there have been amazing developments and discoveries in the last 70+ years that have set the stage for us to keep moving forward. In our next article, we’ll look at a few case studies of how AI is being applied within fintech and how these capabilities are making the world a better place. Stay tuned.