Artificial Intelligence and Machine Learning Explained
AI • Jul 03, 2016
Artificial intelligence seems to have become ubiquitous in the technology industry. Artificial intelligence and machine learning are now an integral part of the digital landscape, but we are still frequently asked how to distinguish the two terms. The task is made difficult by the fact that there is no agreed vocabulary: everybody uses these terms differently. In addition, the commonly understood meaning of some of these terms has evolved over time. What was meant by AI in 1960 is very different from what is meant today.
Many professionals still google the question, or ask specialised forums and Quora experts: what is the difference between AI and machine learning? Follow our simple guide to understanding these commonly used terms.
Artificial Intelligence
AI is one of the newest disciplines. It was formally initiated in 1956, when the name was coined, although at that point work had been under way for about five years. There are four goals to pursue in artificial intelligence: systems that think like humans, systems that think rationally, systems that act like humans and systems that act rationally. Let’s look at each in more detail.
Acting humanly
The Turing Test, proposed by Alan Turing (Turing, 1950), was designed to provide a satisfactory operational definition of intelligence. Turing defined intelligent behavior as the ability to achieve human-level performance in all cognitive tasks, sufficient to fool an interrogator. Turing’s test deliberately avoided direct physical interaction between the interrogator and the computer, because a physical simulation of a person is unnecessary for intelligence. Within AI, there has not been a big effort to try to pass the Turing test. The issue of acting like a human comes up primarily when AI programs have to interact with people, as when an expert system explains how it came to its diagnosis, or a natural language processing system has a dialogue with a user.
Thinking humanly
In order to say that a given program thinks like a human, we need to understand how humans think. If the program’s input/output and timing behavior matches human behavior, that is evidence that some of the program’s mechanisms may also be operating in humans. For example, Newell and Simon, who developed GPS (General Problem Solver), were not content to have their program correctly solve problems. They were more concerned with comparing the trace of its reasoning steps to traces of human subjects solving the same problems. Also, cognitive science brings together computer models from AI and experimental techniques from psychology to try to construct precise and testable theories of the workings of the human mind.
Thinking rationally
The Greek philosopher Aristotle was one of the first to attempt to codify ‘right thinking’, that is, irrefutable reasoning processes. By 1965, programs existed that could, given enough time and memory, take a description of a problem in logical notation and find the solution to the problem, if one exists. The so-called logicist tradition within artificial intelligence hopes to build on such programs to create intelligent systems.
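To make the logicist idea concrete, here is a minimal sketch in Python (not one of the historical theorem provers, and the facts and rules are invented for illustration): knowledge is written in a logical notation, and a simple forward-chaining loop derives everything that follows from it.

facts = {"human(socrates)"}
rules = [
    ({"human(socrates)"}, "mortal(socrates)"),    # if all premises hold...
    ({"mortal(socrates)"}, "will_die(socrates)"), # ...add the conclusion
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:                 # keep applying rules until nothing new appears
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))  # derives mortal(socrates) and will_die(socrates)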
Acting rationally
Acting rationally means acting so as to achieve one’s goals, given one’s beliefs. An agent is just something that perceives and acts. In this approach, AI is viewed as the study and construction of rational agents. The study of AI as rational agent design, therefore, has two advantages. First, it is more general than the ‘laws of thought’ approach, because correct inference is only a useful mechanism for achieving rationality and not a necessary one. Second, it is more amenable to scientific development than approaches based on human behavior or human thought, because the standard of rationality is clearly defined and completely general.
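As a toy illustration of the agent view (the thermostat domain and all names here are invented for this post), an agent is simply something that maps percepts to actions chosen, given its beliefs, to achieve its goal:

class ThermostatAgent:
    """Perceives a temperature reading and acts so as to achieve its goal."""

    def __init__(self, goal_temp):
        self.goal_temp = goal_temp    # the agent's goal

    def act(self, percept):
        # Pick the action that, given the agent's beliefs, moves towards the goal.
        if percept < self.goal_temp:
            return "heat"
        if percept > self.goal_temp:
            return "cool"
        return "idle"

agent = ThermostatAgent(goal_temp=21)
for reading in [18, 21, 24]:          # percept -> action
    print(reading, "->", agent.act(reading))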
Machine Learning
Early researchers explored the idea of neuron models for artificial intelligence. The resulting technology, artificial neural networks (ANNs), was created over 50 years ago, when very little was known about how real neurons worked. Over time, the emphasis of ANNs moved from biological realism to the desire to learn from data without explicit human instruction. Consequently, the big advantage of simple neural networks over classic AI is that they learn from data and don’t require an expert to provide rules. Today ANNs are part of a broader category called ‘machine learning’, which includes other mathematical and statistical techniques.
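Here is a minimal sketch of that idea, using only the Python standard library: a perceptron, one of the earliest ANN models, adjusts its weights from labelled examples instead of relying on expert-written rules. In this toy case it learns the logical AND function.

import random

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # inputs, label
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
lr = 0.1  # learning rate

for _ in range(50):                    # repeated passes over the data
    for (x1, x2), label in data:
        output = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        error = label - output
        # Nudge the weights in the direction that reduces the error.
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        bias += lr * error

for (x1, x2), label in data:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0)

No rule for AND was ever written down; the weights simply converged to values that reproduce the labelled examples.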
Machine learning is a subset of artificial intelligence that explores the development of algorithms which learn from given data. These algorithms should be able to learn from past experience (i.e. the given data), teach themselves to adapt to new circumstances, and perform certain tasks.
With machine learning, you quite literally show the computer how to do certain things. For example, say you want a computer to know how to cross a road, suggests Ernest Davis, a professor of computer science at New York University. With conventional programming, you would give it a very precise set of rules, telling it how to look left and right, wait for cars, use pedestrian crossings, etc., and then let it go. With machine learning, you’d instead show it 10,000 videos of someone crossing the road safely (and 10,000 videos of someone getting hit by a car), and then let it do its thing.
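A toy sketch of that idea, assuming scikit-learn is available (the two numeric features, distance and speed of the nearest car, are invented for illustration; a real system would extract far richer features from the videos):

from sklearn.linear_model import LogisticRegression

# Each example: [car distance in metres, car speed in m/s]
X = [[50, 5], [40, 3], [60, 10], [5, 15], [8, 12], [3, 20]]
y = [1, 1, 1, 0, 0, 0]   # 1 = crossed safely, 0 = got hit

model = LogisticRegression().fit(X, y)   # learn from the labelled examples
print(model.predict([[45, 4], [4, 18]])) # predicts safe, then unsafe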
The tricky thing is getting a computer to absorb information from these videos. Over the last decades, scientists and programmers have used different methods of teaching computers. For example, reinforcement learning, where you give the computer a reward so that it optimises towards the best solution, and genetic algorithms were both popular methods.
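To make the reward idea concrete, here is a minimal tabular Q-learning sketch on an invented five-cell corridor: the computer is rewarded only on reaching the goal, and gradually learns to prefer the actions that get it there.

import random

n_states = 5
actions = [-1, +1]                       # step left or step right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration rate

for episode in range(500):
    s = 0                                # start at the leftmost cell
    while s != n_states - 1:             # episode ends at the rightmost cell
        if random.random() < epsilon:    # explore occasionally
            a = random.choice(actions)
        else:                            # otherwise exploit what was learned
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0   # reward only at the goal
        best_next = max(Q[(s2, act)] for act in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the learned policy prefers stepping right in every cell.
print({s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)})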
There’s one teaching method that has become particularly useful and popular: deep learning. We will look at this type of machine learning, which uses many layers in a neural network to analyse data at different levels of abstraction, in our next blog post.
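As a small preview (a sketch with random, untrained weights, so the output is meaningless until training), stacking layers means each layer re-represents the previous layer’s output at a higher level of abstraction:

import numpy as np

def relu(x):
    return np.maximum(0, x)             # a common non-linearity between layers

x = np.array([0.2, 0.7, 0.1])           # raw input features
W1 = np.random.randn(4, 3)              # layer 1 weights (random for the sketch)
W2 = np.random.randn(2, 4)              # layer 2 weights

h = relu(W1 @ x)                        # layer 1: first level of abstraction
y = W2 @ h                              # layer 2: higher-level output
print(h, y)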
Artificial intelligence is changing. We now recognise that most things called ‘AI’ in the past were nothing more than advanced programming tricks. Machine learning techniques are the future of AI, though not necessarily the methods popular today. Machine learning is also a good way to recognise previously unrecognisable patterns, which makes this kind of pattern recognition one in a series of breakthroughs towards solving parts of the AI problem.
Author: AI.Business
If you like our articles, please subscribe to our monthly newsletter.