Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks, such as discovering proofs for mathematical theorems or playing chess, with great proficiency. Since then, artificial intelligence (AI) has steadily spread into everyday life.
What is Artificial Intelligence (AI)?
Artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The ideal characteristic of AI is its ability to rationalize and take actions. The term is applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.
Despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wide domains or in tasks requiring much everyday knowledge. However, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.
As technology advances, previous benchmarks that defined artificial intelligence become outdated. For example, machines that calculate basic functions or recognize text through methods such as optical character recognition are no longer said to have artificial intelligence, since this function is now taken for granted as an inherent computer function.
AI works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data.
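As a concrete illustration of learning from data through fast, iterative processing, the sketch below fits a simple pattern by repeatedly nudging a single parameter to reduce its error. The data, learning rate, and iteration count are illustrative choices, not taken from the text.

```python
# A minimal sketch of "learning from data": fit y = w * x by
# iteratively adjusting w to shrink the squared error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # pairs (x, y) following y = 2x

w = 0.0    # start with no knowledge of the pattern
lr = 0.05  # learning rate (illustrative value)

for _ in range(200):          # fast, repeated passes over the data
    for x, y in data:
        error = w * x - y     # how wrong the current guess is
        w -= lr * error * x   # nudge w in the direction that reduces the error

print(round(w, 2))  # converges toward 2.0, the pattern hidden in the data
```

No one told the program that the rule was "multiply by 2"; it recovered the pattern automatically from examples, which is the core idea the paragraph above describes.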
Artificial intelligence technologies
Machine learning
Automates analytical model building. It uses methods from neural networks, statistics, and operations research to find hidden insights in data without being explicitly programmed where to look.
Neural networks
Is a type of machine learning that is made up of interconnected units (like neurons) that process information by responding to external inputs, relaying information between each unit. The process requires multiple passes at the data to find connections and derive meaning from undefined data.
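To make the idea concrete, here is a minimal sketch of such units in Python: each one weighs its inputs, sums them, and responds through an activation function, and one unit's response is relayed as input to the next. The weights here are illustrative, not learned.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial unit: weighted sum of inputs passed through
    a sigmoid activation, producing a response between 0 and 1."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two interconnected units: the first unit's output is relayed
# as an input to the second, as in a small network.
h = neuron([0.5, 0.9], [0.8, -0.2], 0.1)  # responds to external inputs
out = neuron([h], [1.5], -0.5)            # responds to the relayed signal
print(round(out, 3))
```

In a real network, many such units are connected in parallel and the weights are adjusted over multiple passes at the data rather than fixed by hand.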
Deep learning
It uses huge neural networks with many layers of processing units, taking advantage of advances in computing power and improved training techniques to learn complex patterns in large amounts of data. Common applications include image and speech recognition.
Cognitive computing
It is a subfield of AI that strives for a natural, human-like interaction with machines. Using AI and cognitive computing, the ultimate goal is for a machine to simulate human processes through the ability to interpret images and speech, and then speak coherently in response.
Computer vision
It relies on pattern recognition and deep learning to recognize what’s in a picture or video. When machines can process, analyze and understand images, they can capture images or videos in real time and interpret their surroundings.
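A tiny sketch of the pattern-recognition step: sliding a small filter over a grid of brightness values to highlight where the image changes. The toy "image" and the vertical-edge kernel below are illustrative; real vision systems learn many such filters automatically.

```python
# A 4-wide "image": dark pixels (0) on the left, bright pixels (9) on the right.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

# A classic vertical-edge filter: responds where brightness jumps left-to-right.
kernel = [[-1, 1]]

def convolve(img, ker):
    """Slide the kernel over the image, summing elementwise products."""
    kh, kw = len(ker), len(ker[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            row.append(sum(ker[i][j] * img[r + i][c + j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

edges = convolve(image, kernel)
print(edges)  # strong responses (9) mark the boundary between dark and bright
```

Deep learning stacks many layers of learned filters like this one, which is how networks come to recognize whole objects rather than just edges.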
Natural language processing (NLP)
Is the ability of computers to analyze, understand and generate human language, including speech. The next stage of NLP is natural language interaction, which allows humans to communicate with computers using normal, everyday language to perform tasks.
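One early step in analyzing human language is breaking free text into countable tokens. The sketch below shows that step only; the sentence and the tokenizing pattern are illustrative, and real NLP systems go much further (parsing, meaning, generation).

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

sentence = "Computers analyze, understand and generate human language."
tokens = tokenize(sentence)   # words, stripped of punctuation and case
counts = Counter(tokens)      # how often each word appears

print(tokens)
print(counts["language"])
```

Counts like these feed the statistical models that let computers analyze and, eventually, generate language of their own.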
Graphical processing units
Are key to AI because they provide the heavy compute power that’s required for iterative processing. In fact, training neural networks requires big data plus compute power.
The Internet of Things
Generates massive amounts of data from connected devices, most of it unanalyzed. Automating models with AI will allow us to use more of it.