Did you know?

Artificial Intelligence (AI)

What is artificial intelligence?

Artificial intelligence is a branch of technology concerned with the design and programming of hardware systems, which can be computers or computer-controlled robots, as well as the development of systems equipped with characteristics considered typically human, such as visual perception, spatial and temporal awareness, decision-making, and the ability to reason, discover meaning, generalize, or learn from past experience.

Artificial intelligence is understood not only as the ability to calculate or process data, but also as encompassing the different forms of intelligence recognized by Gardner's theory. According to this theory of multiple intelligences, there are eight fundamental dimensions: verbal/linguistic, logical-mathematical, spatial, kinesthetic, musical, intrapersonal, interpersonal and naturalistic.

An intelligent system is created by trying to recreate one or more of these forms of intelligence which, although often defined as specifically human, can in fact be traced back to reproducible behaviors (patterns) that certain technologies can identify.

In conclusion, we can say that artificial intelligence represents the ability of systems or machines to imitate human intelligence as faithfully as possible, to solve problems and to achieve goals. Artificial Intelligence systems adapt, analyze data and predict future actions based on existing information.

How did artificial intelligence come about?

Alan Turing, the English computer scientist, cryptographer, mathematician and researcher, is considered the father of AI due to the so-called Turing test of machine intelligence, introduced in 1950. The test measures the intelligence of machines by the human ability to distinguish between man and machine: Turing argued that if a person could not distinguish between the responses of a machine and those of a human, then the machine could be considered intelligent.

Artificial intelligence was first discussed in 1956 at a summer conference at Dartmouth College in America. There, several programs capable of logical reasoning, especially related to mathematics, were introduced. The Logic Theorist program, developed by computer scientists Allen Newell and Herbert Simon, was able to prove mathematical theorems starting from given information. Thus, Allen Newell, Herbert Simon, John McCarthy, Marvin Minsky and Arthur Samuel, all present at that historic conference, became the founders and leaders of AI research. They and their students produced programs that the press of the time described as "amazing". AI's founders were optimistic about the future: Herbert Simon predicted that "within twenty years machines will be able to do anything a human can do." Marvin Minsky agreed, writing: "within a generation... the problem of creating 'artificial intelligence' will be substantially solved."

Also in 1956, artificial intelligence was founded as an academic discipline.

However, in the second half of the 1960s, it became obvious that what had been achieved so far in the field of artificial intelligence was no longer sufficient: humanity's needs went beyond machines and programs capable of proving more or less complex mathematical theorems.

The new trend was to look for solutions to problems closer to human reality.

 

Artificial Intelligence: Consciousness, Cognition and Problem Solving

Depending on how it reacts to the external environment, we can distinguish two types of artificial intelligence:

  1. Narrow AI

The learning algorithm is designed to perform a single task without human assistance. Here are some characteristics of this type of AI:

  • It is limited to the domain for which it was created by its programmers
  • It learns slowly, from thousands of similar examples
  • Knowledge cannot be transferred from one field to another

This is the artificial intelligence used today in most fields of activity. For example, translation, image recognition and recommendation applications use narrow AI. Virtual assistants such as Siri, Alexa and Bixby are also equipped with narrow artificial intelligence.

  2. General AI

It mimics complex human thought processes. The features of this type of artificial intelligence are:

  • It acts similarly to a human being
  • It learns by itself and reasons with the help of its operating system
  • It learns quickly from just a few examples and from unstructured data
  • It has the full range of human cognitive capabilities
  • The accumulated knowledge and tasks can be transferred from one field to another

General AI doesn't exist right now and could be the AI of the future.

How do computers learn?

One of the main steps in the history of artificial intelligence was taken when specific algorithms could be created that were capable of improving the behavior of computers (meaning their ability to act and make decisions), so that they learn through experience just as people do.

Developing algorithms that can learn from their mistakes is essential to creating intelligent systems that work in contexts for which programmers cannot foresee all possible developments.

Through machine learning, a computer is able to learn and perform a certain action even if that action has never been explicitly programmed.
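To make this concrete, here is a minimal, purely illustrative sketch in Python: instead of being given an explicit rule for telling two fruits apart, a tiny 1-nearest-neighbour classifier infers the rule from a handful of labelled examples. The fruits, measurements and labels are invented purely for illustration.

```python
# A minimal sketch of "learning from examples" (illustrative only):
# the program is never told what makes an apple an apple; it copies the
# label of the most similar known example (1-nearest neighbour).

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(examples, new_point):
    """Return the label of the closest known example."""
    nearest = min(examples, key=lambda ex: distance(ex[0], new_point))
    return nearest[1]

# Training examples: (weight in grams, diameter in cm) -> label
examples = [
    ((150, 7.0), "apple"),
    ((170, 7.5), "apple"),
    ((110, 5.5), "lemon"),
    ((120, 6.0), "lemon"),
]

print(predict(examples, (160, 7.2)))   # -> apple
print(predict(examples, (115, 5.8)))   # -> lemon
```

Real machine learning systems work the same way in principle, only with far more examples and features and with far more sophisticated models.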

Depending on the learning requirements, machine learning can be of three types:

  1. Machine Learning

Computers receive a huge amount of information that they analyze and react to without the need for human assistance or specific programming. This category includes virtual assistants that understand voice commands.

  2. Deep Learning

Computers learn from experience in the form of text, sound and images. Autonomous vehicles that can distinguish between people and objects fall into this category.

  3. Neural networks

Like biological neural networks (those found in the human brain), artificial neural networks are adaptive: they can modify their own structure to suit the information received in the different phases of learning. The artificial neurons exchange data and can carry out complex tasks, attempting to imitate human thinking. This is the type of learning considered the future of AI.
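To illustrate the idea, here is a minimal sketch of an artificial neural network, assuming only Python and the NumPy library; the network size, learning rate and the XOR task it learns are arbitrary choices made for this example.

```python
# A tiny two-layer neural network that adapts its weights until it has
# learned the XOR function from four examples (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)

# Four input examples and their expected XOR outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(a):
    """Append a constant column of ones so each layer learns a bias."""
    return np.hstack([a, np.ones((a.shape[0], 1))])

# Random initial weights: (2 inputs + bias) -> 4 hidden units -> 1 output.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(5, 1))

# Training: repeat forward pass, measure the error, nudge the weights
# in the direction that reduces it (backpropagation).
for step in range(20000):
    hidden = sigmoid(add_bias(X) @ W1)        # hidden layer activations
    output = sigmoid(add_bias(hidden) @ W2)   # network prediction

    error = output - y                        # how wrong the network is

    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2[:-1].T) * hidden * (1 - hidden)

    W2 -= 0.5 * add_bias(hidden).T @ grad_out
    W1 -= 0.5 * add_bias(X).T @ grad_hid

print(np.round(output, 2))   # should end up close to [[0], [1], [1], [0]]
```

After training, the weights have adapted so that the same small structure reproduces a function it was never explicitly programmed to compute.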

 

Artificial intelligence, part of our everyday life

Although, as you have already read, artificial intelligence has been talked about since the 1950s, it has seen unprecedented development in the last few years. The increase in computing power and the accumulation of huge amounts of data have enabled rapid progress in machine learning. Artificial intelligence now has multiple applications that help us with everyday tasks. Most of us use artificial intelligence daily, at various levels of our lives:

  • Internet search

When we search for information on the Internet, search engines learn from the data entered by users in order to provide the most relevant results possible.

  • Online shopping and advertising

Artificial intelligence is widely used to provide personalized recommendations to the users of shopping platforms, based on their previous searches or purchases (a simple sketch of this idea appears after this list).

  • Improving the customer shopping experience

With the help of artificial intelligence, retailers can build strategies adapted to the needs of their customers and thus strengthen their business position.

  • Automatic translation

Translation software, whether for written or spoken language, uses artificial intelligence both to translate and to constantly improve. It also makes automatic subtitling possible.

  • Gaming

This is an ideal environment for the development of artificial intelligence. Many games have an advanced AI algorithm behind them. This allows you to enjoy a smooth game and, above all, to develop a range of strategies.

  • Cyber security

AI systems help detect and combat cyber attacks and other cyber threats; relying on continuous data input, they learn to recognize attacks.

  • Medical services

In healthcare, artificial intelligence is used to analyze complex medical data and can predict a patient's risk of infection using advanced technology. There are also innovations that replace or improve older treatments.

  • Industrial robots

Artificial intelligence and machine learning capabilities have quickly made their way into industrial robotics. In the drive to improve productivity, manufacturers are constantly looking to move beyond the rigid, inflexible capabilities of standard industrial robots. Early adopters of the fusion of robotics and AI are reaping the benefits: the technology, although relatively new, is widely available and has an enormous positive impact on manufacturing processes.

  • Smart infrastructure

The global trend towards sustainable living has become evident in recent years, and smart city developers around the world are offering more sustainable alternatives every day. Smart thermostats, for example, adapt to the behavior of the modern consumer, with a focus on saving energy and resources in general.
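As a small illustration of the recommendation idea mentioned in the online shopping point above, here is a hedged sketch in plain Python: it suggests items bought by users whose past purchases overlap the most. The users, products and purchase histories are invented for the example; real platforms use far more elaborate models.

```python
# A toy recommender (illustrative only): users with similar purchase
# histories are assumed to like similar products in the future.
from collections import Counter

# Each user's purchase history (invented data).
purchases = {
    "ana":    {"laptop", "mouse", "keyboard"},
    "bogdan": {"laptop", "mouse", "headphones"},
    "carmen": {"novel", "cookbook", "tea"},
}

def recommend(user, purchases):
    """Suggest items bought by users with the most overlapping history."""
    own = purchases[user]
    scores = Counter()
    for other, items in purchases.items():
        if other == user:
            continue
        overlap = len(own & items)      # how similar the two users are
        for item in items - own:        # items the user doesn't own yet
            scores[item] += overlap
    return [item for item, score in scores.most_common() if score > 0]

print(recommend("ana", purchases))   # -> ['headphones']
```

The idea scales up naturally: the more two shopping histories overlap, the more weight one user's purchases get when recommending to the other.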

 AI is here to stay and grow!

 Article published by Daniela Popa