
Ethical Issues Of Artificial Intelligence And Robotics

Artificial Intelligence is one of the most discussed topics in the IT industry today. Its importance comes from its ability to help solve extremely difficult problems across industries such as education, health, commerce, transport, and utilities. In this article, we look at the ethical issues of Artificial Intelligence and robotics.

Summary of Ethical Issues Of Artificial Intelligence

Everyone should think about the ethics of the work they do, and the work they choose not to do. Artificial Intelligence and robots often seem like fun science fiction, but in fact they already affect our daily lives. For example, services like Google and Amazon use AI to help us find what we want, and they learn both from us and about us as we use them. The USA and some other countries and organizations now employ robots in warfare.

Ethical Issues Of Artificial Intelligence And Robotics

Humanity

Artificially intelligent bots are becoming better and better at modeling human conversation and relationships. In 2014, a bot named Eugene Goostman was claimed to have passed a Turing test for the first time, fooling around a third of the human judges into thinking they had been talking to a human being. This milestone is only the start of an age in which we will frequently interact with machines as if they were human.

Cybersecurity

Cybersecurity is one of the biggest concerns of governments and companies, especially banks. In 2015, attackers were reported to have stolen as much as $1 billion from banks in Russia, Europe, and China, and in 2018 roughly half a billion dollars' worth of cryptocurrency was taken from the exchange Coincheck. Artificial intelligence can help protect against these vulnerabilities, but it can also be used by hackers to find new, sophisticated ways of attacking institutions.

Transparency

A related challenge is transparency of function. It should be clear to users, in general terms, what a robotics and autonomous system (RAS) does and what it is unable to do, why it is carrying out a task at a particular place or time, what data it is collecting for that purpose, and whether that data is being shared. The system needs to be able to give an account of itself in everyday terms that users understand and to respond promptly to requests to change its behavior.

Racist robots

Though artificial intelligence can process information at a speed and scale far beyond that of humans, it cannot always be trusted to be fair and neutral. Google and its parent company Alphabet are among the leaders in artificial intelligence, using AI to identify people, objects, and scenes. But it can go wrong, such as when a camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people. AI systems are created by humans, who can be biased and judgmental.

Social Attachment

Over time, people may form emotional bonds with RAS technologies, as they already do with devices such as mobile phones, tablets, and cars. However, there may be a particularly strong tendency to develop ties with animate systems that have a social function, such as companion robots. Careful consideration needs to be given to the design of these systems to ensure that the relationships people form with them do not interfere with other aspects of their lives.