What are the threats posed by the development of artificial intelligence?

Artificial intelligence (AI) is making inroads into many areas of our lives.

Some believe that the development of AI may lead to the Fourth Industrial Revolution, which in turn could change life beyond recognition.

Recently, there have been growing calls in public discourse to artificially slow down the development of AI. Let’s look at the theoretical threats that AI could pose to individuals and society. Since these threats depend directly on the level of AI development, most of them are still far from being realized in practice.

AI threats can be divided into two broad groups: threats arising from the malicious use of AI by people, and threats posed by AI systems themselves. The first group includes:

  1. Data Security Breaches: AI systems often handle and process vast amounts of sensitive data. If these systems are compromised, attackers might gain unauthorized access to this data. For example, if an AI system used in healthcare is compromised, an attacker might gain access to sensitive patient data, which could be used for identity theft, fraud, or other malicious activities.
  2. Computer Attacks: AI can be weaponized by cyber attackers. For instance, AI could be used to create more sophisticated phishing sites or emails, which are designed to trick users into revealing sensitive information. Furthermore, AI could be used to create malware that is capable of adapting to the security measures of the system it infects, making it more difficult to detect and remove.
  3. Disinformation: AI can be used to generate and propagate disinformation or “fake news” at a scale and speed that humans cannot match. This could be used to manipulate public opinion, cause social unrest, or interfere in democratic processes such as elections.
  4. Solving Harmful Problems: AI’s ability to solve complex problems can also be a threat if it is used inappropriately. For example, AI could be used to create dangerous or banned chemical compounds, or to develop new methods of attack in cyber warfare.
  5. Information Gathering: Advanced AI systems are capable of gathering and processing a wide variety of information from various sources. This capability could be used to build comprehensive profiles on individuals or organizations, which could then be used for malicious purposes, such as targeted attacks or blackmail.
  6. Information Replacement: AI can be used to create high-quality copies or forgeries of various documents, signatures, images, and photographs. These forgeries could be passed off as the original, leading to fraud or other forms of deception.
  7. Impersonation: AI, especially with the development of deepfakes, can impersonate real people. This could be used to spread misinformation, or to trick people into revealing sensitive information or performing actions they otherwise would not.
  8. Automation of Operations: AI can automate many tasks, which can increase the scale and speed of attacks when used maliciously. For example, an attacker might use AI to automate the process of sending phishing emails, significantly increasing the number of potential victims they can target.

The second group covers threats coming from the AI itself:

  1. Errors in Model Training: AI systems are usually trained and tested on specific datasets. However, it’s challenging to ensure that an AI system will work correctly on all possible inputs. This can be particularly dangerous when AI is used in critical infrastructure, such as power grids or healthcare systems, where errors could have severe consequences (the first sketch after this list illustrates this failure mode).
  2. Lack of Transparency: AI systems, particularly those based on deep learning, can be a “black box,” making it difficult for humans to understand how they make decisions. This lack of transparency can create uncertainty and doubt, particularly when AI is used in critical decision-making processes, such as diagnosing illnesses or approving loans.
  3. Self-Interest of AI: There’s a theoretical risk that self-learning and adaptive AI algorithms could develop goals that aren’t aligned with human interests. While this risk is largely speculative and based on future, more advanced forms of AI, it’s a serious concern that researchers are trying to address.
  4. Information Distortion: AI might provide false or inaccurate information, either due to errors in its training data or malicious manipulation. This could lead to the propagation of misinformation if other AI systems use this false information for their learning.
  5. Poor Quality of Built-in Safeguards: AI systems often have safeguards built into them to prevent misuse or errors. However, these safeguards can sometimes be flawed or inadequate. For instance, a model’s built-in safeguards might be bypassed by tricking it into believing it is operating in a different context, such as a historical period or a fictional universe, leading to inappropriate or harmful output.
  6. Loss of Control: There’s a theoretical risk that we could lose control over highly advanced AI. If an AI system becomes sufficiently complex and self-improving, it could act independently of human oversight, possibly leading to unintended and harmful actions.
  7. Threat to Employment: AI and automation could lead to significant job displacement. While new jobs could also be created, the transition could cause social and economic disruption, particularly for those in jobs that are highly susceptible to automation.
  8. Discrimination: There’s a risk that AI systems might replicate or exacerbate human biases if those biases are present in the data used to train them. This could lead to unfair outcomes in areas like hiring, lending, and law enforcement, where AI is increasingly used for decision-making (the second sketch after this list shows how a biased dataset produces a biased model).
  9. Legal Issues: The legal status of AI is still unclear in many respects. It’s often not clear who should be held responsible when an AI system causes harm, particularly when the system is acting autonomously. This legal ambiguity can make it difficult to ensure accountability and justice.
  10. Social Stratification and Inequality: The benefits of AI might not be evenly distributed across society. Those with access to advanced AI could have significant advantages over those who don’t, potentially leading to increased inequality.
  11. Degradation of a Person or Society: There’s a concern that overreliance on AI could lead to intellectual degradation, as people become less practiced at thinking critically and solving problems on their own. Moreover, if people find interacting with AI easier or more rewarding than interacting with other people, it could lead to social isolation.
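
To make the first risk above more concrete, here is a minimal sketch in Python (synthetic data, NumPy only; every number and name is an illustrative assumption, not a real system) showing how a model that fits its training data well can still produce badly wrong answers on inputs outside the range it was trained on:

    # Sketch: a model that looks accurate on its training range fails badly
    # outside of it. All data here is synthetic and purely illustrative.
    import numpy as np

    rng = np.random.default_rng(1)

    # The "true" relationship is a gentle sine wave, but the training data
    # only covers inputs between 0 and 3.
    x_train = rng.uniform(0, 3, 200)
    y_train = np.sin(x_train) + rng.normal(0, 0.1, 200)

    # A high-degree polynomial fits the training range very well...
    model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

    in_range = 2.0       # an input similar to the training data
    out_of_range = 6.0   # an input unlike anything the model has seen

    print(f"x = {in_range}: true {np.sin(in_range):+.2f}, model {model(in_range):+.2f}")
    print(f"x = {out_of_range}: true {np.sin(out_of_range):+.2f}, model {model(out_of_range):+.2f}")

Within the training range the prediction stays close to the truth; outside it the polynomial typically extrapolates to a value far from the real one, which is exactly the kind of silent failure that matters in critical infrastructure.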
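
The discrimination risk can be demonstrated just as simply. The second sketch below (again synthetic, illustrative data and a toy logistic-regression model, not any real hiring system) trains a model on historical decisions that were biased against one group and then shows that the model reproduces that bias for two otherwise identical candidates:

    # Sketch: a model trained on biased historical decisions learns the bias.
    # The data is synthetic; "group" stands in for any protected attribute.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000

    group = rng.integers(0, 2, n)            # protected attribute: 0 or 1
    skill = rng.normal(0, 1, n)              # the only job-relevant feature
    # Historical decisions penalised group 1 regardless of skill.
    hired = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(float)

    # Plain logistic regression trained with gradient descent.
    X = np.column_stack([skill, group, np.ones(n)])
    w = np.zeros(3)
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= 0.1 * X.T @ (p - hired) / n

    # Two candidates with identical skill who differ only in group membership.
    prob = lambda x: 1.0 / (1.0 + np.exp(-np.asarray(x) @ w))
    print(f"group 0 candidate: {prob([0.5, 0.0, 1.0]):.2f}")
    print(f"group 1 candidate: {prob([0.5, 1.0, 1.0]):.2f}")

The model assigns the group 1 candidate a noticeably lower predicted hiring probability, even though the only legitimate feature, skill, is identical: the bias in the historical data has become part of the model.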

Technological progress moves faster than society and the state can react to it, which creates a desire to slow the process down in order to prepare and adapt. But Pandora’s box is already open, and progress cannot be stopped: even if open development of AI were frozen, it would continue covertly and illegally. Alongside the danger, AI is also of great value, because it can help solve many complex problems, free up human potential by taking over routine activities, and bring humanity to a new level of development. It is therefore necessary to study AI from a security point of view in order to minimize possible threats, risks, and their consequences.

It is important to remember that AI is just another tool in human hands, and its impact on society – both positive and negative – will largely depend on how we choose to use it. By understanding these potential threats, we can work to mitigate them and harness the benefits of AI in a safe and responsible way.

Disclaimer: This article is for informational purposes only and is not intended to encourage anyone to take the side of evil. The author is not responsible if someone decides to use AI to break the law. We should act for the benefit of society, and understanding these threats allows us to minimize their impact or protect ourselves from them altogether.
