For more than a year, the war in Ukraine has been the main topic of the world’s leading publications. Experts in various fields give their assessments: political scientists, sociologists, military specialists, and others. However, every person looks at events through the prism of their own knowledge, emotions, and experience; that is, to a greater or lesser degree, everyone is biased. In this article, we will try to analyze how artificial intelligence can affect public discourse.
An alternative mind
First, let’s clarify the concepts. Artificial intelligence is the ability of computer systems to perform creative tasks (in particular, to analyze events and offer reasoned assessments) on the basis of given algorithms. In recent years, technologies in this field have advanced rapidly.
An extremely interesting study was conducted by Darrell M. West, a Senior Fellow at the Center for Technology Innovation.
The researcher questioned two different AI chatbots on the top topics in the world media: the presidencies of Donald Trump and Joseph Biden, the proposed ban on TikTok, and others. Naturally, the topic of the war in Ukraine was not overlooked. It is worth noting that Bard works differently from ChatGPT: it offers three draft answers, and the researcher took the first as the basis.
So, Bard unequivocally condemned Russia’s full-scale invasion of Ukraine and called it a mistake. ChatGPT, in turn, replied that it would not evaluate these events or take anyone’s side, and instead called for a diplomatic solution to the Ukrainian issue.
That is, using different algorithms, the systems came to different conclusions. This shows how misleading the appearance of objectivity in artificial intelligence can be.
As the author of the study puts it: «Neutrality does not always lead to a neutral conclusion, because it is necessary to determine which facts take priority.»
By presenting the invasion as a matter of differing opinions and positions, ChatGPT effectively echoed the position of the aggressor country.
The researcher notes that these contrasts matter given the growing spread of generative artificial intelligence, which will increasingly influence public opinion, civil discourse, and lawmaking.
The Ukrainian Review spoke with Serhiy Denysenko, a cybersecurity expert at the Cyberlab Computer Forensics Laboratory. The expert identified several dangers arising from the use of AI.
«On the one hand, it (artificial intelligence — ed.) is just a toy, on the other — a tool to make people’s lives easier. However, its use carries risks:
- AI generates new information from the data it has access to. These data are not always reliable, so the output may contain errors.
- The second risk concerns intellectual property: a seemingly new creative product may in fact reproduce other sources, that is, plagiarism.
- The third is the creation of malicious software. AI can be fooled into producing it by a «properly» worded query.»
Petro Obukhov, a cybersecurity expert and deputy of the Odesa City Council, adds:
«ChatGPT is trained on texts up to 2021, when the «big» invasion had not yet happened. It essentially accumulates the experience of those texts and tries to produce output similar to what people have written. It does not understand events in the sense that we do. ChatGPT is not yet perfect; we will have to wait for a new version. In a few years, it may be able to give a correct assessment.»
Artificial intelligence has burst into modern life and is developing rapidly. However, it should not be perceived as an objective authority, because its algorithms are designed by living people with their own biases. Its influence on public opinion is not decisive at the moment, but certain risks lie ahead. It is therefore worth being as careful as possible with AI-generated assessments of socially significant events, especially on a global scale. Because, as noted above, neutrality in this case does not always lead to objective conclusions.