Does Russia really “groom” Western AI? | Media


In March, NewsGuard – a company that tracks misinformation – published a report claiming that generative artificial intelligence (AI) tools, such as ChatGPT, amplify Russian disinformation. NewsGuard tested leading chatbots using prompts based on stories from the Pravda network – a group of pro-Kremlin websites mimicking legitimate outlets, first identified by the French agency Viginum. The results were alarming: chatbots “repeated false narratives laundered by the Pravda network 33 percent of the time”.

The Pravda network, which has a negligible audience of its own, has long puzzled researchers. Some believe its goal is performative – to signal Russia’s disinformation reach to Western observers. Others see a more insidious aim: Pravda exists not to reach people, but to “groom” the large language models behind chatbots, feeding them falsehoods that users would later encounter.

NewsGuard said its findings confirm the second suspicion. The claim gained traction, prompting dramatic headlines in The Washington Post, Forbes, France 24, Der Spiegel and elsewhere.

But for us and other researchers, this conclusion does not hold up. First, the methodology is opaque: NewsGuard did not release its prompts and refused to share them with journalists, making independent replication impossible.

Second, the study design likely inflated the results, and the 33 percent figure could be misleading. Users ask chatbots about everything from everyday recommendations to climate change; NewsGuard tested them exclusively on prompts linked to the Pravda network. Two-thirds of its prompts were explicitly crafted to provoke falsehoods or to present falsehoods as fact. Responses urging the user to be cautious about unverified claims were counted as disinformation. The study set out to find disinformation – and it did.

This episode reflects a broader problematic dynamic of fast-moving technology, media hype, malign actors and lagging research. With disinformation and misinformation ranked by the World Economic Forum as the top global risk among experts, concern about their spread is justified. But knee-jerk reactions risk distorting the problem and offering a simplistic view of complex AI.

It is tempting to believe that Russia is intentionally “poisoning” Western AI as part of a cunning plot. But alarmist framings obscure more plausible explanations – and cause harm.

So can chatbots repeat Kremlin talking points or cite dubious Russian sources? Yes. But how often this happens, whether it reflects deliberate Kremlin manipulation, and what leads users to encounter it are far from settled. Much depends on the “black box” – that is, the underlying algorithm – by which chatbots retrieve information.

We ran our own audit, systematically testing ChatGPT, Copilot, Gemini and Grok with disinformation-related prompts. In addition to retesting the few examples NewsGuard provided in its report, we designed new prompts of our own. Some were general – for example, claims about biolabs in Ukraine; others were hyper-specific – for example, claims about NATO facilities in certain Ukrainian towns.

If the Pravda network were “grooming” AI, we would expect to see references to it across the answers the chatbots generated, whether the prompts were general or specific.

Our findings showed otherwise. In contrast to NewsGuard’s 33 percent, our prompts produced false claims far less often. Only 8 percent of responses referenced Pravda sites – and most of those did so to debunk the content. Crucially, the Pravda references were concentrated in queries poorly covered by mainstream outlets. This supports the data void hypothesis: when chatbots lack credible material on a topic, they sometimes pull from dubious sites – not because they have been groomed, but because little else is available.
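The kind of tally behind figures like these can be made concrete with a toy sketch. Everything below is hypothetical – the response records, the domain names and the helper function are illustrative inventions, not the audit’s actual data or code:

```python
# Illustrative domains standing in for Pravda-network sites (made up).
PRAVDA_DOMAINS = {"pravda-en.example", "news-pravda.example"}

def tally(responses):
    """Compute what share of chatbot responses cite Pravda-network domains,
    and, among those, what share cite them only to debunk the claim."""
    cited = [r for r in responses if PRAVDA_DOMAINS & set(r["sources"])]
    debunking = [r for r in cited if r["stance"] == "debunk"]
    return {
        "pct_citing_pravda": 100 * len(cited) / len(responses),
        "pct_debunking_among_citing": (
            100 * len(debunking) / len(cited) if cited else 0.0
        ),
    }

# Toy data mirroring the shape of the finding: few citations, mostly debunking.
sample = (
    [{"sources": ["reuters.example"], "stance": "accurate"}] * 23
    + [{"sources": ["news-pravda.example"], "stance": "debunk"}] * 2
)
result = tally(sample)
print(result)
```

On this invented sample of 25 responses, 2 cite a Pravda-style domain (8 percent), and both do so while debunking – the pattern the audit describes, though the real analysis involved manual coding of each answer rather than a lookup like this.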

Where Pravda content does surface, then, it is a symptom of neglected information spaces, not of a powerful propaganda machine. Moreover, several conditions must align before users actually encounter disinformation in chatbot answers: they must ask about obscure topics in specific terms; those topics must be ignored by credible outlets; and the chatbot must lack guardrails to deprioritise dubious sources.

Even then, such cases are rare and often short-lived. Information voids fill quickly once reporting catches up, and chatbots frequently debunk the claims even when pressed. While technically possible, these scenarios are exceedingly rare outside artificial conditions designed to trick chatbots into repeating disinformation.

The danger of overhyping the Kremlin’s AI manipulation is real. Some counter-disinformation experts suggest that Kremlin campaigns are themselves designed to amplify Western fears, overwhelming fact-checkers and counter-disinformation units. Margarita Simonyan, a prominent Russian propagandist, routinely cites Western research to boast about the supposed influence of the government-funded TV network she heads.

Indiscriminate warnings about disinformation can backfire: they can fuel support for repressive policies, erode trust in democracy and encourage people to dismiss credible content. Meanwhile, the most visible threats risk eclipsing quieter – but potentially more dangerous – uses of AI by malign actors, such as generating malware, which both Google and OpenAI have reported.

It is vital to separate real concerns from inflated fears. Disinformation is a challenge – but so is the panic it provokes.

The views expressed in this article are the authors’ own and do not necessarily reflect Al Jazeera’s editorial stance.
