We can all agree that concerns have arisen about disruptive artificial intelligence and the possibility that it may someday take over our work and personal lives. Because fact-checkers cannot forecast the future, let's ponder that idea only briefly. What we can do is clarify what AI means for how information is generated and consumed.
In the digital age, misinformation and its consequences have become widespread problems, especially on social media platforms.
The rapid rise of social media and advances in artificial intelligence (AI) have expedited the spread of false information, with potential social, economic, and political ramifications. With open-source AI, anyone and everyone has the power to create whatever they like.
We hosted a webinar with several professionals from the communication and education sector, along with a journalist, to discuss the difficulty of navigating today's world of disinformation and the role of AI in either curbing or fuelling it.
Daniel Mutembesa is from the Makerere University AI Research Lab, which has been conducting research since 2010, applying AI in agriculture, health, transport, and other industries. Throughout the conversation, he pointed to the use of AI to replicate human-performed tasks that require computational ability, particularly those we take for granted or are most likely to repeat in the near future.
"However, what misinformation does is undermine the potential of public trust, distort the narrative, and jeopardise the integrity of the information ecosystem, and the ecosystem is where democracy thrives… In the area of misinformation, AI has become a tool that exacerbates this vice," Daniel said.
However, he notes that every situation has a flip side. While one side magnifies the problem of disinformation, the other seeks answers and works to put out the flames. But my concern is whether these two scales are in balance, or whether one outweighs the other. What role do you play in this equation?
Kunle Adebajo, a Nigerian investigative journalist and fact-checker at HumAngle, describes AI as a double-edged weapon that has been wielded, to some extent, by various actors. As an example, he points to Code4Africa, an organisation working in civic tech and open data labs that uses machine learning and artificial intelligence to identify patterns of abusive language and violence. Code4Africa recently released a programme that helps female journalists and politicians protect themselves from online harassment.
Deepfakes and large language models, which have transformed the field for fact-checkers, are the game-changers, according to Kunle. Extremist networks have used bots, making it simple for such organisations to disseminate false material with little effort. People also readily accept certain false information that aligns with their personal prejudices.
When someone sees an image or video bearing telltale signs of AI generation, there may be warning flags they can rule out on their own. However, no tool can show for certain that a given piece of content was generated by an AI program, which sharpens the debate on information overload and on how easily people can identify fake information.
Dr Sarah Namusoga, a lecturer in the Department of Journalism and Communication at Makerere University, raised ethical concerns about the use of AI in spreading misleading information. Since AI learns from the data it is given, biased input will make it biased in turn when it shares information drawn from that knowledge.
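The mechanism Sarah describes can be made concrete with a minimal sketch. The data and the classifier below are entirely hypothetical, not anything discussed in the webinar: a naive keyword model trained on a skewed set of examples, where content from one region is disproportionately labelled "false", learns to flag that region's content regardless of its accuracy.

```python
# Minimal sketch of bias propagation: a toy keyword classifier
# trained on deliberately skewed, hypothetical examples.
from collections import Counter

# Hypothetical training set: content about "region A" is
# disproportionately labelled "false".
training = [
    ("election results region A", "false"),
    ("vaccine rollout region A", "false"),
    ("election results region B", "true"),
    ("vaccine rollout region B", "true"),
]

# Count how often each word co-occurs with each label.
word_label_counts = {"true": Counter(), "false": Counter()}
for text, label in training:
    word_label_counts[label].update(text.split())

def predict(text):
    """Label new text by whichever class its words co-occurred with more."""
    scores = {
        label: sum(counts[w] for w in text.split())
        for label, counts in word_label_counts.items()
    }
    return max(scores, key=scores.get)

# The model has learned the region, not the truth: an unrelated,
# harmless statement is flagged purely by association.
print(predict("weather update region A"))  # "false"
print(predict("weather update region B"))  # "true"
```

Real systems are far more sophisticated, but the failure mode is the same: whatever regularities exist in the training data, accurate or not, are reproduced in the output.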
She also explained the term "echo chambers," which refers to a setting in which individuals are exposed only to information that confirms their preconceived notions or biases. The trouble with this is that different actors can impose their own agendas on other communities or individuals.
"As we all know from our use of social media, AI algorithms will take note of the information you are accessing and then tailor information around that. As a result, you will end up in an information echo chamber and may spread false information if you are not careful," said Sarah.
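The feedback loop behind this tailoring can be sketched in a few lines. The topics, user, and ranking rule below are all hypothetical, chosen only to show the shape of the loop: an engagement-driven recommender ranks content by past clicks, so every click on one viewpoint pushes more of that viewpoint to the top of the feed.

```python
# Toy sketch of an engagement-driven feed (hypothetical topics and user):
# each click on a topic pushes that topic higher in the next ranking.
from collections import Counter

topics = ["politics-left", "politics-right", "sports", "science"]
clicks = Counter()

def recommend():
    # Rank topics by accumulated clicks; unclicked topics sink.
    return sorted(topics, key=lambda t: -clicks[t])

def simulate_user(preferred, rounds=5):
    """A user who always clicks their preferred topic, round after round."""
    for _ in range(rounds):
        recommend()            # feed is shown...
        clicks[preferred] += 1  # ...and the preferred topic gets the click
    return recommend()

# After a few rounds, the preferred topic dominates the feed.
print(simulate_user("politics-left"))
```

The loop is self-reinforcing: the ranking shapes the clicks, and the clicks shape the next ranking, which is exactly how a user drifts into the echo chamber Sarah describes.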
As journalists and fact-checkers, it is clear from this dialogue that we must establish a solid ethical base. Some suggestions centred on using reliable data to prevent errors and bias in AI. We should also be careful not to get carried away when AI exhibits traits of human nature; if we do, we may neglect to double-check the data these technologies give us.
In addition, Kunle Adebajo suggests that we scrutinise AI-generated information just as closely as we would any human source. To strike a balance with AI, researchers, journalists, creators, and communication experts should collaborate. Meanwhile, children and adults alike should be grounded in media and information literacy so they can make informed decisions at every stage of education.
Bella Twine – Managing Editor, Debunk Media Initiative