SDG Blog

Volume 27 | No.11 | November 2023
The Urgent Case for Information Integrity

By Melissa Fleming, Under-Secretary-General of the United Nations Department of Global Communications

The case for information integrity has rarely been more compelling, or more urgent.

In all my years in communications, I can’t say I ever worked in such a troubled environment — an information ecosystem so polluted that voices for positive change are struggling to make themselves heard.

The potential impacts of this — on democracy, human rights, and progress on the Sustainable Development Goals — are devastating.

It wasn’t meant to be like this. When digital platforms first arrived, we communicators were so excited. For the UN, they held great potential to engage people directly with our advocacy and move them to act to improve the world.

And it’s true these tools have brought many benefits — revolutionizing communications for everyone, everywhere, connecting those crying out for change, bringing together the isolated, and reuniting the displaced.

But we’ve also seen a darker side. Digital platforms have enabled the proliferation of lies and hate on an industrial scale, allowing malicious actors to pump harmful content into our public sphere, day in, day out, over many years.

We’ve all seen them: Snake oil salesmen persuading people to refuse life-saving vaccinations or cancer treatments. Fossil fuel companies undermining climate action for profit. Malicious actors stirring up old fears and hatreds for nefarious and violent ends.

Now, the fog of war is driving the spread of hate and lies online — resulting in dangerous errors with real-time, real-world consequences. Just as in the early days of Russia’s invasion of Ukraine, demand for information is sky-high. Minute by minute, we’re glued to social media, checking for updates on the violence in Gaza and Israel. Horrified and anxious, we can’t look away.

Related hate speech, mis- and disinformation — already rampant — are flooding social media feeds, warping perceptions, and risking further violence. In this context especially, hate lands on fertile ground.

These voices aren’t new. But the global power of social media has meant harmful content can be instantly transmitted across the world, infecting millions of minds, eroding trust in science, and seeding hatred potent enough to spark bloodshed.

UN peacekeeping operations themselves are under attack, targeted with false allegations at a scale and speed they are not equipped to address. This mis- and disinformation is threatening staff safety and hampering life-saving operations in conflict areas.

This has happened against a wider backdrop of rising online hate. Across the board, algorithms that prioritize engagement above all else have driven polarizing views into the mainstream, normalizing antisemitism, Islamophobia, racism, and other hate speech in the process.

Now, rapid developments in generative AI are taking online hate speech, mis- and disinformation to new levels. Disinformation actors have gained a potent, low-cost technology for creating high-quality but fake image, audio, and video content at scale. AI also makes it easier to target and personalize that content, and often leaves no fingerprints behind.

The UN has long been working on multiple fronts to tackle this crisis — stepping up our online communications to elevate facts and science and working with the platforms to reduce the spread of harmful content.

We’ve had successes — teaming up with large platforms to highlight reliable information on COVID-19 and the climate, amplifying trusted messengers, and educating users on how to slow the spread.

But now the time has come to massively ramp up our response and tackle this crisis as a global priority.

The Secretary-General has made it crystal clear: we cannot go on like this.

Published in June, his policy brief on information integrity on digital platforms lays out nine principles and recommendations that could serve as the basis for a UN code of conduct firmly rooted in human rights.

Social media platforms are often compared to digital town squares. If that’s true, then we want them to be welcoming spaces that enable dialogue and debate, where hate and lies are no longer artificially amplified by algorithms, and where guardrails are enforced to safeguard vulnerable groups.

The UN is seeking action on a range of fronts to achieve this – I’ll highlight just a few.

First, we want to disincentivize online harms. Too many business models rely on algorithms that win attention by pushing extreme content to users, prioritizing engagement over human rights, privacy, and safety.

Instead, we want to encourage alternative revenue streams and models and a new culture of innovation that embraces safety and privacy by design for all internet users, everywhere.

Second, we want meaningful transparency from digital platforms.

Researchers need access to hard data to quantitatively measure the true spread of hate speech, mis- and disinformation, and assess how well current efforts to counter online harms are working — or not. Sober solutions require sober analysis.

There are reasons for hope. We are looking to the implementation of regulatory efforts such as the EU Digital Services Act in the hope this will lead to more transparency in other jurisdictions.

Yet it will be important, from a global perspective, that the interests of all communities are served. We must be careful to guard against a transparency divide.

Third, we want to empower internet users by equipping them with the skills to think critically about the content they see, and with the algorithmic awareness to understand why platforms are pushing it to them — giving them a more accurate view of the world beyond the reality social media creates for them.

We have no illusions here — we know that tech changes faster than policy. It’s happening in front of our eyes, with the huge leaps in generative AI. We will continue to demand that these tools be designed safely, responsibly, and ethically as we go forward.

We are now engaged in a broad and inclusive consultation process on the development of the code, with the nine principles and recommendations in the policy brief as an entry point. We aim to finalize the code by mid-2024 and hope that Member States will acknowledge it at the Summit of the Future.

It's vital we keep this momentum going. Together, we can sow the seeds of a hopeful digital future, restore balance to our information ecosystem, and integrity to our online public sphere.