MERDA – A Framework For Countering Disinformation

Yesterday, at a conference about disinformation, I jokingly coined the acronym MERDA (Monitor, Educate, React, Disrupt, Adapt) for countering disinformation. Now I’ll put the pretentious label “framework” on it and describe what I mean by that. While this may not seem like a very technical topic, fit for a tech blog, it in fact has many technical aspects, as disinformation today is spread through technical means (social networks, anonymous websites, messengers). The “Disrupt” part, in particular, is quite technical.

Monitor – in order to tackle disinformation narratives, we need to monitor them. This means media monitoring tools (including social media) and building reports on rising narratives that may potentially be disinformation campaigns. These tools rely on a lot of scraping of online content, and on consuming APIs where such exist and are accessible. Notably, Facebook removed much of its API access to content, which makes it harder to monitor for trends. It has to be noted that this doesn’t mean monitoring individuals – it’s just about trends, keywords and phrases – sometimes known, sometimes unknown (e.g. the tool can look for very popular tweets, extract the key phrases from them, and then search for those). Governments can list their “named entities” and keep track of narratives/keywords/phrases relating to these named entities (ministers, the prime minister, ministries, parties, etc.).
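
To make the monitoring idea concrete, here is a minimal sketch (in Python, standard library only) of the kind of phrase-spike detection such tools perform, assuming posts have already been collected via scraping or APIs. The bigram-based phrase extraction, the stopword list and the thresholds are illustrative assumptions, not a description of any particular product.

```python
# A minimal sketch of the "Monitor" step: extract candidate key phrases from
# collected posts and flag those whose frequency spikes against a baseline.
# All thresholds and the stopword list are illustrative assumptions.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are", "for"}

def key_phrases(text: str, n: int = 2) -> list[str]:
    """Return lowercase word bigrams that contain no stopwords."""
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def spiking_phrases(todays_posts: list[str], baseline: Counter,
                    min_count: int = 20, spike_ratio: float = 5.0) -> list[str]:
    """Phrases that appear far more often today than in the baseline period."""
    today = Counter(p for post in todays_posts for p in key_phrases(post))
    return [phrase for phrase, count in today.items()
            if count >= min_count and count > spike_ratio * (baseline[phrase] + 1)]
```

In practice the baseline would come from historical counts, and the flagged phrases would go to a human analyst for review rather than straight into a report.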

Educate – media literacy, including social media literacy, is a skill. Recognizing “Your page will be disabled if you don’t click here” scams is a skill. Being able to recognize logical fallacies and propaganda techniques is also a skill, and it needs to be taught. Ultimately, the best defense against disinformation is a well-informed and prepared public.

React – public institutions need to know how and when to react to certain narratives. It helps if they know them (through monitoring), but they also need so-called “strategic communications” in order to respond adequately to disinformation about current events – debunking, pre-bunking and giving the official angle (note that I’m not saying the official angle is always right – it sometimes isn’t, which is why it has to be supported by credible evidence).

Disrupt – this is the hard part: how to disrupt disinformation campaigns. How to identify and disable troll farms, which engage in coordinated inauthentic behavior – sharing, liking, commenting, cross-posting in groups – creating an artificial buzz around a topic. Facebook is, I think, quite bad at that – this is why I have proposed local legislation that requires following certain guidelines for identifying troll farms (groups of fake accounts). Then we need a mechanism to take them down that takes freedom of speech into account – i.e. the possibility that someone is not, in fact, a troll, but merely a misled observer. Fortunately, the Digital Services Act provides for out-of-court appeals of moderation decisions.
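
As an illustration of what identifying coordinated inauthentic behavior can mean technically, here is a hedged sketch of one common signal: pairs of accounts that repeatedly share the same links within minutes of each other. The input format, time window and thresholds are my own assumptions – real platforms combine many more signals.

```python
# A rough sketch of one coordination signal: accounts that repeatedly share
# the same URLs within a short time window. Field names, the window and the
# thresholds are assumptions for illustration, not any platform's actual logic.
from collections import defaultdict
from itertools import combinations

def coposting_pairs(shares, window_seconds=300, min_coposts=5):
    """shares: iterable of (account_id, url, unix_timestamp).
    Returns account pairs that co-shared many URLs within the time window."""
    by_url = defaultdict(list)
    for account, url, ts in shares:
        by_url[url].append((account, ts))

    pair_counts = defaultdict(int)
    for url, posts in by_url.items():
        posts.sort(key=lambda p: p[1])
        for (a1, t1), (a2, t2) in combinations(posts, 2):
            if a1 != a2 and abs(t1 - t2) <= window_seconds:
                pair_counts[tuple(sorted((a1, a2)))] += 1

    # Frequent co-posters are candidates for human review, not automatic
    # takedown -- a pair may simply follow the same pages.
    return {pair: n for pair, n in pair_counts.items() if n >= min_coposts}
```

The output is deliberately a list of candidates for review rather than a takedown decision – exactly because a frequent co-poster may just be a misled observer.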

The “disrupt” part is not just about troll farms – it’s about fake websites as well. Tracking linked websites, identifying the flow of narratives through them and trying to find the ultimate owners is a hard and quite technical task. We know that there are thousands of such anonymous websites that repost disinformation narratives in various languages – but taking down a website requires good legal reasons. “I don’t like their articles” is not a good reason.
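
One technical approach to linking such websites is clustering them by shared fingerprints in their HTML – for example the same analytics or ad-publisher IDs, which often hint at common ownership. Below is a rough sketch under that assumption; a real pipeline would also use TLS certificates, DNS records and registration data.

```python
# A hedged sketch for clustering anonymous websites by shared technical
# fingerprints (e.g. the same analytics or ad-publisher IDs in the HTML).
# The regexes cover common ID formats but are illustrative only.
import re
import urllib.request
from collections import defaultdict

FINGERPRINTS = {
    "google_analytics": re.compile(r"\b(UA-\d{4,10}-\d{1,4}|G-[A-Z0-9]{8,12})\b"),
    "adsense_publisher": re.compile(r"\bca-pub-\d{10,20}\b"),
}

def fingerprints(url: str) -> set[str]:
    """Extract known tracking/ad IDs from a page's HTML."""
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    found = set()
    for name, pattern in FINGERPRINTS.items():
        for match in pattern.findall(html):
            found.add(f"{name}:{match}")
    return found

def cluster_by_fingerprint(urls: list[str]) -> dict[str, set[str]]:
    """Map each shared fingerprint to the set of sites carrying it."""
    clusters = defaultdict(set)
    for url in urls:
        try:
            for fp in fingerprints(url):
                clusters[fp].add(url)
        except OSError:
            continue  # unreachable site; skip
    return {fp: sites for fp, sites in clusters.items() if len(sites) > 1}
```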

The “disrupt” part also needs to tackle ad networks – some obscure ad networks are how disinformation websites get their financial support. These networks usually advertise not-so-legal products. Stopping the inflow of money is one way to reduce disinformation.
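
To see which networks monetize a given site, one can list the third-party domains that serve scripts, iframes and images on its pages and compare them against a list of known ad networks. A minimal sketch follows; the comparison list itself is left out, as it depends on the region and the investigation.

```python
# A small sketch for the ad-network angle: collect the third-party domains
# that serve scripts, iframes and images on a page, so an analyst can see
# which ad networks (if any) monetize it.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class ThirdPartyCollector(HTMLParser):
    def __init__(self, own_domain: str):
        super().__init__()
        self.own_domain = own_domain
        self.third_party: set[str] = set()

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "iframe", "img"):
            return
        src = dict(attrs).get("src") or ""
        domain = urlparse(src).netloc
        # naive suffix check: treat subdomains of the site itself as first-party
        if domain and not domain.endswith(self.own_domain):
            self.third_party.add(domain)

def third_party_domains(url: str) -> set[str]:
    html = urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    collector = ThirdPartyCollector(urlparse(url).netloc)
    collector.feed(html)
    return collector.third_party  # compare against a list of known ad networks
```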

Adapt – threat actors in the disinformation space (usually nation-states like Russia) are dynamic and change their tactics, techniques and procedures (TTPs). Institutions that are trying to reduce the harm of disinformation also need to be adaptable, as the threat actors constantly look for new ways of getting false or misleading information through.

Tackling disinformation is like walking on thin ice. A wrong step may be seen as curbing free speech. But if we analyze patterns and techniques, rather than the content itself, then we are mostly on the safe side – it doesn’t matter what the article says if it’s shared by 100 fake accounts and the website is supported by ads for illegal drugs that use deep fakes of famous physicians.

And it’s a complicated technical task – I’ve seen companies claiming they identify troll farms, rings of fake news websites, etc., but I haven’t seen any tool that’s good enough. And MERDA… is the situation we are in – active, coordinated exploitation of misleading and incorrect information for political and geopolitical purposes.

3 thoughts on “MERDA – A Framework For Countering Disinformation”

  1. I wouldn’t necessarily say that the content of an article doesn’t matter if you have identified the techniques. Imagine you train a tool to identify fake articles by these techniques, and it is then maliciously used against legitimate articles and sources.

    Analyzing whether an article is fake or not is the Holy Grail here. For automation tools to exist for this, the process would have to be very clear to a human, and I don’t think that’s the case.

    In short: how do you prepare for fake news?

  2. Your MERDA framework for countering disinformation, especially in the context of the ‘Disrupt’ component, brings an interesting technical perspective to a largely non-technical field.

    But my question is about the ethical considerations of this approach. Specifically, how do you balance the need for effective disruption of disinformation campaigns with the essential principles of freedom of speech and digital privacy?
