Arabi Facts Hub is a nonprofit organization dedicated to researching mis/disinformation in Arabic content on the Internet and to providing innovative solutions for detecting and identifying it.

Methodology & Tools: How to Distinguish Real Accounts from Fake Ones?


This educational article is published in collaboration between Arabi Facts Hub (AFH) and the International Journalists' Network (IJNet).


In an era where social media platforms have turned into open arenas for conflict and information warfare, fake accounts have become a key weapon of digital disinformation. These accounts adopt virtual identities that mimic real users and no longer limit themselves to spreading false information or distorting context. Instead, they actively shape public opinion, fuel hate speech, and deepen social divisions.


Known as “trolls,” such accounts are often hard to detect at first glance. They rely on sophisticated identity-building tactics that give them an appearance of credibility. This is why mastering digital identity analysis is essential for journalists, researchers, and fact-checkers alike. This article presents an integrated methodology for analyzing online identities—starting with technical and behavioral indicators and extending to the use of algorithms and specialized tools to help detect automated or coordinated accounts.



Fake Accounts

Fake accounts are fabricated digital identities created on social media platforms, often for deceptive purposes such as spreading disinformation, fueling division, or manipulating public opinion. These accounts are not tied to real individuals; they may be operated by single users, coordinated groups, or even automated programs.

They are carefully designed to appear authentic, complete with realistic names, profile photos, and bios.


One common type of these accounts is known as the “troll,” a widespread form of fake account designed to provoke others through hostile or controversial comments with the aim of drawing attention and stirring debate. The term “trolling” originates from fishing with bait, where provocative statements serve as the bait that attracts interaction.


Fake accounts are no longer limited to mere annoyance or minor manipulation. They are now widely used for:

  • Disinformation campaigns and the manipulation of public opinion.
  • Amplifying or discrediting individuals or issues through coordinated hashtags.
  • Spreading hate speech or incitement against specific groups or communities.
  • Engaging in digital espionage or running propaganda campaigns in favor of political entities.


What makes these accounts dangerous is that they are managed intelligently: they reply, share, and interact in a seemingly authentic way, making it difficult to detect them based on content alone.


Methodology for Detecting Fake and Automated Accounts

The methodology for verifying digital identities draws on two families of indicators, technical and behavioral. The most practical criteria include:

  1. Account creation date: A newly created account that suddenly shows high posting activity or inflammatory rhetoric can raise suspicion. In many campaigns, dormant accounts are reactivated, or instant accounts are created during major events.
  2. Type and frequency of content: Fake accounts tend to publish uniform political content or polarizing messages, often lacking variety or everyday life details. Automated accounts, in particular, focus on frequent reposting from specific sources rather than generating original material.
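
The first two criteria are straightforward to operationalize once you have basic account metadata. Below is a minimal Python sketch, assuming you have already pulled an account’s creation date and lifetime post count from a platform API or data export; the function name and both thresholds are illustrative choices, not established standards.

```python
from datetime import datetime, timedelta, timezone

def flag_new_high_volume(created_at: datetime, total_posts: int,
                         max_age_days: int = 90,
                         posts_per_day_threshold: float = 50.0) -> bool:
    """Flag accounts that are both very new and posting at high volume."""
    age_days = max((datetime.now(timezone.utc) - created_at).days, 1)
    posts_per_day = total_posts / age_days
    return age_days <= max_age_days and posts_per_day >= posts_per_day_threshold

# Hypothetical account: created 30 days ago, already at 4,500 posts (~150/day).
created = datetime.now(timezone.utc) - timedelta(days=30)
print(flag_new_high_volume(created, total_posts=4500))  # True
```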


Arabi Facts Hub published a report titled “SDF Enemy of Syria: Pro-Assad Accounts Push Inciteful Rhetoric Under the Guise of Sovereignty,” which presented clear examples of fake and semi-automated accounts that concentrate on heavy reposting from a limited set of sources, while producing uniform or provocative content and showing no signs of everyday, diverse activity.

The data classified these accounts as part of a coordinated inauthentic network: they repeatedly circulated the same messages according to pre-set timing and posting schedules. A deeper network analysis using the Louvain algorithm revealed three main clusters that are tightly connected internally but weakly linked to other clusters, indicating closed groups often operated through separate “digital control rooms.”
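
The report’s underlying data cannot be reproduced here, but the clustering technique it names is standard. Below is a minimal sketch of Louvain community detection using networkx (version 2.8 or later); the repost_pairs edge list is invented purely for illustration.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Invented (reposter, original_author) pairs, e.g. harvested from a hashtag.
repost_pairs = [
    ("acct_a", "hub_1"), ("acct_b", "hub_1"), ("acct_c", "hub_1"),
    ("acct_a", "acct_b"), ("acct_b", "acct_c"),  # dense internal links
    ("acct_x", "hub_2"), ("acct_y", "hub_2"), ("acct_x", "acct_y"),
    ("acct_c", "acct_x"),                        # a single weak bridge
]

G = nx.Graph()
G.add_edges_from(repost_pairs)

# Louvain returns a list of node sets; tightly knit groups with few external
# links (like the report's clusters) come out as separate communities.
for i, members in enumerate(louvain_communities(G, seed=42), start=1):
    print(f"cluster {i}: {sorted(members)}")
```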

Analysis of likes and reposts exposed an unbalanced pattern of one-sided amplification rather than natural user interaction. A closer look at an individual account showed almost no original content: it recycled images and videos into a single narrative and posted at a rate approaching one item every three minutes — a cadence unlikely for a human user without scheduling tools.
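
That cadence claim is easy to check on any timeline you can export. Here is a minimal sketch, assuming you have a list of post timestamps for a single account; the three-minute pattern in the sample data is fabricated to mirror the report’s finding.

```python
from datetime import datetime, timedelta
from statistics import median

def median_gap_minutes(timestamps: list[datetime]) -> float:
    """Median interval between consecutive posts, in minutes."""
    ts = sorted(timestamps)
    gaps = [(b - a).total_seconds() / 60 for a, b in zip(ts, ts[1:])]
    return median(gaps)

# Fabricated feed: one post every 3 minutes for an hour.
start = datetime(2025, 1, 1, 12, 0)
feed = [start + timedelta(minutes=3 * i) for i in range(20)]

print(f"median gap: {median_gap_minutes(feed):.1f} min")  # ~3.0
# A near-constant gap this small is hard to sustain without scheduling tools.
```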


  3. Profile picture and bio: Fake accounts often use stolen or unrealistic photos. Reverse image searches can help identify these. The bio sections of these accounts are usually either empty or filled with political slogans or aggressive rhetoric.

For example, the platform Matsada’sh published a report titled “Fake Accounts and Identical Comments: ‘Organi Ultras Praise Father and Son for Driving the Economy Forward.’” The report applied digital identity verification methods to suspected stolen photos, revealing through reverse image searches that the images were widely circulated online and belonged to individuals whose identities remain unknown.
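
Alongside hosted reverse image search services, you can run a local check for recycled photos using perceptual hashing. The sketch below uses the Pillow and imagehash libraries; the file names and the distance threshold are illustrative assumptions.

```python
from PIL import Image  # pip install Pillow
import imagehash       # pip install imagehash

def likely_same_photo(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """Perceptual hashes survive resizing and recompression, so a small
    Hamming distance suggests the two files show the same underlying image."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance  # subtraction = Hamming distance

# Illustrative files: a suspect profile picture vs. a copy found elsewhere online.
print(likely_same_photo("profile_pic.jpg", "web_copy.jpg"))
```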


  4. Interaction patterns: Accounts that comment exclusively in hostile language, interact only with a narrow circle of other accounts, or engage solely in specific hashtags are likely part of a coordinated or automated network.
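
One rough way to quantify a “narrow circle” is to measure how much two accounts’ interaction targets overlap. Below is a minimal sketch using Jaccard similarity; the handles and reply sets are invented for illustration.

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Similarity of two interaction circles: 0 = disjoint, 1 = identical."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Invented data: the set of accounts each profile replied to over a month.
reply_targets = {
    "suspect_1": {"hub_1", "hub_2", "hub_3"},
    "suspect_2": {"hub_1", "hub_2", "hub_3", "hub_4"},
    "organic_1": {"friend_a", "news_b", "hub_1", "sports_c"},
}

for x, y in [("suspect_1", "suspect_2"), ("suspect_1", "organic_1")]:
    print(f"{x} vs {y}: {jaccard(reply_targets[x], reply_targets[y]):.2f}")
# Persistently high overlap across many account pairs hints at coordination.
```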

  5. IP address and “geographic hopping”: If an account appears to tweet from New York but writes in Russian and interacts with Asian accounts, the mismatch is a red flag worth pausing over. Similarly, rapid “geographic hopping” between multiple countries in a short period is a strong indicator of suspicious activity.

For example, in a report titled “Matsada’sh Tracks the Hashtag ‘State Intelligence is The Nation’s Shield’… Egyptian Bots and Fake Accounts Promote the Hashtag’s Content”, the platform showed that one of the accounts analyzed using Meltwater—along with other open sources—appeared to be located in Syria. However, an analysis of the content it posted revealed that the user was not Syrian, but was most likely concealing their true location, especially since they adopted an Egyptian identity by placing the Egyptian flag next to their username.


Tools to Help You Analyze Digital Identity

Spot The Troll: An interactive training website that presents eight sample accounts (real or fake) and asks the user to classify them based on an analysis of digital behavior. It encourages critical thinking and provides a detailed analysis after each answer.

Botometer: A tool that analyzes accounts on “X” to estimate how automated they are, by examining activity patterns, number of posts, posting times, and interactions with others. It assigns each account a “bot score” from 0 to 5; the closer the score is to 5, the more likely the account is automated.
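
Botometer also exposes a REST API with an official Python client. The sketch below assumes access to the Botometer v4 API through RapidAPI plus X app credentials, all shown as placeholders; access terms have changed over time (including the move to Botometer X), so check current availability before relying on it.

```python
import botometer  # pip install botometer

# Placeholder credentials; real RapidAPI and X (Twitter) app keys are required.
bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="YOUR_RAPIDAPI_KEY",
    consumer_key="YOUR_CONSUMER_KEY",
    consumer_secret="YOUR_CONSUMER_SECRET",
)

result = bom.check_account("@suspect_handle")  # hypothetical handle
# display_scores use the 0-5 scale described above; the "universal" model
# is language-independent, which matters for Arabic-language accounts.
print(result["display_scores"]["universal"]["overall"])
```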


Hoaxy: Allows you to track links and information that spread on “X” and identify the accounts promoting that content. It visualizes the interaction network around a specific claim or hashtag, revealing the key accounts driving the circulation of misleading content and helping detect coordinated campaigns.


PimEyes: A reverse image search engine used to trace the source of profile photos. You can upload an image (such as a suspicious profile picture), and it scans the web for other places where that image appears, helping reveal whether it is stolen, a common sign of fake accounts.


The Role of Algorithms in Amplifying Fake Accounts

Social media algorithms do not distinguish between real and fake accounts; they rely instead on engagement signals such as the number of likes and posting frequency. Fake accounts leverage rapid and repetitive content dissemination to gain a visibility advantage.
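
Platform ranking systems are proprietary, so any concrete formula is a guess; the toy model below, with invented numbers, only illustrates the mechanism described above: raw engagement counts decide visibility, with no check on who produced them.

```python
# Toy engagement-only ranking over (poster, likes, reposts) tuples.
posts = [
    ("organic_reporter", 40, 5),   # one researched post
    ("bot_account", 15, 30),       # same message, boosted by its own network
    ("bot_account", 14, 28),
    ("bot_account", 16, 31),
]

def engagement_score(likes: int, reposts: int) -> int:
    """Naive score: counts interactions, not who (or what) generated them."""
    return likes + 2 * reposts

for poster, likes, reposts in sorted(
        posts, key=lambda p: engagement_score(p[1], p[2]), reverse=True):
    print(poster, engagement_score(likes, reposts))
# The bot's repetitive posts outrank the organic one despite no real audience.
```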

Real-world example: In one of the Spot The Troll training exercises, a profile initially appeared to be a genuine influencer because of its high posting volume. Deeper analysis, however, revealed it was a bot repeatedly sharing the same messages at high speed to satisfy the algorithm. 

The challenge is compounded by the fact that platforms rarely disclose how their algorithms work, making verification harder for fact-checkers. This is why it is important to prioritize manually verified content rather than simply following trends.

False and misleading information is now so rampant that the ability to identify fake accounts has become essential rather than optional in confronting the flood of disinformation and hate speech. Distinguishing real accounts from fake ones requires more than superficial checks: it demands a comprehensive methodology that integrates technical and behavioral indicators, supported by smart tools and practical training.