
Perception Disorder



Sofiia Lohosha, Fellow

Perception Disorder: How Social Media Turns Information into a Weapon in Modern Conflict


Social media platforms connect people worldwide and create spaces for open discussion, often under the veil of anonymity. However, information circulating in these environments is not always credible. Increasingly, it becomes a strategic tool used by state and non-state actors to influence perceptions of political events and deepen societal divides. Even accurate information can be weaponized when framed deliberately. This poses a serious threat to political realities beyond the digital world, reinforcing polarizing dichotomies such as “good” versus “evil” and undermining trust in institutions.


This article examines how strategically used information shapes public perception during contemporary conflicts. Both false and factually correct content can be deployed to manipulate audiences, reinforce binary narratives, and heighten polarization. To illustrate this, I analyse an anonymized “verified” X (formerly Twitter) account and its role in shaping narratives about the Israel-Hamas conflict.


Misinformation versus disinformation: The distinction matters 

Misleading information on social media platforms is often disseminated by amplifiers, with or without the intent to deceive or harm. The distinction matters: content spread by users who have no intention of deceiving their audience is classified as misinformation, whereas disinformation is spread purposefully, with a strategic aim to mislead and cause harm. Both types are dangerous, not only because they distort perceptions of reality but also because they significantly affect the political and security sectors.


The creation of strategic narratives

Ongoing military conflicts, such as the war in Ukraine or the Israel-Hamas war, are subjects of heated debate in the online arena. Information enters the digital space through posts that users share continuously, and the sheer volume of content flooding social media platforms is overwhelming. In my opinion, because people do not have enough time to verify each post, inaccurate information spreads further with every like, repost, or comment.


Consequently, the online web of misinformation and disinformation, constructed through user engagement, contributes to the creation of strategic narratives about the parties to a conflict, framing them in binary terms such as “good” versus “bad”. These dichotomies foster virtual communities of supporters and opponents, who then carry their opinions, based on inaccurate information, into the public arena. Over time, such polarized opinions may contribute to violent behaviour, often reflected in mainstream media coverage when news outlets publish reports and headlines emphasizing ethnic aggression, violent attacks, stabbings, or shootings. While I cannot claim a causal relationship between disinformation and violent behaviour, I can provide an example in which information is used strategically to construct specific narratives, justify aggressive behaviour, and mobilize support for Israel.


Strategic framing of the Israel-Hamas War on X

Numerous “verified” accounts on X (formerly Twitter) publish their opinions on the Israel-Hamas war. The blue checkmark signifies that the platform has verified the account and implies that it is credible. Under X’s current model, however, the checkmark is primarily available to premium users who fulfil specific requirements, rather than functioning as a strong indicator of the user’s credibility or the accuracy of their content. To illustrate my point, I will use the example of an anonymized verified X profile that publicly identifies as pro-Israel and reinforces Zionist narratives. This profile has a large follower base that, combined with the blue checkmark, enhances its perceived credibility and its influence on public perception of the parties to the Israel-Hamas war. The audience is large enough for its content to disseminate strategically into the wider media world. The account posts frequently: a new piece of content appears at least once per hour. Most of it consists of images and videos of battlefields, destroyed buildings, and wounded civilians, many of which are highly graphic. An analysis of a selection of posts indicates that the visuals themselves are not fabricated; however, the accompanying captions strategically frame these images in ways that support a particular narrative.


Moreover, this profile skilfully deploys emotional framing; its posts are charged with emotions such as sadness, sorrow, rage, triumph, and disgust. Here, the technique is mainly used to create binaries of “us” versus “them”: “Good Israel” versus “Bad Palestine”. To reinforce this dichotomy, the account selectively frames accurate information in strategically beneficial ways and supplements it with misleading information that strengthens the strategic narrative. The comments section, filled with emotional responses, is mostly aggressive and extreme in nature. It resembles a virtual battlefield of polarized opinions, but instead of weapons, users deploy memes, sarcasm, and curse words. These dynamics have two implications: they polarize public perception and facilitate a web of disinformation and misinformation.

 

To conclude, I would like to reiterate my point that information, whether false or factually correct, can be used strategically on social media platforms to distort users’ perceptions of global events. As illustrated above, these platforms allow verified accounts that cannot necessarily be considered credible to use information strategically and contribute to societal polarization. Though further research on causality is needed, I believe that opinions formed in the digital space can flow into the public space and provoke aggressive or even violent behaviour. These dynamics emphasise the need to increase media literacy and strengthen social media platforms’ verification methods to limit the speed at which false information spreads.

 

By Sophie Lohosha.


Suggested Citation:

Lohosha, S. (2026). "Perception Disorder". EPIS Blog.
