
Misinformation is common, especially during wartime. At the intersection of Gen AI and social media, it spreads to the masses at a rapid pace, often too quickly to be “caught” by authoritative voices.
Most recently, the spread of misinformation surrounding the Israel-Hamas war is a first-hand example of that.
Gen AI, social media, and the spread of misinformation
AI is not foolproof. Models often absorb biases from their training data, and those biases affect their outputs. These models are trained on datasets of publicly available information that is rarely linked back to its original sources. Because sources aren't typically cited within these tools, it can be difficult to gauge whether the content reflects the most up-to-date or reliable information.
When it comes to Gen AI shaping public opinion, that’s where the tool gets dangerous. Many companies and individuals are becoming more aware of AI’s implications. Still, we’ll always find bad players taking advantage of these tools to spread their agenda. AI-based tools have become more sophisticated, accessible and easy to use, escalating disinformation tactics by letting individuals quickly fabricate content to support their false claims.
Disinformation surrounding Israel following October 7
Many of these methods came into play following October 7, fueling the spread of misinformation about Israel and Gaza in the context of the war. Shortly after Hamas’s October 7, 2023 attack on Israel, war began, and with it came a supercharged spread of information online.
Photographs and videos of the violence from the October 7 massacre flooded online media outlets, with narratives surrounding the causes, culpabilities and unfolding events taking on a particularly strong presence on social media. These narratives swelled into more in-depth conversations surrounding the history of the Israel-Palestine conflict, Zionism and, of course, the unfolding Israel-Hamas war.
A lot of the information was unreliable, and social media platforms became awash with false claims. Some of the images being spread by bad players and fake accounts, for example, were reportedly years old, while many were out of context and taken from conflicts in other parts of the world.
Here are some real examples of the types of misinformation surrounding Israel and how it was spread more rapidly with the use of AI, social media, or a combination of both:
On October 17, 2023, Hamas dubiously claimed that Israel was responsible for the explosion of the Al-Ahli Arab Hospital in Gaza City. Despite a lack of confirmation, some of the world’s most prominent news organizations published the report, which was quickly spread through social media.
Elon Musk directed his 150 million followers on X (formerly Twitter) toward accounts with a clear history of sharing misinformation. The post had around 11 million views before it was deleted, according to the Washington Post.
Posts falsely claiming that the Francis Scott Key Bridge collapse in Baltimore was orchestrated by Israel.
Deep fakes, including AI-generated photographs of murdered babies and other misleading video content, have spread intensely on social media since the start of the war.
An altered photograph of soccer player Lionel Messi spread online at the start of the war. In the image, Messi appears to be holding a Palestinian flag, which he was not, misleading audiences by depicting false support for a pro-Hamas narrative.
Soon after Hamas’s attack, a digitally manipulated White House memo began circulating on X, claiming that the U.S. sent $8 billion in military aid to Israel, which it did not.
These are just some of the false and harmful claims made following the October 7, 2023, massacre. A flood of fake accounts appeared on social media to push pro-Hamas narratives, on top of the ongoing activity of bad players and deepfakes that have been in play since the start of the Israel-Hamas war.
A telling example comes from Cyabra, an AI tool that helps uncover fake profiles, harmful narratives and Gen AI content. Between October 7 and 9, 2023, the tool analyzed 2 million posts, pictures, and videos across Facebook, X (Twitter), Instagram, and TikTok. The analysis uncovered an astonishing 40,695 fake profiles pushing pro-Hamas narratives, and noted the speed and frequency at which these profiles posted content to support their narratives (some accounts posted hundreds of times per day). Much of this content included fake news, misinformation and propaganda.
Misinformation has been fueled even further during the Israel-Hamas war
The quick spread of false information has had major implications in the context of the Israel-Hamas war, where the impact and appeal of misleading content keep growing. Audiences have developed an appetite for instant news, which is already difficult to provide in this war given the dangerous nature of reporting from the ground. Even established news outlets have been caught reporting false information about the Israel-Hamas war, and this time around the power of social media makes fake news even harder to catch and far easier to propagate.
Common platforms have turned into news sources during the war, including Instagram, TikTok, X (formerly Twitter) and Facebook. There are also Telegram channels, which have provided many bad players and innocent readers alike with unverified media to spread on their social channels.
Since the war started, it's become all too easy to find bogus material: an outdated war scene presented as live footage from Gaza; a fictional scene from a video game posing as fresh footage; a deepfake video showing an IDF soldier recruiting Ukrainians to fight; an outdated video from Syria used as “proof” that the IDF was responsible for the explosion at Al-Ahli hospital; and a debunked TikTok video that falsely claimed to show footage of the Nova Festival massacre, fueling a rumor that the massacre didn’t happen at all.
While many platforms take action when they find misleading material, the speed at which this information spreads on social media is a big challenge. It’s difficult to catch on an hourly basis, and at the pace at which users consume content, misleading media can spread to millions before it's removed. By that time, it has already made an impression, often a visceral one, on people who frequently won’t realize the content is fake.
The war’s polarized nature has also created space for misinformation to be fueled even further. While social media should provide a platform for nuance and open communication, in this case it has become hostile and violent, full of deepfakes, false narratives and bad players. Many online communities have taken advantage of the sensationalist nature of information surrounding the war.
Behind the guise of a social media profile, participants in these online conversations are less likely to hide their bias and feel more confident quickly expressing their voice than they would in a long face-to-face encounter. Opinions become facts, emotionally driven responses become moral arguments, and uneducated players with their own agendas become experts. Overall, there’s a lack of accountability and fact-checking on the part of both creators and users, and that is a recipe for disaster.
Not only that, but social media’s algorithms tend to reinforce a user’s viewpoints by showing content that aligns with their preferences. Stuck in an echo chamber of misinformation like this, individuals become even more assured in their bias and more comfortable leaving the content unchecked.
Misinformation, sensational content, and how they fuel digital growth
Why do people push fake narratives, and why does this content seem to go viral by nature? If we know that we’re confronting an epidemic of misinformation, why isn’t there more due diligence when it comes to sharing impactful yet misleading content? We have the technology and knowledge to differentiate between real and fake, and this is especially true of social media platforms, which apply sophisticated tools to their algorithms. But for some reason, it’s more beneficial not to make the effort.
The answer is manifold, but for the most part it comes down to publications, social media platforms and bad players gaining something, whether financial or political. That, and the sensational experience this content gives to emotionally driven audiences who are invested in and connected to certain narratives.
The sophisticated creation and spread of misinformation, while detrimental to the public, has its benefits for the people creating it. Many social media platforms, and the bad players on them, are driven by profits and political agendas. That’s why false narratives get pushed so often, and social media creates the perfect ecosystem for making that happen.
Many organizations know that certain content will engage audiences, whether it's true or not. Whether it's the success of a business or a nonprofit increasing funding, metrics like followers and clicks can directly or indirectly benefit these accounts financially, and even the most trusted publications want to gain traffic. So, sometimes there are financial rewards reaped from the clicks and views of unchecked claims and fake news. Adding another layer to these bogus benefits, social media platforms themselves have become more lax in combating disinformation and fake profiles, most likely because stopping sensational content from spreading risks losing user engagement.
Audiences also embrace easily generated false content, especially those who want to consume news quickly, for any number of reasons. In times of crisis or after a major event like October 7, most people flock to social media for information. We’ve grown used to things being reported instantly, and that speed often takes priority over waiting for quality reporting or taking the extra step of checking sources.
When people feel strongly about a cause, they are eager to find information that validates their views, and this thirst for information within the context of social media can overpower the ability to distinguish fact from fiction. Often, finding content that supports their feelings matters more than making sure the information is trustworthy and reflects the facts.
Social media has also become the place people turn to for information on major news events. Combine that with Gen AI, and the result is quick-turnaround content that doesn’t always have time to be verified before it spreads to the masses.
Even when that misinformation is caught and highlighted on the internet, it still does damage. For example, in the case we mentioned above, when Israel was wrongfully blamed for the hospital explosion in October 2023, the ripple effects were disastrous. Although many news outlets soon retracted their claims, the content had already made an impression on the people who interacted with it. For individuals who have no strong opinions or knowledge of the Israel-Hamas war, that’s a surefire way to influence their opinions and the narratives they support moving forward.
Today, the pace of sharing often outstrips the verification process. Driven by the psychological need to spread information quickly and stay connected, many people don’t realize the potential damage of sharing content without checking the reliability of the source. Even when misinformation is shared unintentionally, on certain online platforms clicks and views mean financial benefit, creating an incentive to fill the information void with false and sensationalist content that gains traction.
Tackling misinformation generated by AI and spread throughout social media is no longer a priority for many of these platforms, either; the benefits of attracting more users and satisfying customers in terms of followers and engagement take priority. On top of popular outlets like X, Instagram and TikTok, Telegram is a growing platform with little moderation, known as one of the biggest perpetrators in allowing extremists and conspiracy theorists to spread violent footage that may have been banned from more mainstream platforms.
Individuals need to take responsibility for distinguishing the true from the untrue. And while there are no clear guidelines for how to do this, we’re getting better at learning how to navigate the noise.
What do reliable sources look like?
With the internet more accessible than ever, it's surprising to learn that global internet freedom is on the decline in many parts of the world. This only highlights the repressive power of AI tools when it comes to disinformation and censorship. It’s becoming harder to spot misinformation these days thanks to the sophisticated tactics of bad players we’ve discussed above.
Even with the emergence and popularization of AI detection tools, these aren’t foolproof. Many people don’t realize that the price of freedom is holding ourselves personally accountable for the information we spread, ensuring that we present facts or clearly framed opinions instead of misinformation. And unfortunately, there’s a real lack of accountability when it comes to requiring social media companies to flag and remove synthetic or suspicious content.
It’s more important than ever to take personal responsibility when it comes to spotting reliable versus fake content, creating our own content and sharing it with the masses. Because of this, you should be equipped with an arsenal of solutions to filter out misinformation and learn how to identify deepfakes. As always, this starts with knowledge, but these days we’re also lucky enough to have other technology to aid us in navigating the information landscape.
There’s a growing need for awareness, education and regulation when it comes to using AI so that we can thwart its potential to do damage. By actively engaging in critical thinking and utilizing available tools, we can contribute to a more informed and responsible online environment. When perusing your own sources and deciding what information to share, here are some helpful tips to keep in mind:
Build your narrative intelligence: Nothing can compete with continuous learning and building the knowledge needed to discern credible sources from manipulative content. To do this, develop your ability to understand and analyze narratives by immersing yourself in diverse perspectives. Engage in active listening to uncover themes within another’s story and reflect on how these narratives influence your own emotions and decisions. It’s also helpful to know the specific contexts and types of narratives linked to the spread of misinformation. For example, campus protests have, since October 7, become fodder for false claims against Israel and Zionism.
Learn how to identify fake content: Learn the signs of fake content and how to scrutinize inconsistencies, unnatural behavior and suspicious activity patterns. Tools like AI detectors and the SIFT method can aid in this process. Look out for the ABCs of disinformation: Actors, Behavior and Content. Watch for unnatural profile activity and inconsistencies. For example, one big difference between authentic social media accounts and fake profiles with a large following is how long the profile has been active. Genuine users often have profiles that span several years, while a telltale sign of a fake account is a suspiciously quickly established media presence. Many fake profiles also lack personal information, such as a proper name, photos or general details about themselves. On top of this, look out for unreliable profile photos, suspicious connections, inconsistent posting activity and extremely polarized content (a simple sketch of these checks appears after these tips).
Check your sources before you share information: Evaluating the reliability, bias and authenticity of sources is crucial these days. Look out for factors like a profile’s lifespan and activity patterns that reveal inauthenticity. When verifying your sources, cross-check with others to confirm the accuracy of their reporting and the originality of their content. You can also examine the source’s level of expertise, watching for transparency and bias in the form of sensational headlines or a lack of verified quotes and statistics. Overall, practice critical thinking as you read by questioning unusual or emotionally charged content, and always pause before amplifying unverified information.
Use AI-powered tools: Ironically, AI can actually be developed to spot AI-created content. While these tools are still maturing, AI models like Cyabra, DeepFake Detection, or Microsoft's Video Authenticator analyze content authenticity. Use a platform that can help you filter through the specific type of content you’re engaging with and whose mission is to combat the spread of misinformation.
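To make the profile checks above more concrete, here is a minimal sketch in Python of how the red flags described in these tips (a suspiciously young account with a big following, missing personal details, an abnormally high posting rate) might be scored programmatically. The field names, thresholds and sample data are illustrative assumptions only; they are not drawn from any real platform's API or detection tool.

```python
# A minimal, illustrative sketch of the profile checks described above.
# Field names, thresholds, and sample data are assumptions for demonstration
# only; they are not tied to any real platform or detection service.
from datetime import date

def suspicion_flags(profile: dict, today: date = date(2024, 1, 1)) -> list[str]:
    """Return a list of red flags for a social media profile."""
    flags = []

    # Very young accounts with a large following are a common warning sign.
    age_days = (today - profile["created_on"]).days
    if age_days < 90 and profile["followers"] > 10_000:
        flags.append("new account with a suspiciously large following")

    # Fake profiles often lack basic personal information.
    if not profile.get("bio") or not profile.get("photo"):
        flags.append("missing bio or profile photo")

    # Hundreds of posts per day suggests automated or coordinated activity.
    posts_per_day = profile["total_posts"] / max(age_days, 1)
    if posts_per_day > 100:
        flags.append("abnormally high posting rate")

    return flags

# Example: a hypothetical account created a month before being checked.
example = {
    "created_on": date(2023, 12, 1),
    "followers": 25_000,
    "total_posts": 4_500,
    "bio": "",
    "photo": None,
}
print(suspicion_flags(example))
# ['new account with a suspiciously large following',
#  'missing bio or profile photo', 'abnormally high posting rate']
```

None of these signals proves a profile is fake on its own; real detection tools combine many more of them, but the sketch shows how simple the underlying heuristics can be.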

Jenna Romano
Jenna Romano is a writer, editor, and blogger. Her writing has been featured in publications such as Telavivian, Jerusalem Post, Ha’aretz, Portfolio, Wix Blog, and more.