Fact Check Team: What is “AI slop”, and how is it impacting Americans?

AI-generated content is now so common online that it has earned its own nickname: “AI slop.” The term generally refers to low-quality digital content produced in large quantities by artificial intelligence, including fake images, synthetic videos, low-effort articles, political memes and engagement-driven posts.

According to Merriam-Webster, “slop” is defined in this context as “digital content of low quality that is produced usually in quantity by means of artificial intelligence.” Merriam-Webster selected “slop” as its 2025 Word of the Year, pointing to the term’s growing use as AI-generated material has spread across the internet.

Experts say the concern is not simply that AI slop is annoying or low-quality. The deeper issue is scale. AI tools make it possible to quickly and cheaply generate fake articles, images, videos and posts that can look legitimate at first glance. According to the European Digital Media Observatory, AI slop can make the information environment less reliable when false or misleading AI-generated material is presented as real. EDMO warned that this can affect how people understand politics, institutions and elections.

That matters because democracy depends on voters being able to separate fact from fiction. When social media feeds, search results and news-style websites are flooded with AI-generated material, it becomes harder for users to know what is real, what is satire, what is propaganda and what is simply engagement bait.

There are already examples of AI-generated content being used in politically charged environments. According to Freedom House’s 2025 “Freedom on the Net” report, AI innovation has helped automate influence operations by lowering their cost and increasing their efficiency. Freedom House cited the example of India and Pakistan after tensions escalated following a terrorist attack in Kashmir in April 2025, saying government-linked influencers and commenters in both countries posted waves of inflammatory AI-generated content that drowned out reliable information.

The U.S. government has also tied AI tools to foreign influence operations. According to the U.S. Treasury Department, it sanctioned Russian and Iranian entities in December 2024 over alleged efforts to interfere in the 2024 U.S. election. Treasury said a Moscow-based group, the Center for Geopolitical Expertise, used generative AI tools to quickly create disinformation and distribute it through a network of websites designed to imitate legitimate news outlets. Treasury also said the group manipulated a video involving a 2024 vice presidential candidate in an effort to sow discord among U.S. voters.

But not all AI slop is political. Much of it is commercial. According to NewsGuard, many AI content farms are “made for advertising” sites, designed to churn out low-quality content and attract programmatic ad revenue. NewsGuard said its system had identified 3,006 AI content farm sites as of March 2026, and that the number had more than doubled over the previous year.

The financial incentive is straightforward: AI makes it cheap to produce huge amounts of content, and online platforms often reward material that generates clicks, views, comments and shares. NewsGuard said these websites often use generic names, publish large volumes of content and may appear to readers to be legitimate news or information sites, even when much of the material is AI-generated and not clearly disclosed.

Video platforms are also seeing the trend. According to a 2025 Kapwing study, 21% to 33% of a new YouTube user’s recommended feed may consist of AI slop or “brainrot” videos. The Guardian, citing Kapwing’s research, reported that more than 20% of videos recommended to new YouTube users were low-quality, mass-produced AI-generated videos. The same report said Kapwing identified hundreds of entirely AI-generated channels with large audiences and significant estimated revenue.