Digital Event Horizon
AI slop is flooding Medium, the 12-year-old publishing platform that has pivoted repeatedly over its lifetime. With nearly half of all posts likely AI-generated, human curators and moderators must navigate a treacherous landscape to maintain quality control. Can the platform adapt and preserve the value of human creativity amid the surge of machine-made content?
AI-generated content now accounts for an estimated 47% of all Medium posts, and articles tagged "NFT," "web3," and "ethereum" are among the most likely to be machine-written. CEO Tony Stubblebine disputes the notion that Medium has an "AI issue," framing it instead as a symptom of broader internet problems. The platform leans on human curation, with a team of 9,000 editors deciding what reaches its main page. Even so, some writers and editors report encountering large amounts of AI-generated content masquerading as human writing, and the consequences reach beyond Medium to the human creators and curators struggling to be heard above the noise.
Medium is facing an influx of AI-generated content. According to recent analyses, nearly half of all posts on the platform are likely machine-written, posing significant challenges for the human curators and moderators who strive to maintain quality control.
The platform's CEO, Tony Stubblebine, acknowledges that AI-generated content has increased substantially in recent months, but he disputes the notion that Medium has an "AI issue." Instead, he views it as a symptom of a larger problem: the proliferation of low-quality content across the internet. Pangram Labs, an AI-detection startup, analyzed 274,466 recent Medium posts over a six-week period and concluded that around 47% of them were likely AI-generated.
That figure is strikingly high compared with other platforms: when Pangram Labs analyzed one day of global news sites this summer, only about 7% of the content was flagged as likely AI-generated. On Medium, the tags with the highest share of likely AI-generated content were "NFT," "web3," "ethereum," "AI," and "pets," with 78% of all articles carrying those tags deemed likely AI-generated.
Originality AI, another AI-detection startup, ran its own analysis, comparing a sample of Medium posts from 2018 with a sample from this year. Its results suggested that just over 40% of the 473 articles in the 2024 sample were likely AI-generated. Although both companies reached similar conclusions about the scale of AI content on Medium, Stubblebine disputed the premise of their findings and questioned the reliability of AI detectors.
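Neither startup has published its exact pipeline, but the arithmetic behind figures like "47% of all posts" or "78% of articles under a tag" reduces to scoring each post with a detector and tallying the verdicts, both overall and per tag. The Python sketch below illustrates that kind of tally under stated assumptions only: detect_ai_probability is a hypothetical stand-in for a real classifier, and the 0.5 threshold and sample posts are illustrative, not either company's actual method or data.

    from collections import defaultdict
    from typing import Dict, Iterable, List, Tuple

    def detect_ai_probability(text: str) -> float:
        # Placeholder: a real analysis would call an AI-content classifier here.
        # This stub always returns 0.0 so the example runs end to end.
        return 0.0

    def summarize_ai_share(
        posts: Iterable[Tuple[str, List[str]]], threshold: float = 0.5
    ) -> Tuple[float, Dict[str, float]]:
        """Return the overall share of 'likely AI' posts and the share per tag."""
        total = 0
        likely_ai = 0
        tag_totals: Dict[str, int] = defaultdict(int)
        tag_ai: Dict[str, int] = defaultdict(int)

        for text, tags in posts:
            flagged = detect_ai_probability(text) >= threshold
            total += 1
            likely_ai += flagged
            for tag in tags:
                tag_totals[tag] += 1
                tag_ai[tag] += flagged

        overall = likely_ai / total if total else 0.0
        per_tag = {tag: tag_ai[tag] / tag_totals[tag] for tag in tag_totals}
        return overall, per_tag

    # Toy usage: two posts, one tagged "web3"/"ethereum", one tagged "pets".
    overall_share, share_by_tag = summarize_ai_share([
        ("Post about ethereum staking...", ["web3", "ethereum"]),
        ("My cat learned a new trick...", ["pets"]),
    ])
    print(f"overall share flagged: {overall_share:.0%}", share_by_tag)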
Stubblebine's approach to the issue differs markedly from that of other platforms. While LinkedIn and Facebook explicitly encourage users to lean on AI tools for content creation, Medium has taken a more cautious stance: it has updated its AI policy so that AI-generated content can no longer be paywalled through its Partner Program or promoted through affiliate links.
Instead, Medium relies on human curators to evaluate and surface high-quality content. "Medium basically runs on human curation now," Stubblebine says. The platform has a team of 9,000 editors who refine what makes it onto the main page. The approach is not foolproof, but Stubblebine believes it does a better job of filtering out low-quality content than chasing AI detection directly.
Not all writers and editors on Medium share Stubblebine's confidence in the platform's moderation strategy, however. Some report encountering significant amounts of AI-generated content masquerading as human-written posts. Marcus Musick, who edits several publications on the platform, estimates that he rejects around 80% of prospective contributors because he suspects they are using AI.
The volume of likely AI-generated content on Medium also stands out against the broader flood of low-quality content on the internet. As Pangram Labs CEO Max Spero put it, "This is a couple orders of magnitude more than what I see on the rest of the internet."
The implications extend far beyond Medium itself: the proliferation of AI-generated content has real consequences for human creators and curators who struggle to be heard amid the noise. Stubblebine maintains that "We're strongly against AI content," yet such content remains an inescapable reality that platforms like Medium must navigate.
Ultimately, the battle over curation on Medium and other platforms will keep evolving as AI tools grow more sophisticated. Some view this as a threat to human creativity; others see it as an opening for innovation. As Stubblebine noted, "I think you could, if you're being pedantic, say we're filtering out AI," but the real challenge lies in distinguishing high-quality writing by humans from low-quality output generated by machines.
In this cat-and-mouse game, Medium's human curators and moderators must remain vigilant in surfacing the best human writing. AI-generated content may grow only more prevalent, and it falls to them to preserve the integrity and value of creative expression on the platform.
Related Information:
https://www.wired.com/story/ai-generated-medium-posts-content-moderation/
Published: Mon Oct 28 17:36:32 2024 by llama3.2 3B Q4_K_M