Today's AI/ML headlines are brought to you by ThreatPerspective

Digital Event Horizon

The AI-Generated Misinformation Epidemic: How Google's AI Overviews Are Spreading Disinformation on National IQ Scores


Google's AI Overview tool has been found to generate summaries of search results that repeat discredited national IQ scores. The misuse of this technology raises concerns about the spread of misinformation and its potential impact on racial equality and social justice.

  • Google's AI Overview tool has been found to be generating summaries of search results based on discredited data, including Richard Lynn's work on national IQ scores.
  • The misuse of this technology raises concerns about the spread of misinformation and its potential impact on racial equality and social justice.
  • Lynn's data has been widely criticized for being biased and based on unrepresentative samples, yet it continues to be cited by some researchers.
  • Other AI-powered search services have also generated similar summaries based on Lynn's data, highlighting the need for greater accountability and oversight in AI development.



  • In recent months, a disturbing trend has emerged in the realm of artificial intelligence, where search engines have begun to generate summaries of search results that often rely on discredited and racist data. At the heart of this phenomenon lies Google's new AI Overview tool, which aims to provide users with a concise summary of their search queries. However, this technology has also been found to reproduce discredited figures on national IQ scores, turning a convenience feature into a vector for misinformation.

    According to experts, the issue at hand is not just a matter of AI-generated content being incorrect or outdated but rather a symptom of a deeper problem – the misuse of scientific data to promote racist ideologies. Dr. Rebecca Sear, director of the Centre for Culture and Evolution at Brunel University London, points out that the use of these data spreads disinformation and helps perpetuate the political project of scientific racism.

    To understand the scope of this issue, it is essential to delve into the world of national IQ scores. In recent years, researchers have attempted to estimate average IQ across different populations, but these efforts have been marred by controversies surrounding the methodology and the quality of the data used in some studies.

    One of the most notorious examples of flawed research comes from the work of Richard Lynn, a psychologist who published various versions of his national IQ dataset over the years. Lynn's data has been widely cited in academic circles and has even been used by far-right groups to promote racist ideologies.

    However, experts have long raised concerns about the quality and validity of Lynn's work. In 2020, a preprint study found that, for African nations, Lynn systematically biased his database by preferentially including samples with low IQs while excluding those with higher IQs. Furthermore, Lynn's data has been criticized for relying on unrepresentative samples, such as children living in orphanages.

    Despite these criticisms, Lynn's work continues to be cited by some researchers and has even surfaced in summaries produced by Google's AI Overview tool. This reliance on discredited data is a significant concern, particularly when it comes to issues of racial equality and social justice.

    Google's AI Overview tool was launched earlier this year as part of its effort to revamp its all-powerful search engine for an online world being reshaped by artificial intelligence. For some search queries, the tool provides an AI-generated summary of the results, which can sometimes contain incorrect or misleading information.

    In a recent investigation, WIRED found that Google's AI Overview tool was frequently citing Lynn's work on national IQ scores, often without providing proper context or citations. When users queried about specific countries, such as Pakistan or Sierra Leone, the tool provided summaries based on Lynn's data, which has been widely debunked by experts.

    However, Google has since admitted to making mistakes with its AI Overview rollout and has taken steps to address these issues. The company has implemented guardrails and policies to protect against low-quality responses and has removed overviews that do not align with its standards.

    Despite these efforts, WIRED discovered that other AI-powered search services, including Perplexity and Microsoft's Copilot chatbot, were also generating similar summaries based on Lynn's data. These findings raise concerns about the spread of misinformation through online platforms and the need for greater accountability and oversight in the development and deployment of AI technologies.

    In conclusion, the use of national IQ scores to promote racist ideologies is a pressing concern that requires immediate attention from policymakers, researchers, and technology companies alike. The misuse of scientific data to perpetuate disinformation is a symptom of a deeper problem: a lack of critical thinking and nuance in how intelligence is measured and discussed.

    As AI-generated content continues to evolve, it is crucial that we prioritize the development of more robust and transparent systems that can accurately convey information without relying on flawed or biased sources. By doing so, we can ensure that these powerful technologies are used to promote knowledge, understanding, and social justice rather than perpetuating misinformation and racism.




    Related Information:

  • https://www.wired.com/story/google-microsoft-perplexity-scientific-racism-search-results-ai/


  • Published: Thu Oct 24 05:16:22 2024 by llama3.2 3B Q4_K_M


    © Digital Event Horizon. All rights reserved.
