
Digital Event Horizon

Protecting Machine Learning Models: A New Era of Security and Transparency



New Partnership Brings Enhanced Model Security to Hugging Face Hub as Protect AI Launches Four New Threat Detection Modules
In a significant move towards enhancing model security, Protect AI has partnered with Hugging Face to launch four new threat detection modules for the Hugging Face Hub. This partnership aims to provide a safer and more secure experience for developers using machine learning models from the hub.


  • Protect AI has partnered with Hugging Face to launch four new threat detection modules for the Hugging Face Hub.
  • These new modules detect sophisticated threats, including archive slips, joblib model suspicious code execution, TensorFlow SavedModel backdoors, and Llamafile malicious code execution.
  • Guardian's detection capabilities have been expanded to cover more model file formats, detect additional obfuscation techniques, and flag the high-severity CVE-2025-1550 vulnerability in Keras.
  • The partnership aims to democratize access to open-source AI while providing users with a secure and reliable experience.
  • Protect AI has identified over 352,000 unsafe/suspicious issues across 51,700 models on the Hugging Face Hub.



  • The world of machine learning is evolving rapidly, and threats to model security are multiplying with it. In response, Protect AI has partnered with Hugging Face to launch four new threat detection modules for the Hugging Face Hub, aiming to give developers a safer, more secure experience when using models from the hub.

    Since October 2024, Protect AI has significantly expanded Guardian's detection capabilities, improving its existing scans and launching four new detection modules designed to catch a range of sophisticated threats (an illustrative sketch of the first, the archive-slip pattern, follows the list):

    * PAIT-ARV-100: Archive slip can write to file system at load time
    * PAIT-JOBLIB-101: Joblib model suspicious code execution detected at model load time
    * PAIT-TF-200: TensorFlow SavedModel contains architectural backdoor
    * PAIT-LMAFL-300: Llamafile can execute malicious code during inference
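
    The archive-slip module addresses the classic path-traversal pattern: an entry inside a model archive carries a name such as ../../.bashrc, so a naive extraction writes outside the intended directory the moment the model is loaded. The sketch below shows the kind of check a loader or scanner can apply before extracting; the helper name and the simple commonpath policy are illustrative assumptions, not Protect AI's actual implementation.

        import os
        import tarfile

        def safe_extract(archive_path: str, dest_dir: str) -> None:
            """Extract a model archive, rejecting entries that escape dest_dir.

            Illustrative check only; production scanners also inspect symlink
            and hard-link members, not just relative-path traversal.
            """
            dest_dir = os.path.realpath(dest_dir)
            with tarfile.open(archive_path) as tar:
                for member in tar.getmembers():
                    target = os.path.realpath(os.path.join(dest_dir, member.name))
                    # An "archive slip": the resolved path lands outside dest_dir.
                    if os.path.commonpath([dest_dir, target]) != dest_dir:
                        raise ValueError(f"blocked traversal entry: {member.name}")
                tar.extractall(dest_dir)

    Recent Python releases also ship tarfile extraction filters (for example filter="data") that enforce a similar policy at the standard-library level.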

    These new modules are part of a broader effort to enhance Guardian's detection capabilities and provide users with critical security information via inline alerts on the platform. Guardian now also covers more model file formats and detects additional sophisticated obfuscation techniques, as well as the high-severity CVE-2025-1550 vulnerability in Keras.

    The partnership between Protect AI and Hugging Face has been a natural fit from the start, as both organizations share a commitment to safety and security in the development of machine learning models. By working together, they aim to democratize access to open-source AI while providing users with a secure and reliable experience.

    As of April 1, 2025, Protect AI has scanned 4.47 million unique model versions across 1.41 million repositories on the Hugging Face Hub, identifying a total of 352,000 unsafe/suspicious issues across 51,700 models. In the last 30 days alone, Protect AI served 226 million requests from Hugging Face at an average response time of 7.94 ms.

    The partnership between Protect AI and Hugging Face is part of a broader effort to improve model security and provide users with a safer experience. By leveraging in-house threat research teams and the huntr community, Protect AI aims to develop new and more robust model scans as well as automatic threat blocking for Guardian customers.

    According to reports from the huntr community, certain trends have emerged in terms of common attack themes. Library-dependent attack chains are becoming increasingly prevalent, with attackers using functions from libraries already present in the ML workstation environment to invoke malicious code. Payload obfuscation is also a growing concern, with attackers using techniques like compression, encoding, and serialization to hide payloads in models.
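
    Joblib files and legacy PyTorch checkpoints are pickle-based, so one common first-pass defense is to walk the pickle opcode stream without ever deserializing it and flag imports of modules that can execute code, even when the surrounding payload is compressed or encoded. The snippet below is a minimal sketch of that idea using Python's standard pickletools; the module blocklist is a toy one and is not Guardian's rule set.

        import pickletools

        # Toy blocklist: modules whose import inside a pickle usually means code
        # execution at load time. Real scanners use far richer rules.
        SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins", "socket"}

        def scan_pickle(path: str) -> list[str]:
            """Flag suspicious imports in a pickle stream without unpickling it."""
            findings: list[str] = []
            pushed_strings: list[str] = []  # string constants seen so far
            with open(path, "rb") as f:
                data = f.read()
            for opcode, arg, _pos in pickletools.genops(data):
                if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
                    pushed_strings.append(arg)
                elif opcode.name in ("GLOBAL", "INST"):
                    # arg looks like "os system"
                    module = str(arg).split()[0]
                    if module in SUSPICIOUS_MODULES:
                        findings.append(f"{opcode.name} imports {arg}")
                elif opcode.name == "STACK_GLOBAL" and len(pushed_strings) >= 2:
                    # Heuristic: the two most recently pushed strings are usually
                    # the module and attribute names consumed by STACK_GLOBAL.
                    module, name = pushed_strings[-2], pushed_strings[-1]
                    if module in SUSPICIOUS_MODULES:
                        findings.append(f"STACK_GLOBAL imports {module}.{name}")
            return findings

    Open-source scanners such as picklescan take the same opcode-walking approach with far more complete rules.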

    Framework-extensibility vulnerabilities are another significant threat, as custom layers, external code dependencies, and configuration-based code loading can create dangerous attack vectors. Attack vector chaining is also becoming more common, with multiple vulnerabilities being combined to create sophisticated attack chains that bypass detection.
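
    The Keras case is a concrete example of a framework-extensibility risk: a .keras archive is a zip file whose config.json describes how to rebuild the model, and entries such as Lambda layers or objects that reference arbitrary modules can cause code to be imported or executed at load time (the high-severity CVE-2025-1550 mentioned above belongs to this family). The sketch below flags such entries in a config; the allowlist and traversal logic are illustrative assumptions, not the actual Guardian or Keras checks.

        import json
        import zipfile

        # Illustrative allowlist; a real scanner resolves serialized objects against
        # the framework's own registry rather than using string prefixes.
        ALLOWED_MODULE_PREFIXES = ("keras", "tensorflow", "tf_keras")

        def scan_keras_archive(path: str) -> list[str]:
            """Flag config entries in a .keras archive that can trigger code loading."""
            findings: list[str] = []
            with zipfile.ZipFile(path) as zf:
                config = json.loads(zf.read("config.json"))

            def walk(node) -> None:
                if isinstance(node, dict):
                    if node.get("class_name") == "Lambda":
                        findings.append("Lambda layer: deserializes arbitrary Python code")
                    module = node.get("module")
                    if isinstance(module, str) and not module.startswith(ALLOWED_MODULE_PREFIXES):
                        findings.append(f"object loaded from non-framework module: {module}")
                    for value in node.values():
                        walk(value)
                elif isinstance(node, list):
                    for item in node:
                        walk(item)

            walk(config)
            return findings

    A real scanner would also inspect the bundled weights and any serialized custom objects, not just the top-level config.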

    To address these threats, Protect AI has expanded its model vulnerability detection capabilities: enhancing detection of library-dependent attacks, uncovering obfuscated attacks, detecting exploits in framework-extensibility components, and identifying additional architectural backdoors, with the results surfaced to users as inline alerts on the platform.

    In conclusion, the partnership between Protect AI and Hugging Face marks a significant step forward in enhancing model security for developers using machine learning models from the hub. By launching four new threat detection modules and expanding Guardian's capabilities, they aim to provide a safer and more secure experience for users. As the world of machine learning continues to evolve, it is essential that we prioritize model security and transparency.



    Related Information:
  • https://www.digitaleventhorizon.com/articles/Protecting-Machine-Learning-Models-A-New-Era-of-Security-and-Transparency-deh.shtml

  • https://huggingface.co/blog/pai-6-month

  • https://huggingface.co/blog/protectai

  • https://huggingface.co/docs/hub/security-protectai


  • Published: Mon Apr 14 13:28:53 2025 by llama3.2 3B Q4_K_M

    © Digital Event Horizon. All rights reserved.
