Securing AI
Integrating federated learning with additional security techniques can significantly enhance the protection of AI applications, ensuring data privacy, regulatory compliance, and robust defence against threats.
The LeakPro project aims to build an open-source platform designed to evaluate the risk of information leakage in machine learning applications. It assesses leakage in trained models, federated learning, and synthetic data, enabling users to test under realistic adversary settings.
Built in collaboration with AstraZeneca, AI Sweden, RISE, Syndata, Sahlgrenska University Hospital, and Region Halland.
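One common way to evaluate the kind of leakage LeakPro targets is a membership inference test: an adversary tries to decide whether a given record was part of a model's training set, typically by exploiting the gap between the model's loss on seen and unseen data. The sketch below is purely illustrative (it is not LeakPro's implementation, and all names are hypothetical); it uses a simple loss-threshold attack on synthetic loss values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained model's per-example loss: members
# (training data) tend to have lower loss than held-out data.
member_losses = rng.normal(loc=0.2, scale=0.1, size=500)     # seen in training
nonmember_losses = rng.normal(loc=0.8, scale=0.3, size=500)  # held out

def infer_membership(losses, threshold):
    """Predict 'member' whenever the loss falls below the threshold."""
    return losses < threshold

threshold = 0.5
tpr = infer_membership(member_losses, threshold).mean()     # true positive rate
fpr = infer_membership(nonmember_losses, threshold).mean()  # false positive rate

# An advantage well above 0 indicates measurable information leakage.
advantage = tpr - fpr
print(f"attack advantage: {advantage:.2f}")
```

A well-generalised or differentially private model narrows the loss gap between members and non-members, driving the attack advantage towards zero; that is the quantity a leakage audit of this kind reports.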
Related projects
A selection of our current public cybersecurity projects
IoT IDS
An advanced Intrusion Detection System (IDS) for IoT that uses
federated learning to analyse data in a decentralised way,
improving security without compromising data privacy.
Learn more »
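The decentralised training underlying a project like this is typically federated averaging: each client trains on its own private data and only model parameters, never raw data, are sent to the server for aggregation. A minimal illustrative sketch under that assumption (all names are hypothetical, not the project's code), using least-squares regression as the local task:

```python
import numpy as np

rng = np.random.default_rng(42)

def local_update(weights, client_data, lr=0.1, epochs=5):
    """One client's local step: gradient descent on its private data.
    Only the updated weights leave the device."""
    X, y = client_data
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(weights, clients):
    """Server step: average the clients' locally trained weights.
    Raw client data never reaches the server."""
    updates = [local_update(weights, data) for data in clients]
    return np.mean(updates, axis=0)

# Three clients, each holding private samples from the same linear model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.05, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_average(w, clients)

print(w)  # converges towards true_w
```

The same loop structure applies to an IDS: each IoT gateway trains a detector on its local traffic, and the aggregated model benefits from all sites' data without any site exposing its logs.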
AI Honeypots
A new approach to AI security that integrates honeypots into
federated learning networks to identify unknown threats and
uses the collected data to build resilient AI solutions.
Learn more »
Interstice
Intelligent security solutions for connected vehicles,
focusing on on-vehicle intrusion detection to evaluate risks
and identify realistic attack vectors. With Scania CV as the
principal coordinator.
Learn more »
Secure Enclaves (TEE)
A solution that uses secure enclaves to protect machine
learning workloads on local clients and ensure their trusted
execution.
Learn more »
Partners
Our AI security projects bring together a network of trusted partners and leading experts in machine learning and cybersecurity.