Use cases

Use cases range from enhanced machine learning in cybersecurity and defence to risk mitigation and intelligent intrusion detection in vehicles.

Securing AI

Integrating federated learning with additional security techniques can significantly enhance the protection of AI applications, ensuring data privacy, regulatory compliance, and robust defence against threats.


Risk evaluation
LeakPro
The LeakPro project aims to build an open-source platform designed to evaluate the risk of information leakage in machine learning applications. It assesses leakage in trained models, federated learning, and synthetic data, enabling users to test under realistic adversary settings.
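One common way to evaluate information leakage in a trained model is a membership inference test: because models often fit their training data more closely than unseen data, an adversary can guess membership from per-sample loss. The sketch below is a minimal, self-contained illustration of that idea using synthetic loss values; it is not LeakPro's actual implementation, and the distributions are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: a model typically has lower loss on its
# training members than on unseen data; a membership-inference
# adversary exploits this gap. These loss values are made up.
member_losses = rng.normal(loc=0.3, scale=0.2, size=1000)     # training data
nonmember_losses = rng.normal(loc=0.9, scale=0.4, size=1000)  # unseen data

def threshold_attack_accuracy(members, nonmembers, threshold):
    """Classify 'member' when loss < threshold; return balanced accuracy."""
    tpr = np.mean(members < threshold)       # members correctly flagged
    tnr = np.mean(nonmembers >= threshold)   # non-members correctly passed
    return (tpr + tnr) / 2

# Sweep thresholds and report the strongest attack found, a rough
# proxy for how much membership information the model leaks.
thresholds = np.linspace(0.0, 2.0, 200)
best = max(threshold_attack_accuracy(member_losses, nonmember_losses, t)
           for t in thresholds)
print(f"best attack balanced accuracy: {best:.2f}")  # 0.5 means no leakage
```

An attack accuracy near 0.5 indicates the adversary can do no better than guessing; values well above 0.5 signal measurable leakage.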

Built in collaboration with AstraZeneca, AI Sweden, RISE, Syndata, Sahlgrenska University Hospital, and Region Halland.


Partner logos

Related projects

A selection of our current public cybersecurity projects

IoT IDS

An advanced Intrusion Detection System (IDS) for IoT using federated learning, enhancing security and privacy by leveraging decentralised data analysis without compromising data privacy.
Learn more »

AI Honeypots

A new approach to AI security by integrating honeypots into federated learning networks to identify unknown threats and use the collected data to create resilient AI solutions.
Learn more »

Interstice

Intelligent security solutions for connected vehicles, focusing on on-vehicle intrusion detection to evaluate risks and identify realistic attack vectors, with Scania CV as the principal coordinator.
Learn more »

Secure Enclaves (TEE)

A solution that uses secure enclaves to protect machine learning on local clients and ensure trusted execution.
Learn more »


Partners

Our AI security projects bring together a network of trusted partners and leading experts in machine learning and cybersecurity.


Medical AI

AI has great potential in medicine for tasks like pathology, organ segmentation, and anomaly detection. However, training effective machine learning models depends on annotated data that doctors must create manually.


Medical AI
Itea Assist
The challenge in medical AI is that sensitive health data is customer-owned and heavily regulated; it cannot be shared or moved off-site, which limits its use in improving AI.

Single clinics often lack diverse, large datasets, and anonymizing data is costly and reduces utility. Federated learning addresses this by allowing multiple clinics to contribute to AI model improvements without moving or sharing their data.
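The core idea can be sketched in a few lines of federated averaging (FedAvg): each clinic trains on its own private data, and only model updates leave the site, where a server averages them weighted by local dataset size. The clinic data, model, and sizes below are all synthetic assumptions for illustration, not the project's actual setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: three clinics each hold private features and
# labels for the same linear model; raw data never leaves a clinic.
true_w = np.array([2.0, -1.0, 0.5])
clinics = []
for n in (200, 500, 300):                     # unequal local dataset sizes
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clinics.append((X, y))

def local_step(w, X, y, lr=0.1):
    """One gradient step on a clinic's private data (least squares)."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

w = np.zeros(3)                               # shared global model
sizes = np.array([len(y) for _, y in clinics])
for _ in range(50):
    # Each clinic trains locally; the server averages the resulting
    # models, weighted by dataset size, as in federated averaging.
    local_ws = [local_step(w, X, y) for X, y in clinics]
    w = np.average(local_ws, axis=0, weights=sizes)

print(np.round(w, 2))  # approaches true_w without pooling any raw data
```

Only the weight vectors cross clinic boundaries here; in practice these updates are further protected with techniques such as secure aggregation or differential privacy.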


Partners

The Itea Assist project brings together a group of expert partners in the fields of machine learning, therapy, and medical imaging.
