AIA Guard is an end-to-end solution that automatically analyses your entire machine learning workflow, with particular attention to data poisoning, model interpretability, data leakage and adversarial machine learning. It is designed for data scientists, who use AIA Guard to receive adversarial samples and feedback on the models they intend to deploy. AIA Guard is a project developed by Datrix with the support of Rheasoft. Datrix is a tech company group specialised in Augmented Analytics and Machine Learning, listed on Euronext Growth Milan. Rheasoft is an IT development company covering a wide range of IT areas, including application development, data migration, complex integrations, and cloud development.
The solution is composed of three modules:
– Adversarial Attacks Defence: focuses on defending against adversarial attacks on machine learning models, improving the robustness of AI systems against such attacks.
– Data Anonymization and Privacy Preservation: focuses on protecting sensitive information and privacy within the AI ecosystem. By anonymizing data used for training and inference, the solution ensures compliance with privacy regulations and minimises the risk of data breaches.
– Interpretability for AI Transparency: focuses on enhancing the interpretability of AI models, providing insights into their decision-making processes and allowing users to better understand and trust model outputs, thereby supporting the adoption of AI technologies.
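To make the first module's notion of an "adversarial sample" concrete, the sketch below generates one with the Fast Gradient Sign Method (FGSM) against a plain scikit-learn logistic regression. This is a minimal illustration of the attack class the Adversarial Attacks Defence module addresses, not AIA Guard code: the model, the epsilon value, and the `fgsm_perturb` helper are all assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a simple binary classifier as a stand-in for a user's model.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = LogisticRegression().fit(X, y)

def fgsm_perturb(model, x, y_true, eps=0.5):
    """FGSM for logistic regression (illustrative helper, not an AIA Guard API).

    For the logistic loss, the gradient with respect to the input is
    (sigmoid(w.x + b) - y) * w, so the attack nudges every feature by
    eps in the sign of that gradient, maximising the loss locally.
    """
    w = model.coef_.ravel()
    b = model.intercept_[0]
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted probability of class 1
    grad = (p - y_true) * w                  # input-gradient of the loss
    return x + eps * np.sign(grad)

x0 = X[0]
x_adv = fgsm_perturb(clf, x0, y[0], eps=0.5)
print("clean prediction:", clf.predict(x0.reshape(1, -1))[0])
print("adversarial prediction:", clf.predict(x_adv.reshape(1, -1))[0])
```

A defence module would typically evaluate the model on many such perturbed inputs and report how often predictions change, or retrain the model with the adversarial samples mixed into the training set (adversarial training).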