September 22, 2025

How We Ensure Safe, Compliant, and Fair AI at LittleBig Connection?

Published by Léo Galera

At LittleBig Connection, we leverage artificial intelligence to enhance how companies identify and manage external talent. As AI becomes a bigger part of daily business operations, the demand for transparency, fairness, and compliance continues to rise. This is especially relevant in light of the EU AI Act, which introduces clear guidelines for responsible AI use.

In this article, we’ll walk you through the AI features on our platform, the technology behind them, and how we protect data, ensure fairness, and stay compliant every step of the way.

AI Features on the Platform

To offer an augmented procurement experience, we provide our clients with two key AI-powered features:

AI RFP Writer

This generative assistant helps users create or improve RFP titles and descriptions. It simplifies drafting and ensures consistent quality in project briefs.
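
To make this concrete, here is a minimal, illustrative sketch of what such a call can look like with the Azure OpenAI Python SDK. The endpoint, API version, deployment name, and prompt below are placeholders for illustration, not our production configuration.

    # Illustrative sketch only: endpoint, deployment name, and prompt are placeholders.
    import os
    from openai import AzureOpenAI  # Azure OpenAI Python SDK (openai >= 1.x)

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # an Azure OpenAI resource endpoint
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",  # example API version
    )

    def improve_rfp_description(title: str, draft: str) -> str:
        """Ask the model to rewrite a rough draft into a clearer RFP description."""
        response = client.chat.completions.create(
            model="rfp-writer",  # placeholder name of an Azure OpenAI deployment
            messages=[
                {"role": "system",
                 "content": "You help procurement teams write clear, well-structured RFP descriptions."},
                {"role": "user",
                 "content": f"Title: {title}\n\nDraft:\n{draft}\n\nRewrite this as a concise, professional RFP description."},
            ],
        )
        return response.choices[0].message.content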

AI Score

This feature evaluates how well a consultant matches a Request for Proposal (RFP). It compares the RFP’s title and description with the consultant’s professional profile and returns a numerical score along with a written explanation. The AI Score is designed to assist decision-making, not replace it.
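
As an illustration of how a score and explanation can be produced together, here is a simplified sketch that reuses the placeholder client from the previous example. The field names, prompt, and output format are assumptions made for readability, not our production pipeline.

    # Illustrative sketch only: prompt, fields, and deployment name are placeholders.
    import json

    def score_consultant(rfp_title: str, rfp_description: str, profile: dict) -> dict:
        """Return a match score (0-100) and a short written explanation.

        `profile` contains only role-relevant attributes (skills, experience,
        education); no personal data is ever included.
        """
        prompt = (
            f"RFP title: {rfp_title}\n"
            f"RFP description: {rfp_description}\n\n"
            f"Consultant profile (anonymised): {json.dumps(profile)}\n\n"
            "Rate how well this profile matches the RFP on a scale of 0 to 100 "
            "and explain your reasoning. Respond as JSON with keys 'score' and 'explanation'."
        )
        response = client.chat.completions.create(
            model="ai-score",  # placeholder name of an Azure OpenAI deployment
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},  # request machine-readable output
        )
        return json.loads(response.choices[0].message.content)

The score is always shown together with its explanation, so users can see why a profile was rated the way it was before making any decision.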

What Models Do We Use to Power Our AI Features?

Our AI features are powered by OpenAI models provided through the Azure OpenAI Service.

These models are accessed exclusively through Microsoft's Azure infrastructure. We do not use OpenAI’s public API or services such as ChatGPT. All AI workloads are processed through Azure France Central, ensuring that data is stored and handled within the European Union in compliance with GDPR and EU data residency requirements.
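
As a simplified illustration of how such a residency constraint can be kept visible in configuration (the environment variable names and values below are placeholders, not our actual setup), a start-up check might look like this:

    # Illustrative configuration guard: variable names and values are placeholders.
    import os

    EXPECTED_REGION = "francecentral"  # Azure France Central (EU)

    # Refuse to start if the configured Azure OpenAI resource is not the one
    # deployed in the expected EU region.
    configured_region = os.environ.get("AZURE_OPENAI_REGION", "")
    if configured_region != EXPECTED_REGION:
        raise RuntimeError(
            f"Azure OpenAI resource must be deployed in {EXPECTED_REGION}, "
            f"but configuration says '{configured_region}'."
        )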

How Azure Guarantees Data Privacy and Isolation

All AI features of our solution are powered by Microsoft’s Azure OpenAI Service. This means that when our platform uses AI to assist with tasks like proposal matching or RFP writing, the data is processed within Microsoft’s secure cloud infrastructure. It does NOT go to OpenAI directly, and it is NOT handled by public AI services.

Here is what this means for you as a user:

  • The information sent to the AI, such as job descriptions or candidate skills, is NOT shared with any other Microsoft customer.

  • Your data is NOT accessible to OpenAI.

  • None of the content you enter is used to train or improve OpenAI models or Microsoft services.

  • All processing takes place within Microsoft’s Azure data centers located in the European Union, specifically in the Azure France Central region.

  • The outputs generated by the AI belong to you and are handled in full compliance with GDPR and European data protection standards.

Your data stays private. It’s used only to deliver AI features on our platform and never for external training or commercial use.

For full details, please refer to Microsoft’s official policy: Azure OpenAI Service Data, Privacy, and Security.

Privacy by Design and Data Minimization

We apply strict data minimization principles in every AI interaction. For example, when using features like AI Score, no personally identifiable or bias-prone information is ever shared: the model never sees names, photos, contact details, age, gender, or nationality. Instead, we focus only on the role-relevant, professional attributes that matter for the task at hand.

This approach helps us reduce the risk of bias and ensures that AI scoring is based solely on relevant experience, skills, and qualifications.
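
A minimal sketch of what this field-level minimization can look like in practice (the field names are illustrative examples, not our actual data model):

    # Illustrative sketch: field names are examples, not our actual data model.

    # Only role-relevant, professional attributes may be sent to the model.
    ALLOWED_FIELDS = {"job_title", "skills", "years_of_experience", "education", "certifications"}

    def minimise_profile(raw_profile: dict) -> dict:
        """Keep only whitelisted professional fields; drop everything else
        (names, photos, contact details, age, gender, nationality, ...)."""
        return {key: value for key, value in raw_profile.items() if key in ALLOWED_FIELDS}

    # Example: personal fields are dropped before anything reaches the model.
    raw = {"name": "Jane Doe", "age": 42, "skills": ["Python", "Azure"], "years_of_experience": 8}
    assert "name" not in minimise_profile(raw) and "age" not in minimise_profile(raw)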

Fairness and Bias Mitigation

Our AI is designed to support fair, unbiased decisions. By removing demographic and sensitive data from inputs, we eliminate variables that could lead to discrimination. The model focuses only on what matters: skills, experience, and fit for the role.

We also run regular internal audits to review how the AI performs. If we spot any inconsistencies or unintended bias, we act fast to correct them, ensuring fairness stays at the core of everything we do.
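
As a simplified illustration of the kind of automated check that can support such audits (the patterns below are examples, not our audit tooling), one can scan the prompts sent to the model for anything that looks like personal data:

    # Illustrative sketch: the patterns are examples, not our audit tooling.
    import re

    # Simple patterns for obviously personal data that should never appear in a prompt.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "phone": re.compile(r"\+?\d[\d .-]{8,}\d"),
    }

    def find_pii(prompt_text: str) -> list[str]:
        """Return the names of any personal-data patterns detected in a prompt."""
        return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt_text)]

    def audit_prompts(prompts: list[str]) -> list[tuple[int, list[str]]]:
        """Flag any logged prompt that matches a pattern, for human review."""
        return [(i, hits) for i, prompt in enumerate(prompts) if (hits := find_pii(prompt))]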

Compliance with the EU AI Act

We design and operate our AI features in alignment with the principles and obligations set out in the EU AI Act. Our approach includes the following measures:

  • Human oversight. AI is used to support, not replace, decision-making. Final decisions remain in the hands of human users.

  • Transparency. We clearly label all AI-powered features on the platform with the "AI" tag so that users understand when they are interacting with artificial intelligence. Additionally, every AI-generated score or suggestion is accompanied by a written explanation.

  • Bias mitigation. We exclude all personal or demographic data from the AI input, ensuring that evaluations are based only on relevant professional information such as skills, experience, and education.

  • Secure infrastructure. All AI processing takes place in the Azure France Central region, ensuring data remains within the European Union.

  • Documented governance. Our data processing pipelines are secure, traceable, and compliant with GDPR and internal standards.

These measures help ensure that our AI systems are not only effective but also trustworthy, lawful, and fair.

In addition, we conduct regular audits of how we use data and comply with ISO 27001 and ISO 9001. We are also certified by CyberVadis, with a score of 970/1000.

Questions and Further Information

If you are a client, prospective partner, or simply interested in how we apply AI responsibly, we encourage you to reach out to your regular point of contact or get in touch with our product team.

We are committed to building technology that is secure, transparent, and fair. As regulations and expectations evolve, we will continue to strengthen our safeguards and share updates on our approach to responsible AI.
