Sensitive data and AI: Why data security comes first

29 October 2025 – Mandy Weinand

Guest article by Livia Schröder:

Artificial intelligence has long been part of everyday life in companies and public administrations. Chatbots answer customer questions, assistance systems help with texts or documentation, and internal workflows are increasingly supported by AI. But as helpful as these tools are, they can also be dangerous, especially when sensitive personal data is carelessly entered into external AI systems.

The risk: data disclosure without control

Many AI tools run as cloud services from international providers. Anyone who enters information there automatically places it in the hands of third parties. This entails several risks:

  • Personal data such as names, addresses, medical or financial information could end up in training data or be viewed by third parties.
  • Legal conflicts can arise when data flows to countries that are not subject to the same data protection standards as Switzerland or the EU.
  • Confidentiality is not guaranteed, especially for trade secrets, patient records or employee information, where a single mistake can have serious consequences.

In short, anyone who enters sensitive data into public AI tools risks losing control over their information.

Secure alternatives: How to do it right

Fortunately, there are ways to reap the benefits of modern AI without compromising data security.

1. RAG (Retrieval-Augmented Generation)

With RAG, sensitive information is not transferred wholesale to the AI model. Instead, it is stored in a protected internal database, and only the excerpts relevant to a given question are retrieved and passed to the model as context.

Example:
An employee asks a chatbot about holiday entitlement after a wedding. Instead of sending the entire personnel file to an external AI, the system retrieves only the relevant policy passage from an internal data source and uses it to generate the answer.

This means that the knowledge remains within the company, but the AI still provides a precise and helpful answer.
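
To make the principle concrete, here is a minimal sketch in Python. The keyword-overlap retriever and the in-memory document list are deliberately crude stand-ins for the embeddings and vector database a real deployment would use, and the sample policy texts are invented for illustration.

```python
import re

# Minimal RAG sketch: only the excerpt relevant to the question is handed
# to the model; the knowledge base itself never leaves the protected store.

# Internal knowledge base, e.g. HR policy snippets (invented for illustration).
DOCUMENTS = [
    "Employees receive one paid day of leave for their own wedding.",
    "Standard holiday entitlement is 25 days per calendar year.",
    "Sick leave requires a doctor's note from the third day of absence.",
]

def tokenize(text: str) -> set[str]:
    """Lowercased word set, used for crude relevance scoring."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by keyword overlap with the query (a stand-in for
    embedding similarity in a real vector database)."""
    words = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(words & tokenize(d)), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str, excerpts: list[str]) -> str:
    """Combine only the retrieved excerpts with the question."""
    context = "\n".join(excerpts)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

question = "How much leave do I get after my wedding?"
prompt = build_prompt(question, retrieve(question, DOCUMENTS))
print(prompt)  # only this prompt goes to the model, not the whole knowledge base
```

The last line is the whole point of the pattern: the model sees one retrieved excerpt plus the question, never the full data set.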

2. Anonymisation of data

Another option is to anonymise or pseudonymise data before using it in AI workflows.

  • Names are replaced with placeholders,
  • identifiers are removed, and
  • sensitive details are obscured.

This allows many use cases to be covered without compromising the privacy of employees or customers.
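
A minimal sketch of what this can look like, again in Python: regular expressions catch obvious identifiers such as e-mail addresses and phone numbers, while the fixed name list stands in for the named-entity recognition a real anonymisation pipeline would use. The sample record is invented.

```python
import re

# Minimal pseudonymisation sketch. Production systems use NER-based tools
# to detect names reliably; the fixed name list here is purely illustrative.

KNOWN_NAMES = ["Anna Muster"]  # in practice detected automatically, not listed

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
}

def pseudonymise(text: str) -> tuple[str, dict[str, str]]:
    """Replace names and identifiers with numbered placeholders and return
    the mapping so answers can be re-identified internally."""
    mapping: dict[str, str] = {}
    for i, name in enumerate(KNOWN_NAMES, start=1):
        if name in text:
            mapping[f"[NAME_{i}]"] = name
            text = text.replace(name, f"[NAME_{i}]")
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(dict.fromkeys(pattern.findall(text)), start=1):
            mapping[f"[{label}_{i}]"] = match
            text = text.replace(match, f"[{label}_{i}]")
    return text, mapping

record = "Contact Anna Muster at anna.muster@example.ch or +41 79 123 45 67."
safe_text, mapping = pseudonymise(record)
print(safe_text)  # placeholders only: this is what an external tool may see
print(mapping)    # stays inside the organisation, never sent with the prompt
```

Keeping the mapping internal is what makes this pseudonymisation rather than anonymisation: the organisation can restore the references, the external tool cannot.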

3. Protected AI environments

More and more companies are turning to local AI models or sovereign clouds, where the systems run entirely on their own infrastructure. This allows organisations to retain full control over which data is processed and where it is stored.
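
As a sketch of how the calling side can look: the snippet below sends a prompt to a model served on the organisation's own machine. It assumes a locally running Ollama server on its default port; the URL and model name must of course match your own deployment.

```python
import json
import urllib.request

# Minimal sketch: query a model hosted on your own infrastructure, so the
# prompt never leaves the organisation. Assumes a local Ollama server on
# its default port; adjust the URL and model name to your deployment.

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    request = urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

print(ask_local_model("Summarise our leave policy for weddings in one sentence."))
```

Because the endpoint is localhost, neither the prompt nor the answer ever crosses the organisation's network boundary.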

What does this mean for companies in Switzerland?

For Swiss companies and authorities, the situation is clear: anyone who relies on off-the-shelf international AI services must expect data to end up outside their own sphere of influence. This is not only a data protection issue, but can also entail compliance risks, particularly with regard to the GDPR and the EU AI Act, whose obligations are being phased in.

However, those who build on secure AI architectures with RAG, anonymisation and local control can use modern AI productively without putting their own data at risk.

Conclusion:

AI is a key technology, but it must not be used at the expense of data security. Sensitive personal data does not belong in public AI tools.

The solution lies in protected architectures, anonymised data and sovereign infrastructures. Companies that rely on these secure approaches today not only protect their customers and employees, but also secure long-term trust and freedom of action.

Learn more about AI and data in the presentation with Livia Schröder. Contact us for a booking enquiry: 1 (704) 804 1054 or livia.schroeder@premium-speakers.com

Livia Schröder

Generation Z entrepreneur, speaker on AI, change and the future of work