Agentoryx AI Transparency Statement

The responsible use of artificial intelligence is a core element of our product philosophy and service model. We deploy AI technologies exclusively where they provide clear, measurable value, improve operational processes, or support informed strategic decision-making. At the same time, we ensure that all AI use remains transparent, lawful, and fully aligned with the requirements of the European Union AI Act. Transparency, security, fairness, and reliability form the foundation of our approach to AI systems.

Our AI models and AI-supported functions are used solely for clearly defined purposes. They assist users, for example, with data analysis, process automation, decision support, pattern recognition, or text processing. Wherever AI is used, this is communicated openly. Decisions with legal or materially significant organisational impact are not taken fully automatically. Qualified, responsible professionals remain in control and perform the final assessment and decision.

We place strong emphasis on the quality and appropriateness of data used in AI-supported processes. Data is collected and processed only to the extent necessary for the respective purpose. Wherever possible, we apply data-minimisation techniques, anonymisation, pseudonymisation, or technical methods that reduce the identifiability of individuals. The processing of personal data is carried out exclusively in accordance with the GDPR and on valid legal bases. Data sources, model foundations, and training data are carefully reviewed, documented, and—where legally permissible—explained transparently.
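As an illustration of one of the data-minimisation techniques mentioned above, pseudonymisation can replace a direct identifier with a keyed hash, so records stay linkable for analysis without exposing the raw value. This is a minimal sketch, not a description of Agentoryx's actual implementation; the key name and function are hypothetical.

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (pseudonym).

    Using HMAC with a secret key, rather than a plain hash,
    hinders re-identification by dictionary attack, provided the
    key is stored separately and access-controlled (a core idea
    of pseudonymisation under GDPR Art. 4(5)).
    """
    return hmac.new(secret_key, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# The same identifier always maps to the same pseudonym, so
# records remain linkable without exposing the raw value.
key = b"example-key-stored-separately"  # hypothetical key material
p1 = pseudonymise("user@example.com", key)
p2 = pseudonymise("user@example.com", key)
assert p1 == p2
assert "user" not in p1
```

Note that pseudonymised data is still personal data under the GDPR, since re-identification remains possible for whoever holds the key; only full anonymisation takes data out of scope.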

Potential risks associated with the use of AI technology are systematically assessed. This includes risks related to data protection, information security, incorrect or misleading outputs, model bias, transparency requirements, and potential impacts on individuals or organisations. Risk assessments follow structured procedures aligned with the EU AI Act and recognised best practices. Where risks cannot be entirely eliminated, they are minimised, clearly communicated, and addressed through appropriate technical and organisational measures.

We comply with the requirements of the European Union AI Act and adhere to its principles for safe and trustworthy AI. Systems classified under the AI Act as posing “minimal risk” or “limited risk” are appropriately labelled, documented, and managed. For any functions that may qualify as “high-risk systems” under the AI Act, we conduct a comprehensive assessment and deploy only solutions that fully meet all regulatory requirements, including risk management, transparency obligations, training-data quality standards, technical robustness, and human oversight.

Transparency is one of our core principles. We clearly explain where AI is used, which tasks are automated, and where the limits of the technology lie. Users must be able to recognise whether an interaction or process is supported by AI. Where content is generated or modified automatically, this is clearly indicated. AI-generated recommendations and outputs are presented in a way that remains understandable, explainable, and contextually interpretable at all times.

Information security is an integral part of all AI-supported functions. Models, interfaces, and data flows are designed to prevent unauthorised access. Model integrity and data integrity are continuously monitored so that manipulation or unauthorised interference can be detected at an early stage. Security updates, model reviews, and continuous quality controls are established components of our operational processes.

We also consider the potential impact of AI on learning environments and educational contexts. For educational institutions, we provide clear and accessible explanations that enable learners and educators to understand how AI-supported functions operate, which data is processed, and how human oversight is ensured. Transparent communication is intended to foster trust and strengthen digital and media literacy. In these contexts, AI is used strictly as a supportive tool, not for assessment, discipline, or surveillance.

For public authorities and SMEs, we provide additional information on compliance, data security, and operational risks. We actively support the integration of our systems into existing administrative and security structures and ensure that the use of our AI solutions is documented, auditable, and legally robust. Our processes are designed to efficiently meet both internal compliance requirements and external regulatory obligations.

The development, monitoring, and maintenance of our AI solutions follow the principle of continuous improvement. Changes in legal frameworks, new requirements arising from the AI Act, technological developments, and feedback from users are continuously incorporated into our development processes. Our models and procedures are regularly reviewed, evaluated, and updated to ensure sustained performance, transparency, and security.

We stand for a responsible, human-centred approach to artificial intelligence. AI is intended to support, not replace. It should reduce workload, not exert control. And it should improve decisions, not influence them unnoticed. We invite our customers, partners, and users to ask questions, raise concerns, or request further information. For us, transparency is not merely a legal requirement, but a fundamental principle of our actions.

Transparent AI builds trust because decisions, processes, and limitations remain understandable. This traceability is also a prerequisite for secure systems, as only comprehensible models can be effectively monitored, audited, and protected. Information security ensures that transparent AI is not only explainable, but also reliably protected against misuse, manipulation, and data loss.


Frequently Asked Questions (FAQ)

Why does your company use AI?
We use AI to make processes more efficient, improve analyses, and support users in making well-founded decisions. AI is applied only where it serves a clearly defined purpose, creates tangible value, and delivers explainable results. Its use is never an end in itself, but always focused on user benefit.

How can I identify where AI is used in your products?
We make the use of AI transparent at all relevant points. Functions based on AI models are clearly labelled and described in understandable terms. Users can always see which tasks are supported by automation and which remain under human responsibility.

Which data is used for AI-supported functions?
Only data required for the specific purpose is processed. Personal data is handled strictly in accordance with the GDPR and, where possible, anonymised or pseudonymised. Data sources, model foundations, and processing activities are documented to ensure traceability.

Is personal data used to train AI models?
Personal data is not used to train proprietary AI models without an explicit legal basis, transparency, and strict security measures. Models are preferably developed using anonymised, synthetic, or data-minimised approaches. Training data is subject to defined retention periods and access restrictions.

How do you ensure AI systems do not produce incorrect results?
AI models are tested, validated, and reviewed before deployment. During operation, output quality is continuously monitored. Results that may lead to misinterpretation are clearly flagged. Where necessary, human oversight is an integral part of the process.

How do you handle risks and potential errors?
Risks are systematically assessed and documented, including data-protection risks, model bias, interpretation errors, and potential impacts on users or affected parties. Identified risks are mitigated through technical or organisational measures, and users are informed transparently about relevant residual risks.

What role does the EU AI Act play in your AI development?
Our AI-supported functions are aligned with the requirements of the EU AI Act. Systems are assessed according to their risk level and documented accordingly. For potential high-risk functions, we apply enhanced scrutiny covering input data, model robustness, transparency, human oversight, technical security, and risk management.

Can AI make fully automated decisions in your systems?
No. Decisions with legal or materially significant organisational impact are not taken fully automatically. AI provides support, recommendations, or analyses, but responsibility for critical decisions remains with qualified professionals.

How are AI models monitored and updated?
Models are subject to defined quality, security, and update processes. User feedback, technical advances, regulatory changes, and new insights are incorporated into regular reviews. Models showing deficiencies are revised or replaced.

How do you ensure information security in AI systems?
AI models, interfaces, and data flows are protected against unauthorised access. We apply encryption, secure infrastructure, modern protection mechanisms, and monitored access controls. Manipulation attempts or attacks on data integrity are detected early through monitoring and logging.

Who has access to data and AI models?
Access rights are clearly defined and limited to what is strictly necessary. Only authorised personnel with assigned roles may work with sensitive data and AI systems. External service providers are involved only if they demonstrably meet appropriate security and data-protection standards.

How do you handle AI-generated content?
Automatically generated content is identified as such where professionally or legally required. Users can recognise whether a text, analysis, or recommendation is AI-supported. Content with legal or security relevance is always reviewed by qualified professionals.

How is AI used responsibly in educational settings?
In educational contexts, AI is used exclusively as a supportive tool. It serves to facilitate learning processes, improve understanding, or present content in a structured way. AI is not used for monitoring, control, or assessment of learners. Educators receive clear explanations of how AI functions work, which data is processed, and what limitations apply.

How are public authorities supported in using your AI solutions?
We provide public bodies with structured information on AI risks, data security, compliance, and documentation requirements. Integration into existing administrative and security frameworks is actively supported. Our AI systems are designed to be traceable, auditable, and compliant with regulatory and internal governance requirements.

How can I obtain further information about your AI systems?
We are always available to answer questions regarding AI usage, data processing, or systemic risks. On request, we provide technical documentation, compliance information, or detailed explanations. Our goal is to ensure transparency and strengthen trust in our AI-supported solutions.


Further Information and References