

The EU AI Act Is Now in Force: Are Your Tech Providers Ready to Comply?


The entry into force of the European Union Artificial Intelligence Act marks a turning point: it is the world’s first comprehensive, binding legal framework for AI. The regulation introduces a risk-based approach, categorizing AI systems into four levels of impact: prohibited (unacceptable risk), high risk, limited risk, and minimal risk.

Under the regulation, many AI systems used in the financial sector are classified as high risk because they influence creditworthiness assessment, risk scoring, or customer profiling. This designation brings strict obligations around transparency, governance, explainability, and regulatory oversight.

Key Implications of the EU AI Regulation for the Financial Sector

  • Explainability, traceability, and human oversight: AI models must be understandable, auditable, and subject to human supervision. Financial institutions must document their algorithms and justify each relevant automated decision. Human-in-the-loop processes are mandatory for critical use cases. 
  • Explicit bans: The regulation prohibits manipulative AI applications, social scoring, or profiling that causes unjustified adverse outcomes—such as denying credit based on irrelevant personal data. 
  • Enhanced data protection: AI governance must align with GDPR and EU data protection laws, including impact assessments for sensitive data, ensuring privacy and fundamental rights. 
  • Obligations for both providers and deployers: Banks and financial institutions must maintain an up-to-date AI system inventory, ensure ongoing performance, and implement risk management policies to demonstrate compliance. 
  • Transition period and enforcement: Obligations phase in over a 24-to-36-month transition period, depending on the system’s risk category, with penalties of up to €35 million or 7% of global annual turnover, whichever is greater. 
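Taken together, the explainability, human-oversight, and documentation obligations above can be pictured in code. The following is a minimal, hypothetical sketch; all names, thresholds, and policy rules are invented for illustration and are not drawn from the Act or any real product. It shows a deterministic credit check that records its reasons with every decision and routes adverse outcomes to a human reviewer.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    """An automated decision carrying its own documentation."""
    outcome: str
    reasons: list = field(default_factory=list)
    needs_human_review: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def assess_credit(income: float, existing_debt: float, requested: float) -> Decision:
    """Deterministic credit check with illustrative thresholds."""
    ratio = (existing_debt + requested) / max(income, 1.0)
    if ratio > 0.5:
        # Adverse outcome: record the reason and require human oversight.
        return Decision(
            outcome="refer",
            reasons=[f"debt-to-income ratio {ratio:.2f} exceeds policy limit 0.50"],
            needs_human_review=True,
        )
    return Decision(
        outcome="approve",
        reasons=[f"debt-to-income ratio {ratio:.2f} within policy limit 0.50"],
    )
```

Because each `Decision` carries the outcome, the machine-readable reasons, and a timestamp, every relevant automated decision is documented at the moment it is made, which is the shape of the record-keeping the Act expects for high-risk systems.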

The sector’s new challenges demand solutions that meet business needs while committing to the security and trust required by users and regulators.

Responsible Innovation for Modern Banking: An AI Engine under the New European Framework

Latinia’s Intelligent Engines have powered real-time financial rule processing for over a decade, governed by explainability models. From the outset, our product technology has been based on explainable AI, built on deterministic, understandable rules.

On this foundation, we now take a step forward with a neuro-symbolic approach that merges the generative capability of deep models with the efficiency and traceability of symbolic systems.

Our experience developing Intelligent Engines powered by RE (Rule Engines) and DTs (Decision Trees) — transparent by design and reinforced with a native XAI (Explainable AI) layer — facilitates compliance with AI GRC (Governance, Risk & Compliance) requirements and enables real-time audits as demanded by supervisory authorities.
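As a rough illustration of the rule-engine pattern described above (the rule names and conditions here are invented for the sketch, not Latinia’s actual rules), a deterministic engine can return, with every decision, the exact rule that fired. That one-to-one link between outcome and rule is what makes real-time auditing straightforward:

```python
# Ordered, deterministic rules: first match wins, and the final rule
# always matches, so every event gets exactly one traceable decision.
RULES = [
    ("blocked_country", lambda e: e.get("country") in {"XX"}, "reject"),
    ("large_transfer",  lambda e: e.get("amount", 0) > 10_000, "review"),
    ("default_allow",   lambda e: True,                        "allow"),
]

def evaluate(event: dict) -> dict:
    for name, condition, action in RULES:
        if condition(event):
            # Returning the fired rule alongside the action gives
            # explanation-by-design: an auditor can replay the event
            # and verify the outcome against the rule text.
            return {"action": action, "fired_rule": name, "event": event}
    raise RuntimeError("unreachable: the default rule always matches")

print(evaluate({"country": "ES", "amount": 25_000}))
```

First-match semantics keep the engine’s behavior fully deterministic: the same event always yields the same action and the same explanation, with no hidden state.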

Most importantly, it lays a robust and scalable foundation for the sustainable integration of neuro-symbolic models, capable of combining the power of deep learning with the transparency and efficiency of Latinia’s real-time AI.
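The neuro-symbolic combination can be sketched as a gating pattern. In this hedged example the “model” is a stand-in stub and the policy rule is invented: a statistical component proposes an action, and a deterministic symbolic layer accepts or suppresses it, so the final decision is always rule-governed and traceable.

```python
def neural_propose(customer: dict) -> str:
    # Stand-in for a deep model's suggestion (e.g. a next-best-action).
    # In a real system this would be a learned, probabilistic component.
    return "offer_credit_increase"

def symbolic_guard(proposal: str, customer: dict) -> dict:
    """Deterministic rules have the final word over the model's proposal."""
    if proposal == "offer_credit_increase" and customer.get("missed_payments", 0) > 0:
        return {
            "action": "suppress",
            "reason": "policy: no credit offers after missed payments",
        }
    return {"action": proposal, "reason": "proposal passed all symbolic checks"}

customer = {"missed_payments": 1}
decision = symbolic_guard(neural_propose(customer), customer)
```

Whatever the neural component suggests, only actions that clear the symbolic layer reach the customer, and each decision carries a human-readable reason, preserving the transparency of the rule-based core.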

In today’s context, shaped by the entry into force of the new European Artificial Intelligence Regulation and by requirements such as GDPR, DORA, and EBA guidelines, this technical model becomes even more relevant:

  • It allows every automated action to be clearly and properly documented
  • It facilitates continuous auditing by expert personnel
  • It aligns with key principles such as privacy by design, data sovereignty, and human control over critical decisions

The modular architecture of Latinia’s products, built around its LIMSP© core, provides a solid foundation to ensure scalability, operational resilience, and regulatory compliance in demanding financial environments. These systems not only process events in real time but also ensure service continuity, permanent monitoring, and the ability to respond to technology incidents.

In this context, Latinia is working on the progressive and controlled integration of Intelligent Agents, under strict data governance and security standards.

This technological evolution is driven by Latinia LAB, our applied innovation space where every advancement is tested and validated under ethical, technical, and regulatory criteria. In this new scenario, AI acts as a fast, expert assistant — yet always subordinate to the supervision and final responsibility of human experts.

The BETA version of these AI Agent-based functionalities will be available for deployments on version 04.25.01 and above, reinforcing Latinia’s commitment to safe, explainable innovation fully aligned with the European regulatory framework.

Categories: Cloud & Tech, Security & Compliance

