CLARENDON

Confidential AI Governance Scorecard

Irish Credit Unions and Building Societies

Irish credit unions and building societies face distinct AI governance challenges. With regulatory deadlines under the EU AI Act fast approaching, demonstrating robust governance over AI-driven platforms is a critical priority.

This interactive scorecard provides a confidential, high-level assessment of your organisation's position. Answer the 23 questions below to generate your preliminary score.

Why this matters now (February 2026):

The Central Bank of Ireland has the power to take direct enforcement action under the Individual Accountability Framework (IAF). With the EU AI Act's high-risk obligations applicable from August 2026 and CPC 2025 requirements effective from March 2026, demonstrating governance over AI systems is no longer optional. DORA operational resilience requirements will apply to credit unions from January 2028.


Section 1: AI System Identification and Risk Classification

This section focuses on understanding the AI systems in use and their classification under the EU AI Act.

Q1. Has the organisation conducted and documented a formal inventory of all AI systems currently in use (e.g., loan decisioning, credit scoring, fraud detection, member chatbots, AML monitoring)?
Q2. Have you, in conjunction with your vendors, classified each AI system according to the EU AI Act risk categories (Prohibited, High-Risk, Limited Risk, Minimal Risk) and documented the rationale?
Q3. Looking ahead to the 2028 DORA deadline, have you preliminarily identified which AI platforms would be considered ICT services supporting critical or important functions, particularly those provided by third parties?

Section 2: Accountability, Governance, and the IAF

This section focuses on the Individual Accountability Framework (IAF) obligations that apply to credit unions and building societies.

Q4. Has the board assigned clear, documented responsibility for oversight of AI governance to a specific board member (e.g., Chair of Risk Committee) or a committee, consistent with your governance manual?
Q5. Have you assessed how the use of AI impacts compliance with the Common Conduct Standards (e.g., acting with honesty and integrity, with due skill, care and diligence) and the Additional Conduct Standards for PCF holders (e.g., ensuring the business is controlled effectively)?
Q6. Does your annual F&P certification process for PCF and CF roles now consider the competence and capability required to manage or oversee AI systems relevant to that individual's role?
Q7. Does the board receive regular, clear reporting on the performance of key AI systems, including accuracy, fairness audits, identified biases, and any significant incidents or member complaints?

Section 3: Board and Staff AI Competence

This section addresses the need for upskilling at all levels of the organisation.

Q8. Have board members received tailored training on AI governance, the specific requirements of the EU AI Act for high-risk systems, and their oversight responsibilities in the context of the organisation's risk appetite?
Q9. Have senior management and other PCF holders received practical training on AI literacy and the specific risks and opportunities AI presents for their areas of responsibility?
Q10. Have member-facing and operational staff who interact with AI systems been trained on each system's purpose, its limitations, and the established escalation paths for unexpected results, errors, or member challenges?

Section 4: Risk Management and Human Oversight

This section focuses on the practical controls needed to manage AI risk.

Q11. Have you conducted and documented formal risk assessments for each high-risk AI system, covering areas such as data quality, potential for bias, cybersecurity, and potential for member harm?
Q12. Have you established and documented robust human oversight protocols for AI-driven decisions affecting members (especially in lending), ensuring that automation does not lead to rubber-stamping of decisions without critical review?
Q13. Do you have a process to periodically test for and mitigate bias in AI systems, particularly concerning protected characteristics under Irish equality legislation, to ensure fair outcomes for all members?
Q14. Have you established procedures to track the performance, accuracy, and data drift of your AI models on an ongoing basis to ensure they remain fit for purpose?

Section 5: Third Party and Vendor Management

This section is critical for organisations that rely on external vendors for technology.

Q15. Have you reviewed all contracts for AI systems to ensure they include clauses addressing EU AI Act compliance, data processing responsibilities, liability, and the vendor's obligation to provide necessary transparency and support for audits?
Q16. Is there documented evidence of your due diligence on AI vendors, assessing not just their technical solution but also their own governance, risk management, and compliance position regarding the EU AI Act?
Q17. In preparation for DORA, have you started to build out your Register of Information on all ICT third-party service providers, including your AI vendors?
Q18. Do you have a formal process to assess and approve significant changes or updates to vendor-provided AI systems before they are deployed to your members?

Section 6: Documentation, Transparency, and Member Rights (CPC 2025)

This section links AI governance to the core principles of member trust and the requirements of the new Consumer Protection Code.

Q19. Have you ensured that you have access to, or have created, adequate documentation for each AI system that is accessible and understandable for review by the Central Bank?
Q20. Have you updated your member-facing materials (e.g., loan application forms, privacy notices) to be transparent, in clear and simple language, about the use of AI in decision-making processes?
Q21. Have you established a clear process for a member to request and receive a meaningful explanation of a significant decision made about them by an AI system?
Q22. Is your complaints handling process equipped to manage and investigate complaints related to AI driven decisions, and is there a clear path for a member to appeal a decision they believe is unfair or incorrect?
Q23. Have you updated your incident response procedures to specifically include scenarios such as a high-risk AI system failure, the discovery of significant bias, or a data breach related to an AI platform?

Please answer all 23 questions to calculate your result.