CourtCorrect Terms & policies
Last updated: 11.12.2025 15:35:45
Introduction
1.1. Executive Summary
CourtCorrect’s Artificial Intelligence Safety Policy establishes the principles, governance, and procedures ensuring that all AI technologies are safe, transparent, accountable, and compliant with applicable UK and international regulations. This policy governs the entire AI lifecycle—from design and training to deployment and monitoring—and applies to all CourtCorrect staff, contractors, and partners. This policy also sets out CourtCorrect’s risk classification approach, documentation standards, and human oversight requirements, aligned with UK cross-sector AI principles, supportive of our FCA-regulated clients’ expectations for responsible model governance, and consistent with the applicable provisions of the EU AI Act for limited-risk AI systems.
1.2. Background
CourtCorrect commits to ensuring the safety, reliability, and ethical integrity of our Artificial Intelligence (AI) systems. This policy sets out our framework for the development, deployment, and governance of AI technologies, upholding standards that mitigate risks such as bias, discrimination, and other concerns noted by regulators such as the Financial Conduct Authority and the Prudential Regulation Authority. Our governance framework draws on industry and regulatory best practices to reduce ethical concerns, eliminate ambiguity and promote accountability across our AI systems. By embedding our core ethical principles across the organisation, CourtCorrect aims to continue driving innovation while adhering to the highest standards of safety and reliability. We review and update this policy in line with legislative and regulatory changes, as well as in light of the fast-paced nature of innovation in this area.
Core Principles
CourtCorrect’s AI principles are developed in line with guidance from the FCA, the PRA, the UK’s Department for Science, Innovation and Technology (DSIT), and the EU AI Act. Any future updates to these frameworks will be reflected in subsequent policy revisions.
2.1. Transparency and Accountability
Decisions influenced by AI systems are documented, traceable, and comprehensible to end users, auditors, and regulators. Our transparency standards include:
Meaningful Explanation: Users, clients, and internal staff must be able to access clear and understandable explanations of how AI-assisted outcomes were produced, including the key factors that influenced the system’s reasoning and any limitations inherent in the model.
Decision Documentation: AI-assisted actions and recommendations are logged in a manner that supports internal audit, regulatory inspection, and post-event review. Documentation includes input sources, relevant model versions, decision rationale summaries, and human oversight actions (an illustrative sketch follows this list).
User Communication: Individuals interacting with CourtCorrect systems are informed when AI is being used, the purpose of such use, and the expected role of human oversight. Communication is designed to be accessible and non-technical, in line with the EU AI Act’s transparency requirements and the UK’s cross-sector AI principles.
Auditability: CourtCorrect maintains technical and operational records that enable effective auditing of AI behaviour, model updates, and risk mitigation measures, ensuring accountability across the AI lifecycle.
Contestability and User Challenge: Users and clients must be able to challenge, correct, or request review of any AI-assisted output. Mechanisms for escalation, appeal, or human intervention are available to ensure fair and accountable outcomes, in line with UK DSIT principles.
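For illustration only, the sketch below shows one possible shape for such a decision-log entry in Python; every field name is an assumption made for this example rather than CourtCorrect’s actual schema.

    # Illustrative only: a minimal shape for an AI-assisted decision log entry.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionLogEntry:
        case_id: str               # identifier of the complaint or case
        model_version: str         # model version that produced the output
        input_sources: list[str]   # references to the documents considered
        rationale_summary: str     # short human-readable decision rationale
        human_reviewer: str        # reviewer who validated or overrode the output
        reviewer_action: str       # e.g. "approved", "amended", "escalated"
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    entry = DecisionLogEntry(
        case_id="C-1042",
        model_version="frl-draft-2.3.1",
        input_sources=["complaint_form.pdf", "call_transcript.txt"],
        rationale_summary="Draft upholds complaint based on evidenced service delay.",
        human_reviewer="j.smith",
        reviewer_action="approved",
    )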
2.2. Learning and Iteration
AI models are thoroughly reviewed by experts to verify their reliability and integrity, with specific measures in place for re-evaluation of any updates upon model retraining. High-level summary documentation of methodologies may be shared with customers where appropriate.
2.3. Privacy and Fairness
We comply with all relevant data protection laws, and deliver fair outcomes by implementing our data protection by design approach. This includes applying data minimisation, purpose limitation, and lawful processing standards consistent with UK GDPR.
2.4. Bias-mitigation and Non-Discrimination
Our AI systems are regularly assessed for biases and discrimination, ensuring compliance with UK laws and FCA guidance on AI usage. Bias testing includes qualitative and quantitative assessments, along with documented corrective actions.
2.5. Safety and Robustness
CourtCorrect designs and evaluates all AI systems for robustness, resilience to misuse, and reliable performance. Safety-by-design principles apply throughout the model lifecycle, including stress testing, error handling, and appropriate safeguards to minimise foreseeable risks, in alignment with EU AI Act Article 15.
2.6. Risk Classification
CourtCorrect applies a risk-tiering process aligned with DSIT’s cross-sector AI framework, the FCA’s model risk governance expectations, and the EU AI Act risk categories (prohibited, high-risk, limited-risk, minimal-risk). CourtCorrect’s systems are currently assessed as limited-risk because they provide assistive decision-support only and do not make autonomous decisions that create legal or material effects for individuals. Mandatory human review and sign-off applies to all AI-assisted outputs. Risk classification is reassessed following every material model update, functional change, or relevant regulatory development.
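As a minimal sketch of how such a tiering test might be expressed, the check below assigns one of the EU AI Act categories named above based on deliberately simplified criteria; it is an illustration, not the firm’s actual assessment logic.

    # Illustrative only: simplified EU AI Act risk-tiering criteria.
    def classify_risk(autonomous_decisions: bool,
                      legal_or_material_effects: bool,
                      interacts_with_users: bool) -> str:
        if autonomous_decisions and legal_or_material_effects:
            return "high-risk"      # would engage the high-risk regime
        if interacts_with_users:
            return "limited-risk"   # transparency obligations apply
        return "minimal-risk"

    # Assistive decision-support with mandatory human sign-off falls in
    # "limited-risk" under this simplified test.
    print(classify_risk(autonomous_decisions=False,
                        legal_or_material_effects=False,
                        interacts_with_users=True))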
2.7. Human Oversight
All AI systems must remain subject to meaningful, trained human supervision with the ability to override or escalate decisions. Human reviewers must understand system limitations, escalation pathways, and their obligation to independently validate AI-assisted outputs.
Governance
3.1. Board Oversight
The Board of Directors of CourtCorrect Ltd owns this AI Safety Policy and has ultimate oversight of, and governance responsibility for, the company’s efforts in relation to AI Safety. The Board is responsible for approving the risk classification of all AI systems, reviewing changes to risk tiering, and ensuring that governance processes meet FCA model governance expectations, PRA supervisory statements, DSIT principles, and the EU AI Act.
The Board is responsible for:
● reviewing and approving the AI Safety Policy;
● approving the risk classification assigned to each AI system;
● reviewing post-market monitoring findings and material incident reports;
● monitoring legislative and regulatory developments;
● ensuring that governance processes comply with UK DSIT cross-sector principles, UK GDPR, and the EU AI Act requirements applicable to limited-risk AI systems.
AI Safety is reviewed at all monthly Board Meetings, with a specific focus on:
● the company's AI Safety Policy and any changes required;
● the regulatory, legislative and judicial landscape on AI technologies;
● the different risks associated with AI use and risk ownership within the company;
● the logging of action items for individual teams to remediate areas where the risk is deemed material or has changed;
● approval of material model updates and changes to risk classification;
● review of post-market monitoring reports and drift detection summaries.
3.2. AI Governance Responsibilities
CourtCorrect maintains internal governance arrangements to ensure that its AI systems are developed and operated in a responsible and compliant manner. Responsibility for the day-to-day oversight of AI systems is assigned to the Head of AI, who is responsible for ensuring that the systems operate within their intended purpose, that appropriate documentation is maintained (including model versions, testing outcomes and known limitations), and that any material issues such as significant performance changes, anomalies or emerging risks are identified and escalated to senior management. The Head of AI also coordinates with the Data Protection Officer on matters involving personal data and oversees the approval of material model updates to ensure that appropriate human oversight mechanisms remain in place. CourtCorrect may adjust its internal governance structure or designate alternative qualified personnel where necessary to maintain effective oversight and compliance with applicable regulatory standards.
3.3. Compliance Monitoring
CourtCorrect maintains an internal compliance review process as part of its Quality Management System under the EU AI Act. This includes periodic checks of documentation, human oversight procedures, data quality controls and model governance practices, to ensure ongoing alignment with applicable legal and regulatory requirements.
3.4. Review Metrics
The effectiveness of our AI systems is measured by their impact on operational efficiency, the quality of complaint resolution, the ability to derive meaningful insights into complaint root causes, and the welfare of both consumers and employees, assessed through a combination of survey, feedback, accuracy, evaluation and usage data, among other relevant metrics. We review these metrics on an ongoing basis, taking into account both technological and regulatory developments. The metrics feed into the post-market monitoring cycle, supporting identification of drift, performance degradation, emerging risks, or fairness concerns.
3.5. Incident Reporting
Serious incidents identified by any staff member will be reported to the Board and, where required, to relevant regulators (e.g., ICO or FCA) within statutory timeframes. Incident logs are maintained for internal audits and used to update risk assessments and post-market monitoring procedures.
3.6. Post-Market Monitoring
CourtCorrect maintains a post-market monitoring system in accordance with the EU AI Act, which includes ongoing monitoring of model performance, detection of anomalies or drift, review of user feedback, and implementation of corrective actions where necessary. Monitoring activities and findings are documented and periodically reviewed by senior management.
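As a hedged illustration of how drift might be quantified within this monitoring cycle, the sketch below computes a population stability index (PSI) between a baseline and a live window of model outputs. The bucket values and the 0.2 threshold are conventional examples, not figures mandated by this policy.

    # Illustrative drift check: PSI between two bucketed distributions.
    import math

    def psi(baseline: list[float], live: list[float]) -> float:
        """Population stability index over bucket proportions."""
        eps = 1e-6  # avoids log(0) for empty buckets
        return sum(
            (live_p - base_p) * math.log((live_p + eps) / (base_p + eps))
            for base_p, live_p in zip(baseline, live)
        )

    baseline_buckets = [0.25, 0.50, 0.25]  # e.g. share of outputs per confidence band
    live_buckets = [0.10, 0.40, 0.50]

    score = psi(baseline_buckets, live_buckets)
    if score > 0.2:  # > 0.2 is commonly read as significant drift
        print(f"PSI {score:.3f}: investigate and consider corrective action")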
3.7. Model Change-Management
CourtCorrect maintains internal procedures to ensure that any material updates to AI systems are assessed, documented, and monitored for safety and reliability.
Development Framework
4.1. Introduction
This section outlines how we embed our core principles and ethical guidelines throughout the development process, to ensure that our models support accurate, reliable, non-biased decision making. It also sets out our approach to data privacy, security, risk controls, and documentation, as applied throughout model design, testing, deployment and maintenance.
4.2. Assessment Methodology
4.2.1. Interactive Testing
Our policy is to vet all AI systems through a combination of automated evaluation and review by human subject-matter experts. Our experts interact with each scenario as they would in real life, allowing dynamic responses and decision-making pathways to be evaluated and analysed in a setting similar to that experienced by our clients. This process not only provides an essential check on the AI’s outputs but also adds a layer of critical human judgement, ensuring a high degree of accuracy and relevance. It forms part of our pre-deployment quality assurance and ensures that outputs remain contextually appropriate, accurate and safe.
4.2.2. Large Sample Consensus Analysis
In situations where there is no clear right or wrong answer, multiple automated systems and human experts can be used to determine where the consensus lies, and thereby the most likely correct response. Our policy mandates periodic, comprehensive assessments of our AI systems using statistically robust, representative test datasets, to yield meaningful insights into consistency, stability and fairness.
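A minimal sketch of this consensus approach, assuming simple majority voting over a panel of automated and human responses (the quorum value and case labels are illustrative assumptions, not part of our methodology):

    # Illustrative only: take the modal answer across several reviewers
    # and record the level of agreement behind it.
    from collections import Counter

    def consensus(responses: list[str], quorum: float = 0.6):
        counts = Counter(responses)
        answer, votes = counts.most_common(1)[0]
        agreement = votes / len(responses)
        return answer, agreement, agreement >= quorum

    # Example: four of five panel members agree, so the quorum is met.
    answer, agreement, accepted = consensus(
        ["uphold", "uphold", "reject", "uphold", "uphold"]
    )
    print(answer, f"{agreement:.0%}", accepted)   # uphold 80% True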
4.2.3. Close Scrutiny Analysis
We also deploy more intensive scrutiny of individual, randomly selected cases. First, we deploy the AI in a testing environment with real-time oversight from human experts, who can intervene and correct the AI in the event of errors or unexpected behaviour. These results are then analysed and reviewed after the event, with alterations made to source code as appropriate. Models may only be deployed after passing quality verification, bias testing, robustness checks, and formal change-control approval.
4.2.4. Human Oversight During Deployment
CourtCorrect provides introductory onboarding and usage guidance relating to AI-enabled functionality. CourtCorrect does not supervise or manage Customer personnel, and the Customer remains solely responsible for ensuring that individuals overseeing AI-assisted processes are suitably trained, competent, and follow the Customer’s internal escalation pathways.
4.2.5. Change Control and Model Updates
CourtCorrect operates a proportionate, risk-based change-control process. Material model updates, significant performance changes, or modifications affecting risk classification require formal approval, documentation updates, and regression testing before deployment. Minor adjustments follow a streamlined review and testing protocol.
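Purely as an illustration of such a proportionate gate, the sketch below routes a change to a full or streamlined review path; the criteria for what counts as material are simplified assumptions, not the firm’s actual definitions.

    # Illustrative only: routing a model change to a review path.
    def review_path(changes_risk_classification: bool,
                    significant_performance_change: bool) -> str:
        if changes_risk_classification or significant_performance_change:
            # material update: formal approval, documentation updates,
            # and regression testing before deployment
            return "full change-control review"
        # minor adjustment: streamlined review and testing protocol
        return "streamlined review and testing"

    print(review_path(changes_risk_classification=False,
                      significant_performance_change=False))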
4.3. Bias-mitigation and Non-Discrimination
4.3.1. Data Quality
We maintain the highest standards for data quality as part of our general commitment to fairness and equity. All departments involved in the collection, processing, storage and use of data for AI model development adhere to our Data Quality Assurance Policy. We gather information from a wide range of diverse sources, ensuring it represents a broad spectrum of demographics. This diversity is crucial in training AI systems that are more equitable and less likely to perpetuate existing biases.
4.3.2. Ethical Model Framework
To support our efforts in ensuring the quality of data used in our AI systems, we account for the possible prejudices or bias in the way variables are measured, labelled or aggregated. We further ensure the integrity of our data evaluation framework by defining appropriate objectives, and considering risks posed by model deployment. We have implemented ICO best practices to develop our approach, which we continually iterate and improve upon according to industry best practice.
4.3.3. Minimising Dataset Biases
We run targeted bias and fairness tests using both qualitative and quantitative techniques, assessing disparate error rates, distributional fairness, and sensitivity analysis. Findings feed directly into model improvement cycles. Ongoing review of our bias-testing methodology ensures that new risks are identified and mitigated as the system evolves.
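As an illustrative example of one such quantitative test, the sketch below compares error rates across two hypothetical groups and flags gaps above an example tolerance; the group labels, data, and tolerance are assumptions, not regulatory thresholds.

    # Illustrative only: disparate error-rate check across groups.
    def error_rate(outcomes: list[tuple[bool, bool]]) -> float:
        """outcomes: (predicted, actual) pairs for one group."""
        errors = sum(1 for pred, actual in outcomes if pred != actual)
        return errors / len(outcomes)

    by_group = {
        "group_a": [(True, True), (False, False), (True, False), (True, True)],
        "group_b": [(True, True), (False, True), (False, True), (False, False)],
    }

    rates = {group: error_rate(o) for group, o in by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    if gap > 0.10:  # example tolerance; breaches feed the improvement cycle
        print(f"Disparity {gap:.2f} exceeds tolerance: document and remediate")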
4.3.4. Upstream Testing
In addition to internal bias testing that is carried out on all core features, we also benefit from upstream testing carried out directly on both closed- and open-source foundational models we deploy. CourtCorrect conducts supplier due diligence, technical evaluation and contractual review of all third-party AI and cloud providers in line with EU AI Act Arts. 26–28, and UK GDPR. This includes assessment of provider safety practices, security controls, data handling, model behaviour, and update processes. Foundation model risks are documented and monitored throughout the lifecycle.
4.3.5. Engagement With Customers
We pursue a proactive approach to AI Safety, encouraging customers to consider how AI might support minimising bias in their own processes. All AI outputs must be subject to client human review and must not be used as sole decision-making tools. This approach underlines our commitment to ensuring that our AI solutions are part of a positive transformation towards greater justice and equality in decision-making.
4.3.6. Technical Documentation
CourtCorrect maintains technical documentation appropriate to the nature and risk level of each AI system, including a description of the system’s intended purpose, high-level design, key model components, and summary information about training and evaluation data (at an appropriate level of abstraction). Documentation also includes an overview of testing methods, performance characteristics, known limitations and safeguards. Documentation is kept up to date following any material changes to the system and supports CourtCorrect’s internal governance obligations and transparency requirements applicable to limited-risk AI systems. Where relevant and proportionate, CourtCorrect may provide clients with high-level documentation—such as model summaries or system behaviour descriptions—to support their regulatory or operational needs.
4.4. Data Privacy
4.4.1. Data Protection by Design
Data privacy is at the core of our data protection by design approach. We comply with UK GDPR by adopting and developing technical and organisational measures to implement data protection principles effectively. This includes data masking, access controls, encryption, and privacy-preserving design choices.
4.4.2. Data Minimisation
As part of our commitment to the principles of the UK GDPR, we adhere to data minimisation standards to prevent the unnecessary processing and retention of PII. We achieve this by ensuring that only the minimum data necessary for the operation of the tool’s various features is present in the system. This enables us to provide rationales for the processing or storage of PII for each specific feature, as in the following examples (an illustrative sketch follows the list):
● Final Response Letter (FRL) Generation: We require only the complaint details, and PII such as the name of the individual to whom the letter is addressed.
● Vulnerability Flagging: We require information pertaining to customer vulnerability, e.g. life events, capability, resilience.
● Root Cause Analysis: We require complaint details, and vulnerability data where these intersect in contributing to root causes.
While the system operates well with this minimum data, we generally recommend that clients provide the model with a holistic view of any complaint, including any associated evidence. Providing such information further enhances model performance and enables more granular root cause analysis reporting. We work closely with our customers to establish an open line of communication, meaning we can respond to specific concerns and requests regarding data processing. This is part of our commitment to ensuring transparency in respect of the functions and processing of personal data.
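A minimal illustrative sketch of this per-feature minimisation, in which each feature declares the fields it needs and anything else is dropped before processing; the field names mirror the examples above and are illustrative only.

    # Illustrative only: drop any field a feature has not declared it needs.
    ALLOWED_FIELDS = {
        "frl_generation": {"complaint_details", "addressee_name"},
        "vulnerability_flagging": {"life_events", "capability", "resilience"},
        "root_cause_analysis": {"complaint_details", "vulnerability_data"},
    }

    def minimise(feature: str, record: dict) -> dict:
        allowed = ALLOWED_FIELDS[feature]
        return {k: v for k, v in record.items() if k in allowed}

    record = {
        "complaint_details": "...",
        "addressee_name": "A. Client",
        "date_of_birth": "1980-01-01",   # extraneous for this feature
    }
    print(minimise("frl_generation", record))   # date_of_birth is dropped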
4.5. Data Security
4.5.1. Penetration Testing
CourtCorrect engages Cyberis, a specialist cybersecurity consultancy, to conduct annual penetration tests, which simulate real-world cyber attacks. These tests are designed to assess and enhance the company’s readiness to respond to cyber threats and incidents, allowing CourtCorrect to evaluate its defences and response strategies in a controlled environment. This proactive measure identifies vulnerabilities and ensures that the company’s cybersecurity measures remain robust and effective against potential threats.
4.5.2. Automatic Security Checks
We use Snyk to conduct ongoing automatic security checks for technical vulnerabilities. The progression of remediation efforts is monitored and documented in writing, facilitating transparency and oversight. Each identified vulnerability is assigned a deadline for resolution, directly correlating to the level of risk it poses.
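For illustration, a severity-to-deadline mapping of the kind described might look like the sketch below; the day counts are example values rather than CourtCorrect’s actual deadlines, and the code does not represent Snyk’s API.

    # Illustrative only: remediation deadlines correlated to severity.
    from datetime import date, timedelta

    REMEDIATION_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

    def remediation_deadline(severity: str, found: date) -> date:
        return found + timedelta(days=REMEDIATION_DAYS[severity])

    print(remediation_deadline("high", date(2025, 1, 6)))   # 2025-02-05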
4.5.3. Data Loss Prevention (DLP) Methodology
Responsibilities are allocated according to our DLP methodology, delineating clear risk ownership and accountability within the organisation. The Data Security Officer (DSO) oversees the DLP strategy, ensuring compliance with relevant legislation and reporting to executive management. Data Custodians are responsible for implementing DLP measures within their respective domains.
4.5.4. Risk Assessment Protocol
We conduct risk assessments on a regular basis, in alignment with industry best practices and regulatory expectations, to ensure that the risk posture is continuously understood and managed. The assessment takes into account the potential impact on the confidentiality, integrity, and availability of customer data, considering threats, vulnerabilities, and existing control effectiveness. For identified risks, we develop and document action plans, which are then prioritised based on the level of risk and business impact. We subject all third-party providers to risk assessment procedures that match our internal standards, and select third parties carefully to ensure compliance with GDPR and other relevant regulations.
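As a hedged sketch of how identified risks might be scored and prioritised, the example below uses a simple likelihood-times-impact product over confidentiality, integrity and availability; the scales and example entries are assumptions for illustration.

    # Illustrative only: prioritise action plans by likelihood x worst CIA impact.
    def risk_score(likelihood: int, impact_cia: tuple[int, int, int]) -> int:
        """likelihood and each CIA impact on a 1-5 scale."""
        return likelihood * max(impact_cia)

    risks = {
        "third-party data transfer": risk_score(3, (4, 2, 1)),
        "model output logging gap": risk_score(2, (2, 3, 1)),
    }
    for name, score in sorted(risks.items(), key=lambda kv: -kv[1]):
        print(name, score)   # highest score first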
Deployment Framework
5.1. Introduction
To ensure that our AI system is deployed effectively, we have developed a series of best practices and processes that we strictly adhere to. These processes ensure that, once our systems are live, end users are able to obtain the benefits in a safe and secure environment. To deliver the highest deployment standards, we encourage transparent reporting, continuous improvement, and provide users with expert training. Deployment activities must follow a documented change-management process, including approval, versioning, rollback capability, and post-deployment monitoring, in alignment with FCA model governance expectations and EU AI Act lifecycle requirements.
5.2. Staff Bias Training
CourtCorrect invests in ongoing training and awareness programmes for our staff involved in AI development and data handling, to ensure our team is well-equipped with the knowledge to detect and prevent bias effectively. Training also covers human oversight obligations, escalation procedures, limitations of automated systems, and responsible use aligned with EU AI Act Articles 14 and 29.
5.3. Transparent Reporting and Continuous Improvement
End users can communicate their feedback to us via established channels, including email, LiveChat, and functionality built directly into the CourtCorrect platform. We commit to rapid response times where there are reports of perceived bias. This feedback is then fed into a continuous improvement process and reviewed as part of the AI Safety discussion at each monthly Board Meeting. All feedback related to high-risk systems must be logged, categorised, and included in the post-market monitoring file required under the EU AI Act.
5.4. Leading Collaboration and Research Initiatives
CourtCorrect actively engages with the broader AI and ethics community, including collaborations with esteemed institutions like the University of Cambridge. These initiatives focus on developing new methods and technologies to combat AI bias, ensuring our approaches are aligned with cutting-edge developments in the field.
5.5. End-User Training
We provide training and upskilling for customer employees adopting our software, while more broadly advocating for education and career advancement within the evolving AI landscape. We ensure that our customers’ employees can identify and flag issues with the AI system. Our training is designed to support the proficient use of Artificial Intelligence (AI) within our customers’ organisations, ensuring that employees are well versed in both the theoretical and practical aspects of AI applications. To provide a balanced view, training covers both the benefits of AI integration and the limitations and ethical considerations posed by AI. Through this robust safety framework, CourtCorrect aims to align with and shape industry-wide standards and best practices in AI safety and ethics. Training materials include explainability guidance, escalation paths, examples of acceptable and unacceptable reliance on AI, and sector-specific compliance considerations.
5.6. Public Communication and Reporting
Clear, non-technical communication of our AI safety commitment will be maintained, with regular updates provided to the public through our website and newsletters. Our customers can trust that CourtCorrect is dedicated to delivering AI-assisted outcomes that are as fair and unbiased as possible. We continuously refine our systems to uphold the highest standards of ethical AI usage, ensuring that protections against discrimination and bias are deeply embedded within our technology.
Miscellaneous Provisions
6.1. Regulatory Cooperation
In the event of regulatory enquiries or audits concerning AI systems supplied by CourtCorrect, CourtCorrect will coordinate regulatory communications where relevant and may request that clients notify us of any enquiries involving our systems to ensure accurate and consistent information is provided. This provision does not create contractual obligations, which are governed by the applicable service agreement.
6.2. Limitations of AI Outputs
CourtCorrect uses rigorous testing and validation processes; however, AI outputs may contain uncertainties, and all outputs must remain subject to appropriate human judgement, review, and decision-making. Any warranties, limitations of liability, or contractual terms are governed exclusively by the applicable service agreement and do not form part of this policy.
6.3. Data Sharing and Indemnity
Clients are solely responsible for ensuring that any data provided to CourtCorrect for AI model training or operation complies with applicable data protection laws, including the UK GDPR. CourtCorrect may reject, delete, or request clarification regarding data that appears inaccurate, excessive, or non-compliant with data-protection obligations. Contractual indemnities or obligations relating to data submission are governed by the applicable service agreement.
6.4. System Updates and Modification
CourtCorrect may need to update, retrain, or modify AI models and associated frameworks at any time to improve performance, safety, or compliance. Clients will be informed of material changes where relevant to system behaviour, integration, or output characteristics.
6.5. Policy Review
This policy will be reviewed at monthly Board Meetings, or more frequently when necessary to ensure ongoing alignment with UK regulatory principles, EU AI Act developments, and industry best practice. CourtCorrect also commits to aligning with the EU AI Act and will monitor relevant implementation guidance as adopted across jurisdictions.
Contact Information
Questions regarding this policy or its provisions should be directed to:
CourtCorrect Ltd.
33 Percy Street, W1T 2DF, London
hello@courtcorrect.com
(+44) 0330 1332 411