ICT Control Testing Methodology

Regulatory Requirement

Financial entities shall implement a digital operational resilience testing programme that includes defined methodologies, practices, procedures, and tools for the testing of ICT controls. The programme must provide assurance over the effectiveness of ICT safeguards, detect weaknesses and deficiencies, and ensure that remediation actions are validated and closed. The methodology must govern the entire testing lifecycle: planning, execution, issue management, remediation, and internal validation.


Core Components of the ICT Control Testing Methodology

Integration into the Digital Operational Resilience Testing Programme

The methodology forms the foundational layer of the testing programme and must:

  • Establish a structured, risk-based approach to testing ICT controls across systems, processes, environments and third-party dependencies,
  • Apply the full suite of tests listed in Article 25(1) DORA, including vulnerability scans, network security assessments, source code analysis (where feasible), scenario-based testing, end-to-end testing and penetration testing,
  • Ensure that test selection and depth are proportionate to criticality, data classification levels, system importance and business impact (see the sketch below).
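
A minimal sketch of proportionate test selection, assuming a hypothetical criticality flag and data-classification scale (neither is prescribed by DORA); the test-suite tiers are illustrative placeholders:

    # Illustrative only: tiers, field names and classification labels are
    # assumptions, not regulatory values.
    from dataclasses import dataclass

    BASELINE_TESTS = {"vulnerability_scan", "control_design_review"}
    ENHANCED_TESTS = BASELINE_TESTS | {
        "network_security_assessment",
        "scenario_based_testing",
        "end_to_end_testing",
    }
    FULL_TESTS = ENHANCED_TESTS | {"penetration_test", "source_code_analysis"}

    @dataclass
    class IctAsset:
        name: str
        supports_critical_function: bool  # per the entity's mapping of critical or important functions
        data_classification: str          # e.g. "public", "internal", "confidential"

    def select_test_suite(asset: IctAsset) -> set[str]:
        """Return a test suite proportionate to criticality and data sensitivity."""
        if asset.supports_critical_function:
            return FULL_TESTS
        if asset.data_classification == "confidential":
            return ENHANCED_TESTS
        return BASELINE_TESTS

    print(select_test_suite(IctAsset("payments-gateway", True, "confidential")))

Deeper tiers are supersets of shallower ones, so raising an asset's criticality can only add tests, never silently drop them.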

Testing Scoping Principles

The methodology shall define how test scopes are determined, ensuring that:

  • ICT assets and functions supporting critical or important functions receive enhanced scrutiny,
  • All relevant ICT controls (preventive, detective, corrective) are assessed at least once within a defined rolling cycle (see the coverage sketch after this list),
  • Dependencies on ICT third-party service providers are included, especially where contractual arrangements underpin critical functionality,
  • Test coverage aligns with BIA outputs, impact tolerances and ICT risk assessment results.
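
Rolling-cycle coverage can be tracked mechanically. A minimal sketch, assuming hypothetical cycle lengths per control criticality (the actual frequencies are for the entity to calibrate):

    from datetime import date, timedelta

    # Hypothetical cycle lengths in days; not regulatory values.
    CYCLE_DAYS = {"critical": 180, "standard": 365}

    def controls_due(last_tested: dict[str, date],
                     criticality: dict[str, str],
                     today: date) -> list[str]:
        """Return control IDs whose rolling-cycle deadline has passed."""
        due = []
        for control_id, tested_on in last_tested.items():
            cycle = timedelta(days=CYCLE_DAYS[criticality[control_id]])
            if today - tested_on > cycle:
                due.append(control_id)
        return due

    print(controls_due(
        {"AC-01": date(2024, 1, 10), "BC-07": date(2024, 11, 2)},
        {"AC-01": "critical", "BC-07": "standard"},
        today=date(2024, 12, 1),
    ))  # ['AC-01']: overdue against its 180-day cycle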

Testing Techniques and Methods (Article 24(2) DORA)

The methodology must specify the structured use of:

  • Control design effectiveness testing,
  • Operating effectiveness testing,
  • Automated and manual assessment techniques,
  • Red-team style and TLPT-aligned methods where applicable,
  • Simulation and scenario-based testing (cyber-attack, failover, data corruption, capacity degradation, behavioural anomalies),
  • End-to-end transaction flow validation across systems and service providers,
  • Performance and load testing,
  • Compatibility testing following ICT change deployments.

Each method must include documented objectives, procedures, entry/exit criteria, resource requirements and expected outputs.
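
One way to make these documentation requirements enforceable is to capture each method as a structured record. A sketch using a Python dataclass; the field names mirror the requirements above but are otherwise an assumption, not a prescribed schema:

    from dataclasses import dataclass

    @dataclass
    class TestMethodSpec:
        name: str                         # e.g. "failover scenario test"
        objectives: list[str]             # the assurance the test is meant to provide
        procedures: list[str]             # ordered execution steps
        entry_criteria: list[str]         # preconditions before execution may start
        exit_criteria: list[str]          # conditions under which the test is complete
        resource_requirements: list[str]  # tooling, skills, environments
        expected_outputs: list[str]       # reports, evidence, raw results

    spec = TestMethodSpec(
        name="failover scenario test",
        objectives=["Confirm recovery within the service's impact tolerance"],
        procedures=["Simulate primary-site loss", "Observe automated failover"],
        entry_criteria=["Approved test window", "Rollback plan in place"],
        exit_criteria=["Service restored on secondary site", "Results recorded"],
        resource_requirements=["DR environment", "Operations on-call engineer"],
        expected_outputs=["Execution log", "Recovery-time measurement", "Report"],
    )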

Classification and Prioritisation of Identified Issues (Article 24(5) DORA)

The methodology must define a uniform scheme for classifying test findings, including:

  • Severity levels (e.g., critical, high, medium, low),
  • Business impact, likelihood and exploitability factors,
  • Alignment with risk tolerance and impact tolerances under Article 6(8),
  • Mandatory escalation rules for critical weaknesses and control failures affecting critical or important functions.

Prioritisation rules must ensure that:

  • Critical issues require immediate remediation or compensating controls,
  • High-risk issues receive time-bound mitigation,
  • Medium/low-risk issues follow standard remediation cycles but require closure evidence.

Classification must be applied consistently across the ICT, cyber-security, operational risk and internal audit functions.
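
A minimal sketch of a uniform classification and prioritisation scheme; the scoring weights, thresholds and remediation windows below are hypothetical placeholders that each entity would calibrate against its own risk tolerance and impact tolerances:

    from dataclasses import dataclass

    # Hypothetical remediation windows per severity, in days (0 = immediate).
    REMEDIATION_DAYS = {"critical": 0, "high": 30, "medium": 90, "low": 180}

    @dataclass
    class Finding:
        impact: int          # 1 (negligible) .. 5 (severe)
        likelihood: int      # 1 .. 5
        exploitability: int  # 1 .. 5
        affects_critical_function: bool

    def classify(f: Finding) -> str:
        score = f.impact * f.likelihood + f.exploitability
        if score >= 20 or (f.affects_critical_function and f.impact >= 4):
            return "critical"
        if score >= 12:
            return "high"
        if score >= 6:
            return "medium"
        return "low"

    def prioritise(f: Finding) -> dict:
        severity = classify(f)
        return {
            "severity": severity,
            "remediation_days": REMEDIATION_DAYS[severity],
            # Critical weaknesses affecting critical or important functions escalate.
            "escalate": severity == "critical" and f.affects_critical_function,
        }

    print(prioritise(Finding(impact=5, likelihood=4, exploitability=3,
                             affects_critical_function=True)))
    # {'severity': 'critical', 'remediation_days': 0, 'escalate': True}

Because one function is the single source of severity, every assurance function works from the same classification.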

Remediation Governance

The methodology must prescribe:

  • How remediation plans are created, approved, tracked and verified,
  • Documented root-cause analysis requirements for all significant weaknesses,
  • Clear ownership for remediation actions,
  • Integration with ICT change management procedures,
  • Evidence standards for closure (e.g., screenshots, configuration logs, deployment evidence, test results),
  • Senior management oversight for overdue or unremediated critical items.
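
A minimal remediation-tracking record sketching these governance points; the fields and the escalation rule are assumptions, not a fixed schema:

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class RemediationItem:
        finding_id: str
        owner: str                        # named accountable individual
        severity: str
        due_date: date
        root_cause: str | None = None     # mandatory for significant weaknesses
        change_ticket: str | None = None  # link into ICT change management
        closure_evidence: list[str] = field(default_factory=list)
        closed: bool = False

        def needs_senior_escalation(self, today: date) -> bool:
            """Overdue, unremediated critical items go to senior management."""
            return (not self.closed and self.severity == "critical"
                    and today > self.due_date)

    item = RemediationItem("F-2024-017", owner="app-team-lead",
                           severity="critical", due_date=date(2024, 6, 1))
    print(item.needs_senior_escalation(date(2024, 7, 1)))  # True: overdue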

Internal Validation Methodologies (Article 24(5) DORA)

The methodology must include independent internal validation to confirm that remediation has fully addressed the identified issues. Validation must:

  • Be performed by independent functions (risk management, internal audit, or second line),
  • Use a structured retest approach replicating original test conditions,
  • Confirm effectiveness of new or corrected controls,
  • Document validation outcomes and residual risks,
  • Ensure that issues are not closed without evidence-based assurance.

Validation rules must explicitly prevent:

  • Self-certification,
  • Closure without testing,
  • Acceptance of incomplete mitigations.
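
These prohibitions translate directly into a closure gate. A sketch, assuming a simple evidence model (the parameter names are illustrative):

    def validate_closure(finding_owner: str,
                         validator: str,
                         retest_evidence: list[str],
                         retest_passed: bool) -> tuple[bool, str]:
        """Return (approved, reason) for a closure request."""
        if validator == finding_owner:
            return False, "rejected: self-certification is not permitted"
        if not retest_evidence:
            return False, "rejected: no retest evidence (closure without testing)"
        if not retest_passed:
            return False, "rejected: retest failed, mitigation incomplete"
        return True, "approved: evidence-based closure confirmed"

    print(validate_closure("app-team-lead", "internal-audit",
                           ["retest-report-F-2024-017.pdf"], retest_passed=True))
    # (True, 'approved: evidence-based closure confirmed')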

Documentation Standards

The methodology must require:

  • Detailed test plans, scripts, steps and acceptance criteria,
  • Evidence of test execution,
  • Recording of deviations, defects and exceptions,
  • Formal test reports summarising results, findings, severity, recommendations and residual risks,
  • Maintenance of an audit trail aligned with the ICT risk management framework and the Register of Information (RoI) for ICT third-party services.
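
A test-execution record that carries these artefacts end to end might look as follows; the structure is a sketch under assumed naming conventions, not a mandated template:

    from dataclasses import dataclass, field

    @dataclass
    class TestExecutionRecord:
        test_plan_id: str
        executed_steps: list[str]   # evidence of test execution
        deviations: list[str]       # departures from the approved plan
        defects: list[str]          # defects and exceptions recorded
        findings: list[dict]        # each with severity and recommendation
        residual_risks: list[str]
        evidence_refs: list[str] = field(default_factory=list)  # audit-trail links

    record = TestExecutionRecord(
        test_plan_id="TP-2024-031",
        executed_steps=["step 1: baseline scan", "step 2: verify alerting"],
        deviations=["test window shortened by one hour"],
        defects=["alert routing misconfigured for secondary region"],
        findings=[{"id": "F-2024-044", "severity": "medium",
                   "recommendation": "correct alert routing rules"}],
        residual_risks=["single-region alerting until the fix is deployed"],
        evidence_refs=["scan-output-TP-2024-031.json"],
    )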

Linkage to ICT Governance and Risk Management

The methodology must integrate with:

  • ICT risk assessment processes (Article 3 RTS RMF),
  • ICT change management (Article 17 RTS RMF),
  • ICT business continuity and ICT response/recovery plans (Articles 25–26 RTS RMF),
  • Third-party risk and concentration risk management (Article 28 DORA and RTS TPPol),
  • ICT internal audit and follow-up processes (Article 6(6) and (7) DORA).

Tests must be updated in response to:

  • Threat-landscape changes,
  • New vulnerabilities,
  • Major ICT incidents,
  • Changes in technology, systems or architecture,
  • Supervisory instructions.

Continuous Improvement and Lessons Learned

The methodology must establish mechanisms to ensure:

  • Systematic review of testing effectiveness after each major cycle,
  • Incorporation of lessons from incidents, audits, TLPT exercises and threat intelligence,
  • Updating of methodologies to reflect leading practices and new regulatory expectations.

Purpose and Supervisory Significance

Supervisors assess whether the ICT control testing methodology:

  • Provides assurance over the effectiveness of controls protecting critical or important functions,
  • Is risk-based, comprehensive and up-to-date,
  • Effectively identifies, classifies and resolves weaknesses,
  • Ensures independent verification prior to issue closure,
  • Demonstrates a fully operational digital operational resilience testing programme.

Weak testing methodologies are a primary cause of supervisory findings under DORA.