AI Governance & the Journey To ISO 42001

Artificial Intelligence (AI) is advancing fast and will likely be the most powerful transformative technology of this generation. Common sense tells us there will be risks to be mitigated and regulations to be complied with.
AI Governance refers to the frameworks, policies, processes and controls for ensuring that AI is developed and used ethically, transparently, and securely. It’s an important consideration if your company intends to utilise AI services internally or if AI will be integrated into your company’s products and services.
In many sectors, customers and partners are now demanding evidence of AI governance practices. If your pre-sales process typically includes requests to complete security assessment questionnaires, then a similar assessment will be required for any products or services that include AI capabilities.
Let’s break this down into two phases: implementing basic AI governance, and advancing to ISO 42001 certification.
Implementing Basic AI Governance
You can implement and run an effective AI governance programme without ever getting certified. However, if you’re going to invest the time and effort, it makes sense to align with a standard, so that you have the option of certification in the future, without too much re-work.
Here are some steps to getting started:
Leadership Alignment
Any governance initiative needs C-suite buy-in to be successful, ideally with a nominated C-suite project sponsor or AI Officer. There will be a need for budget approval, resource allocation and change management support, so the mandate must come from the top.
Publish an AI Policy
Create and publish an AI Policy for your organisation. There are templates available online that can serve as a starting point. These will cover ethical principles and similar topics; however, you should add sections that are specific to your organisation. (Hint: use an AI chatbot to refine it!) Communicate to the organisation that the policy is in effect, either by email or by presenting it at an all-hands meeting. Review the policy regularly and publish updated versions when changes are made.
Establish an AI Governance Group
Set up an AI Governance Group / Board / Committee. This can start small and grow later. Typical roles to include are: the business sponsor / AI Officer, head of security, head of learning & development, a legal/regulatory representative, and head/director of engineering. Write up a charter to define ways of working, meet once a month and record the meeting minutes.
Update Existing Processes
Most organisations have some level of GRC (Governance, Risk and Compliance) programme. This typically includes policies and processes for managing risks from security incidents, compliance with regulations etc. Existing policies and processes related to GRC include: Risk Management, Vendor/Supplier Management, Incident Management, Change Management, Software Development Lifecycle etc. All of these should be updated to take AI considerations into account.
This is generally a light touch, for example:
- Risk Management will need to call out the types of AI risks that could affect your business, such as AI hallucinations, bias, and model drift.
- Vendor/Supplier assessments should be updated to check if the vendor will use your data to train their AI models.
Conduct AI Impact Assessments
If your company is processing sensitive data then you will likely be familiar with a DPIA (Data Protection Impact Assessment). DPIAs help to assess the impact of data processing activities on individuals. An Artificial Intelligence Impact Assessment, or AIIA, is a similar concept. It contains a list of questions that help to identify risks associated with AI ethics, bias, regulations etc.
Create an AIIA template. Just like the AI policy, there are templates available online as a starting point. Add sections and questions that are specific to your organisation and use an AI chatbot to refine it!
Conduct some AI impact assessments, in collaboration with colleagues from your AI Governance Group. Start with tools and services that your teams are already using.
Publish AI Usage Guidelines
Many employees aren’t sure which AI services they are or are not allowed to use. In the absence of clear guidance, some users will proceed and use one of the popular services. This could be a risk to your organisation, as sensitive data might be entered into 3rd party services and then used to train their next AI model.
Rather than saying no to people that want to use AI, it’s best to guide them in a safe direction. Publish an internal guide explaining which AI tools or services can be used for each use case.
For example, if your organisation already has an enterprise license with a vendor that provides AI-enabled services, then list that service and its typical use cases.
Advancing to ISO 42001 Certification
What is ISO/IEC 42001?
ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organisations. An AIMS is composed of policies, procedures, and controls to manage AI responsibly and mitigate reputational or legal risks.
So why get certified?
Certification promotes ethical AI use and signals trustworthiness, which could differentiate your organisation from competitors. ISO 42001 processes also facilitate compliance with the EU AI Act and other emerging AI regulations.
So what’s involved?
This will depend on the maturity of your existing GRC programme, and whether you have existing certifications or not.
Where should I start?
As a first step, purchase the ISO 42001 standard. While not required in the early stages, I would recommend buying the standard and reading through the requirements so that you can align subsequent efforts with the standard and avoid re-work later on.
If you're using a 3rd party GRC platform, they may offer a module for ISO 42001. This will typically include policy templates and mappings to the requirements and controls.
We already have existing certifications
If you have an existing GRC programme and are already certified with other standards such as ISO 27001 or SOC 2, then you have a head start. Many of the required policies and processes are already in place, although they will need to be extended as mentioned above. If you’re familiar with the process of getting certified and have the capacity to do it, then it is possible to build out a full AIMS without external assistance.
Create a project plan based on the clauses in the ISO 42001 standard, identify all policies, processes, and controls that need to be reviewed or implemented, and then work through the plan. The time frame to be ready for an external audit will depend on many factors, but is typically somewhere between six and twelve months.
We don’t have any existing certifications
If you don’t have a mature GRC programme or any existing certifications, then you will have a lot more work to do. I would recommend engaging with a third party consultancy to guide you through the process. The time frame to be ready for an external audit will depend on many factors, but is typically somewhere between twelve and eighteen months.
Engage with a Certifying Body
If you want to get certified you need to engage with an accredited certification body. This step might seem obvious but it’s called out here because it needs to be planned for.
You will need to understand the costs (request a quote), get budget approval and schedule your audit, ensuring that all required stakeholders will be available when it takes place.
Communication
Communicate widely across your organisation, using all the usual channels; email, messaging platforms, all-hands meetings etc. Make it clear what policies and processes are in place, and most importantly, why it matters to your organisation.
Driving AI Adoption
Finally, if your organisation is having challenges adopting AI, this may be of interest: Driving AI Adoption, From Resistance to Results.