ISO 42001 - Artificial Intelligence.

Reviewed: 27th September 2024.

ISO/IEC 42001 is an international standard, published in December 2023, that defines requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS). The goal is to provide organizations with guidelines to ensure that AI systems are trustworthy, ethical, safe, and compliant with legal and regulatory frameworks.

The standard was developed by the ISO/IEC JTC 1/SC 42 committee, which focuses on AI and related technologies. As AI becomes more integrated into various sectors, there is a growing need for a standard to manage AI’s risks and ensure responsible usage. ISO/IEC 42001 addresses these needs by establishing a systematic approach for organizations to manage the entire lifecycle of their AI systems, from design and development to deployment and monitoring.

Key Areas of ISO/IEC 42001:

  1. Governance of AI Systems: It provides guidelines for the organizational structure, roles, responsibilities, and decision-making processes related to AI management.

  2. Risk Management: It offers frameworks to identify, assess, and mitigate risks associated with AI systems, particularly regarding ethical concerns, security, and data privacy.

  3. Compliance and Accountability: It emphasizes the need for compliance with legal, regulatory, and societal expectations, ensuring AI systems are developed and deployed responsibly.

  4. Trustworthiness: The standard focuses on ensuring that AI systems are transparent, explainable, and secure, thus building user trust.

  5. Continuous Improvement: Like other ISO management systems (e.g., ISO 9001 for quality management), ISO/IEC 42001 encourages organizations to continuously monitor and improve their AI systems and management processes.

Adopting the standard helps organizations navigate the complex landscape of AI, ensuring that they use these technologies in a responsible, accountable, and ethically sound manner.

Why is ISO/IEC 42001 important?

ISO/IEC 42001 is the world’s first AI management system standard, providing valuable guidance for this rapidly changing field of technology. It addresses the unique challenges AI poses, such as ethical considerations, transparency, and continuous learning. For organizations, it sets out a structured way to manage risks and opportunities associated with AI, balancing innovation with governance.

ISO 42001 Clauses

ISO/IEC 42001 follows the harmonized structure common to other management system standards such as ISO 9001 (Quality Management System) and ISO/IEC 27001 (Information Security Management System). Here is a summary of its ten clauses:

1. Scope

Defines the scope and application of the standard, specifying the types of organizations it applies to and the processes it covers in managing AI systems.

2. Normative References

Lists references to other standards or documents that are essential for the application of ISO/IEC 42001. For instance, this may include references to ISO/IEC 27001 (Information Security) or ISO 31000 (Risk Management).

3. Terms and Definitions

Provides a glossary of key terms used throughout the standard, particularly for AI-related concepts like “transparency,” “explainability,” and “ethics.”

4. Context of the Organization

Organizations need to understand the internal and external factors affecting their AI operations. This clause requires organizations to:

  • Identify the needs and expectations of stakeholders (customers, regulators, society).
  • Determine the scope of their AI Management System.
  • Understand the legal, regulatory, and societal environment for AI use.

5. Leadership

This clause outlines the role of top management in establishing and supporting an AI management system, including:

  • Assigning roles and responsibilities for AI governance.
  • Ensuring AI principles (like fairness, transparency, and accountability) are embedded in the organization.
  • Communicating the importance of ethical AI usage and ensuring alignment with the organization’s objectives.

6. Planning

The planning clause focuses on assessing and mitigating risks related to AI systems. Organizations are required to:

  • Identify AI-related risks and opportunities.
  • Set objectives for AI system performance, ethics, and compliance.
  • Plan actions to address the risks and continuously improve AI management.
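To make the planning activities concrete, a risk register scored by likelihood and impact is one common technique (the standard does not mandate any particular scoring method). The following Python sketch is purely illustrative; the risk entries, 1–5 scales, and treatments are hypothetical examples, not content from the standard:

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a hypothetical AI risk register."""
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    treatment: str   # planned mitigation

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring; other schemes are possible
        return self.likelihood * self.impact

risks = [
    AIRisk("Training data contains demographic bias", 4, 4,
           "Bias audit before each release"),
    AIRisk("Model outputs leak personal data", 2, 5,
           "Output filtering and privacy review"),
    AIRisk("Model accuracy degrades after deployment", 3, 3,
           "Monthly drift monitoring"),
]

# Rank risks so the highest-scoring ones are treated first
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"[{r.score:2d}] {r.description} -> {r.treatment}")
```

Ranking by a single numeric score keeps prioritisation transparent and auditable, which fits the documentation expectations of a management system.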

7. Support

This clause covers the resources, skills, and infrastructure needed to implement and maintain the AI Management System, including:

  • Competency requirements for staff managing AI systems.
  • Communication channels for internal and external stakeholders.
  • Documented information about AI policies, processes, and controls.

8. Operation

The operation clause addresses life cycle management of AI systems, including:

  • Procedures for the design, development, testing, and deployment of AI.
  • Ensuring that AI models and algorithms align with ethical and regulatory guidelines.
  • Monitoring and maintaining AI systems to ensure they perform as intended and are explainable, secure, and transparent.
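One practical way to support documented, traceable life cycle management is to keep a structured record for each AI system. The field names, system name, and values in this sketch are illustrative assumptions, not terminology from the standard:

```python
import json
from datetime import date

# Hypothetical documented record for one AI system (fields are illustrative)
model_record = {
    "name": "credit-scoring-v3",          # example system identifier
    "owner": "AI Governance Team",
    "intended_use": "Pre-screening of loan applications",
    "lifecycle_stage": "deployed",        # design | development | testing | deployed | retired
    "training_data": "internal-loans-2019-2023",
    "last_review": date(2024, 9, 1).isoformat(),
    "controls": [
        "Human review of all declined applications",
        "Quarterly fairness audit",
    ],
}

# Keeping such records as versioned, documented information supports
# traceability and explainability across the system's life cycle
print(json.dumps(model_record, indent=2))
```

Storing these records in version control gives an audit trail of how each system's status and controls changed over time.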

9. Performance Evaluation

This section covers how organizations should monitor, measure, and analyze the performance of their AI systems. It includes:

  • Internal audits of the AI Management System.
  • Regular reviews of AI systems to ensure compliance with objectives and regulations.
  • Mechanisms to assess whether AI models remain reliable, transparent, and explainable over time.
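A minimal sketch of the kind of ongoing reliability check this clause anticipates: comparing a monitored metric against the baseline recorded at approval time, and flagging the system for review when it drifts too far. The threshold and accuracy figures are hypothetical:

```python
def model_still_within_tolerance(baseline: float,
                                 current: float,
                                 max_drop: float = 0.05) -> bool:
    """Return True while a monitored metric (e.g. accuracy) stays
    within max_drop of the baseline recorded at approval time."""
    return (baseline - current) <= max_drop

# Illustrative periodic check feeding a management review
baseline_accuracy = 0.91
monthly_accuracy = [0.90, 0.89, 0.84]

for month, acc in enumerate(monthly_accuracy, start=1):
    status = "OK" if model_still_within_tolerance(baseline_accuracy, acc) \
        else "REVIEW REQUIRED"
    print(f"Month {month}: accuracy={acc:.2f} -> {status}")
```

The same pattern extends to fairness metrics, latency, or any other indicator an organization commits to in its AI objectives.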

10. Improvement

This final clause emphasizes continuous improvement, requiring organizations to:

  • Take corrective actions when AI systems or management processes do not meet objectives.
  • Address incidents related to AI misuse, bias, or failure.
  • Identify areas for innovation and improvement in AI systems and management processes.

This structure mirrors other ISO management system standards and guides organizations in ensuring that AI systems are responsibly managed, trustworthy, compliant, and continuously improved.

Everything you need for ISO 42001

Our AIMS (Artificial Intelligence Management System) is built with everything you need to show that you’re dedicated to secure AI management. It’s ready to use straight out of the box – no training required!

How it works

ISMS.online comes preconfigured with everything you need, including pre-built policies, templates, and controls, to help you achieve and maintain compliance with ISO 42001, all in one place.

  • 70% of the work done for you thanks to ISMS.online HeadStart
  • Access pre-prepared policies for easy adoption, adaptation, and addition, expediting your policy development process
  • Receive expert guidance and support from our team whenever necessary, ensuring a smooth implementation journey

Easy Assets, Risks and Controls

Utilise the built-in Risk Map to identify and prioritise risks and opportunities based on likelihood and impact. Select mitigation strategies for risks and integrate them into your decision-making processes.

  • Select what you need from the Asset and Risk banks
  • Quickly create your asset inventory and risk map
  • Get the controls you need suggested for you

How does it compare to ISO/IEC 23894:2023?

The key difference between ISO 42001 and ISO/IEC 23894 is scope. ISO/IEC 23894 adapts the generic risk management guidance of ISO 31000:2018 to AI. ISO 42001 focuses on how an organization designs, adopts, and documents internal operating procedures to manage its AI systems.

While ISO 42001 does address risk management, it does so within the broader context of the policies and procedures an organization puts in place to manage AI development and deployment, as well as internal AI use cases. For instance, ISO 42001 requires organizations to conduct regular risk assessments and provides implementation guidance on how to document AI risks effectively. ISO/IEC 23894, by contrast, offers more detailed guidance on how to design an effective risk assessment.