Definition of AI Governance

What is AI governance?

The governance of artificial intelligence systems is not a standalone discipline; rather, it is an integral component of overall corporate governance. What distinguishes artificial intelligence (AI) governance from broader corporate governance is the set of resources being leveraged, for example computing technologies, energy and personal data, to achieve business objectives and meet stakeholder expectations. Both encompass requirements for accountability, transparency, fairness, privacy, ethical behaviour, risk management, data integrity, cybersecurity and social responsibility.

The ISO/IEC 38500 standard, "Corporate governance of information technology", adapted for the governance implications of the use of artificial intelligence by organisations, can be rewritten to define artificial intelligence governance as "the system by which the current and future use of AI is directed and controlled. Corporate governance of artificial intelligence systems involves evaluating and directing the use of artificial intelligence systems to support the organisation and monitoring this use to achieve plans. It includes the strategy and policies for using artificial intelligence systems within an organisation".

Various descriptions of AI governance exist; certification criteria help make them operational. Such criteria make it as easy as possible for controllers, processors, manufacturers, providers, distributors, users, and governance, risk and compliance professionals to determine which requirements they must meet, to recommend or make improvements, and to verify that processing activities do in fact meet the requirements. Certification schemes developed in conjunction with the European supervisory authorities provide clarity, and their criteria reflect the precise requirements and principles. Assertions of conformity with the certification criteria require supporting documentation and evidence that can be used to demonstrate compliance.

Domain 1: Purpose for Artificial Intelligence Systems

Description: The governing body shall ensure that the use of artificial intelligence systems is aligned with the organisation's reason for existence and an articulated purpose (Article 8, EU AI Act). The parameters for pursuing this purpose shall be based on a clearly defined set of organisational values and shall define the organisation's intentions towards the natural environment, society and the system's stakeholders. These values include responsibility, trustworthiness, human control, transparency and democracy, together with a commitment to process personal data on a legal basis, lawfully, fairly and transparently in accordance with applicable legislation; to collect it only for specified, explicit and legitimate purposes; and not to further process it in a way that is incompatible with those purposes or beyond the affected individuals' reasonable expectations.

Domain Objectives:

  • Create clarity for the artificial intelligence system’s stakeholders on the organisation’s intentions, behaviours, decisions and activities concerning the stakeholders.
  • Provide stakeholders with an understanding of the artificial intelligence system's liable operator.
  • Create a point of reference for efficient and agile artificial intelligence system-related decision-making.
  • Provide a framework within which artificial intelligence system plans are created and executed in a focused manner, avoiding unnecessary distractions.
  • Enact organisational values, which provide the foundation for the organisation’s culture affecting the use of artificial intelligence systems.
  • Provide the governing body with a basis on which to define the artificial intelligence system's value that the organisation aims to generate for its stakeholders and the manner for doing so.
  • Provide a basis on which stakeholders can assess the artificial intelligence system’s outcomes and the achievement of stated objectives.

Key Practices:

  • Understand the significant strategic benefits of artificial intelligence systems to the organisation.
  • Understand the significant risks to the organisation and the potential for harm to its stakeholders.
  • Recognise there are additional obligations for the organisation when using artificial intelligence systems.
  • Document the artificial intelligence system's purpose and the scope of its activities.
  • Assess the positive and negative impacts of the artificial intelligence system and its purpose against the governing body's risk appetite.
  • Define organisational values to guide the development and use of artificial intelligence systems.
  • Commit to the lawful purposes for the artificial intelligence system and proposed values.
  • Identify the operator (i.e. provider, user, authorised representative, importer or distributor) responsible for placing on the market, making available, or putting into service the artificial intelligence system, and that entity's organisational values and expected ethical behaviour.

 

Domain 2: Value Generation

Description: The governing body shall optimise the value to the stakeholders from investments in business processes, artificial intelligence systems and assets. It should clarify the artificial intelligence system's value generation objectives such that they fulfil the system and organisational purpose, in accordance with the organisational values and the organisation's intentions towards the natural environment and the social and economic context within which it operates. It shall monitor that these value creation objectives are met, using a clear and transparent value generation model that defines, creates, delivers and sustains appropriate value.

Domain objectives:

  • Define value generation objectives such that they fulfil the organisational purpose.
  • Set the parameters within which the value generation objectives are to be achieved.
  • Ensure value generation objectives are delivered.
  • Ensure that value-generation objectives remain viable (protected) over time.

Key Tasks:

  • Set artificial intelligence system objectives that will result in the realisation of the organisational purpose.
  • Align artificial intelligence system value generation with organisational purpose.
  • Enable artificial intelligence systems to add value to the business and mitigate risks.
  • Incorporate artificial intelligence systems in business processes in a secure and sustainable manner.
  • Ensure the artificial intelligence system value proposition is proportional to the level of investment.
  • Establish an interactive control framework for the delivery of value from artificial intelligence system investments.
  • Ensure artificial intelligence system benefit realisation for all stakeholders.

 

Domain 3: Strategy

Description: The governing body shall direct and engage with the artificial intelligence system strategy, in accordance with the value generation model, to achieve the artificial intelligence system purpose, fulfil its regulatory compliance obligations and enable data subject rights. 

Domain objectives:

  • Provide strategic direction, and set the strategic outcomes expected from the artificial intelligence system.
  • Establish governance policies.
  • Engage and steer the strategy.
  • Implement privacy by design and default.

Key Tasks:

  • Align artificial intelligence system strategy with organisational purpose.
  • Define strategic outcomes for artificial intelligence systems.
  • Establish governance policies to guide the strategy development.
  • Engage, directly or through delegation, with strategic planning for artificial intelligence systems.
  • Review and approve plans for the translation of business requirements into efficient and effective artificial intelligence systems.
  • Ensure artificial intelligence system activities align with natural environmental sustainability objectives.
  • Oversee the implementation of the strategic plans and ensure that they can deliver the agreed strategic outcomes.
  • Require that all parties apply good artificial intelligence system governance policies, for example:
    • Fairness: Ensuring AI systems are ethical, free from bias and prejudice, and do not use protected attributes (see the sketch after this list).
    • Resilience: Understanding the technical robustness and compliance of AI, its agility across platforms, and resistance against bad actors.
    • Integrity: Learning about algorithm integrity and data validity, including lineage and appropriateness of data usage.
    • Explainability: Gaining transparency through understanding the algorithmic decision-making process in simple terms.
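
One way the "Fairness" policy above could be made measurable is sketched below: a simple demographic parity check over a model's logged decisions. The decision log, the protected-attribute groups and the 0.10 tolerance are illustrative assumptions, not values prescribed by the EU AI Act or any named standard.

    # Minimal sketch (Python): demographic parity difference across groups of a
    # protected attribute. All data and the threshold below are hypothetical.
    from collections import defaultdict

    def demographic_parity_difference(decisions, groups):
        """Gap between the highest and lowest positive-decision rate per group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for decision, group in zip(decisions, groups):
            totals[group] += 1
            positives[group] += int(decision == 1)
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical decision log: 1 = favourable outcome, groups A and B.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    groups    = ["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"]

    gap, rates = demographic_parity_difference(decisions, groups)
    THRESHOLD = 0.10  # illustrative governance tolerance, set by policy
    print(f"Favourable-outcome rates by group: {rates}")
    print(f"Demographic parity difference: {gap:.2f} "
          f"({'within' if gap <= THRESHOLD else 'exceeds'} the policy tolerance)")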

 

Domain 4: Oversight

Description: The governing body shall oversee the artificial intelligence system's performance (concerning the people, processes, technology and data) to ensure that it meets the governing body's intentions and expectations of the artificial intelligence system, its ethical behaviour, and its compliance obligations, so that the artificial intelligence system purpose and strategic outcomes are achieved in the intended and required manner.

Once an artificial intelligence system is deployed, the governing body shall ensure there is monitoring of the artificial intelligence system's internal controls, assurance and transparency processes, and of the behaviour and responses of the system, and that the behaviour is fine-tuned to produce accurate responses.
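
As a minimal illustration of such monitoring, the sketch below (Python) implements a rolling internal control that compares recent response accuracy against a governance-approved threshold and flags a breach for corrective action. The threshold, window size and escalation step are illustrative assumptions, not requirements from any cited standard.

    # Minimal sketch: a rolling accuracy control for a deployed AI system.
    from collections import deque

    class PerformanceControl:
        def __init__(self, threshold=0.90, window=100):
            self.threshold = threshold            # minimum acceptable accuracy agreed by the governing body
            self.outcomes = deque(maxlen=window)  # rolling record of correct / incorrect responses

        def record(self, prediction, actual):
            self.outcomes.append(prediction == actual)

        def check(self):
            """Return the rolling accuracy and flag a control breach when it falls below the threshold."""
            if not self.outcomes:
                return None
            accuracy = sum(self.outcomes) / len(self.outcomes)
            if accuracy < self.threshold:
                # In practice this would raise an incident for review and corrective action.
                print(f"ALERT: rolling accuracy {accuracy:.2%} is below the {self.threshold:.0%} threshold")
            return accuracy

    control = PerformanceControl(threshold=0.90, window=50)
    control.record(prediction="approve", actual="refer")   # an inaccurate response
    control.record(prediction="approve", actual="approve")
    control.check()  # 50% rolling accuracy -> prints an alert for escalation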

Domain objectives:

  • Oversee whether the organisational values and governance policies are effectively guiding the development and use of artificial intelligence systems, associated culture and ethical behaviour.
  • Require those to whom the governing body has delegated to provide timely and accurate reports on all material aspects of the management of the artificial intelligence system.
  • Ensure that an internal control system is implemented for the artificial intelligence system, including a risk management system, a compliance management system and a system of financial controls.
  • Take corrective action.
  • Assure itself (the governing body) of the accuracy of reports and evidence it receives, and the effectiveness of the internal control system.

Key Tasks:

  • Define the desirable behaviour for artificial intelligence systems.
  • Align with stakeholder expectations.
  • Promote an ethical culture and responsible use of artificial intelligence systems.
  • Prevent prohibited artificial intelligence system practices.
  • Ensure delegated responsibilities for artificial intelligence systems are performed as required.
  • Ensure artificial intelligence system-related decisions are made within the delegated authority.
  • Ensure artificial intelligence systems conform to principles of good governance.
  • Establish an adequate and effective system of internal control.
  • Institutionalise commitments, expectations and guidance.
  • Enable human agency and autonomy.
  • Manage artificial intelligence system-related resources.
  • Identify and develop the required artificial intelligence system personnel competencies.
  • Develop, implement and maintain efficient, effective and sustainable artificial intelligence systems.
  • Ensure artificial intelligence system data integrity and availability.
  • Validate outsourced service provider conformance with obligations.
  • Ensure artificial intelligence systems resilience.

 

Domain 5: Accountability

Description: The governing body shall demonstrate its accountability to the stakeholders for the artificial intelligence system's lawfulness, trustworthiness, fairness, integrity, effectiveness, efficiency, resilience, explainability and acceptable use, maintain technical documentation (Article 11, EU AI Act), keep records (Article 12, EU AI Act) and hold to account the operators to whom it has delegated. It shall establish an accountability framework for developers, providers and deployers, and ensure all regulatory compliance and data breach reporting obligations are fulfilled.
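
As a minimal illustration of maintaining technical documentation as an auditable record, the sketch below (Python) captures a small, illustrative subset of the kinds of information Annex IV of the EU AI Act asks for. The field names, example values and serialisation format are assumptions, not an authoritative template.

    # Minimal sketch: a structured, serialisable technical documentation record.
    from dataclasses import dataclass, field, asdict
    from datetime import date
    import json

    @dataclass
    class TechnicalDocumentation:
        system_name: str
        intended_purpose: str
        provider: str
        version: str
        training_data_description: str
        risk_management_summary: str
        human_oversight_measures: str
        last_reviewed: date = field(default_factory=date.today)

        def to_record(self) -> str:
            """Serialise the documentation so it can be retained as an auditable record."""
            record = asdict(self)
            record["last_reviewed"] = self.last_reviewed.isoformat()
            return json.dumps(record, indent=2)

    # Hypothetical example for illustration only.
    doc = TechnicalDocumentation(
        system_name="credit-scoring-assistant",
        intended_purpose="Support, not replace, human credit decisions",
        provider="Example Provider Ltd",
        version="1.4.0",
        training_data_description="Anonymised historical applications, 2018-2023",
        risk_management_summary="Risks logged and treated in the organisational risk register",
        human_oversight_measures="All adverse decisions reviewed by a credit officer",
    )
    print(doc.to_record())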

Domain Objectives:

  • Demonstrate accountability.
  • Prepare technical documentation.
  • Keep records.
  • Hold operators to account.

Typical Accountability Mechanisms:

  • Clarify the responsibilities and accountabilities.
  • Implement accountability mechanisms for increasing the trustworthiness of artificial intelligence systems.
  • Assign responsibility and delegate authority in the development, deployment, and use of artificial intelligence systems.
  • Approve the decision-making process for the development, deployment, and use of artificial intelligence systems.

 

Domain 6: Stakeholder Engagement

Description: The governing body shall establish a process of stakeholder engagement that the organisation can use to understand the stakeholders' expectations and requirements for value creation and acceptable use of artificial intelligence systems, so that these can be considered and incorporated into its organisational policies and practices.

Domain objectives:

  • Identify all relevant artificial intelligence system stakeholders within and outside the enterprise.
  • Establish and maintain positive relationships with the artificial intelligence system stakeholders.
  • Comply with the legal requirements for reporting and communicating the artificial intelligence system use to different stakeholders.
  • Monitor whether the requirements of different artificial intelligence system stakeholders are met.

Typical Activities:

  • Maintain a process for artificial intelligence system stakeholder engagement.
  • Establish clear criteria to determine the relevance of artificial intelligence system stakeholder expectations.
  • Assess and prioritise the expectations and needs of the artificial intelligence system stakeholders.
  • Maintain channels and formats for communication with external and internal artificial intelligence system stakeholders.
  • Implement mechanisms to ensure that information meets all criteria for mandatory artificial intelligence system reporting requirements.
  • Incorporate stakeholder feedback into the organisational policies and practices for artificial intelligence system use.
 

Domain 7: Leadership

Description: The governing body shall direct the artificial intelligence system ethically and effectively, demonstrate commitment to the internal control system and assurance processes, behave in a manner consistent with the defined organisational values, follow the expectations as set, and ensure such leadership throughout the artificial intelligence system's life cycle.

Domain objectives:

  • Properly function as a governing body.
  • Promote an ethical culture and responsible use of artificial intelligence systems.
  • Demonstrate ethical and effective leadership.
  • Deliver the performance expected, or make the changes necessary to deliver the expected performance.
  • Conform to principles of good corporate governance.

Key Tasks:

  • Provide individuals with a collective sense of belonging.
  • Assist in reconciling strategic dilemmas.
  • Contribute to the prevention of misconduct.
  • Act in good faith and in the best interest of the organisation.
  • Disclose actual, potential or perceived conflicts of interest.
  • Act ethically and in a compliant manner.
  • Set the tone at the top for the organisation.
  • Recognise failures and mistakes and take appropriate action.
  • Take steps to become appropriately informed of all aspects of the organisation.
  • Act with due care, skill, diligence and loyalty.
  • Be open about decisions and activities that affect the natural environment, society and the economy, and be willing to communicate these in a clear, accurate, timely, honest and complete manner.
  • Ensure that diversity and inclusion are understood and incorporated into all organisational decision-making.
  • Balance short-term imperatives with long-term resilience.
  • Resolve competing stakeholder priorities.

 

Domain 8: Data and Decisions

Description: The governing body shall recognise that data are a valuable resource for artificial intelligence systems, and that different classes of data and types of processing bring different levels of risk, which the governing body should understand so that it can direct management on how to manage them. It shall ensure that the risks posed by a potential lack of accuracy are mitigated by data governance and management practices (Article 10, EU AI Act).

When personal data is processed and used for decision-making by artificial intelligence systems, it must be lawful, ethical, responsible, effective and accessible. The governing body shall limit an artificial intelligence system's collection, sharing, aggregation, retention and further processing of personal data only to what is necessary to fulfil the legitimate identified purpose(s), and ensure personal data is not collected and processed indiscriminately.
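
A minimal sketch of how purpose limitation and data minimisation could be enforced before personal data reaches an artificial intelligence system is shown below (Python). The purpose register, field names and example record are hypothetical.

    # Minimal sketch: only the fields registered for the declared purpose are passed on.
    ALLOWED_FIELDS_BY_PURPOSE = {
        "credit_scoring": {"income", "outstanding_debt", "repayment_history"},
        "service_improvement": {"product_used", "satisfaction_score"},
    }

    def minimise(record: dict, purpose: str) -> dict:
        """Return only the personal data fields necessary for the declared, legitimate purpose."""
        allowed = ALLOWED_FIELDS_BY_PURPOSE.get(purpose)
        if allowed is None:
            raise ValueError(f"No registered purpose: '{purpose}'")
        return {key: value for key, value in record.items() if key in allowed}

    applicant = {
        "name": "A. Example",
        "income": 42000,
        "outstanding_debt": 3500,
        "repayment_history": "no defaults",
        "postcode": "XX1 2YY",   # not needed for this purpose, so never passed on
    }
    print(minimise(applicant, "credit_scoring"))
    # -> {'income': 42000, 'outstanding_debt': 3500, 'repayment_history': 'no defaults'}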

Domain Objectives:

  • Ensure effective decision-making.
  • Recognise data as a strategic resource.
  • Ensure responsible data use.

Key Tasks:

  • Promote data governance:
    • Design choices.
    • Data collection controls.
    • Data preparation controls.
    • Assurance against bias (see the sketch after this list).
    • Data availability, quality, quantity and suitability.
    • Anonymised training data, accessible open source or proprietary training data.
    • Protect intellectual property.
    • Prevent safety and environmental harms.
  • Ensure data protection:
    • Lawful, fair and transparent processing, including notification.
    • Purpose limitation.
    • Data minimisation.
    • Data, processing and algorithm accuracy.
    • Prevent data breaches, inaccuracies, bias, unfairness, disinformation, manipulation, infringement, surveillance and other harms.
    • Assess the conformity of data processing with legal and ethical standards.
    • Manage records and dispose of data securely.
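
As referenced under "Assurance against bias" above, the sketch below (Python) checks whether any group defined by a protected attribute is under-represented in the training data. The attribute, the minimum-share tolerance and the example records are illustrative assumptions.

    # Minimal sketch: training data representation check for a protected attribute.
    from collections import Counter

    def representation_report(records, attribute, minimum_share=0.10):
        """Report each group's share of the training data and flag under-representation."""
        counts = Counter(record[attribute] for record in records)
        total = sum(counts.values())
        return {
            group: (count / total, "under-represented" if count / total < minimum_share else "ok")
            for group, count in counts.items()
        }

    # Hypothetical training records summarised by an age-band attribute.
    training_records = (
        [{"age_band": "18-30"}] * 5 + [{"age_band": "31-50"}] * 5 + [{"age_band": "51+"}] * 1
    )
    for group, (share, status) in representation_report(training_records, "age_band").items():
        print(f"{group}: {share:.0%} ({status})")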

 

Domain 9: Governance of Risk

Description: The governing body shall consider the effect of uncertainty on the artificial intelligence system's purpose and associated strategic outcomes. When determining the artificial intelligence system strategy, the governing body shall determine the nature and extent of the risks that the organisation is prepared to take in achieving its goals, how to oversee the appropriate management of those risks and any mitigation necessary to ensure that management does not exceed the risk appetite, and how to ensure that the enterprise's risk appetite and tolerance are understood, articulated and communicated using a risk management system (Article 9, EU AI Act).
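
As a minimal illustration of setting and measuring risk appetite, the sketch below (Python) scores identified artificial intelligence risks on a likelihood-times-impact scale and compares them with a tolerance set by the governing body. The scales, threshold and example risks are illustrative assumptions, not figures drawn from Article 9 of the EU AI Act.

    # Minimal sketch: scoring AI risks against a stated risk appetite.
    RISK_APPETITE = 9  # maximum acceptable score; anything above requires treatment or escalation

    risks = [
        {"risk": "Discriminatory credit decisions", "likelihood": 3, "impact": 5},
        {"risk": "Model performance drift", "likelihood": 4, "impact": 2},
        {"risk": "Training data breach", "likelihood": 2, "impact": 5},
    ]

    for item in risks:
        score = item["likelihood"] * item["impact"]   # both on an illustrative 1-5 scale
        status = "within appetite" if score <= RISK_APPETITE else "EXCEEDS appetite - escalate"
        print(f"{item['risk']}: score {score} ({status})")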

Domain objectives:

  • Set the tone for managing artificial intelligence system risk.
  • Practice effective artificial intelligence system risk management.
  • Oversee artificial intelligence system risk management.

Key Tasks:

  • Establish an organisational risk management framework.
  • Risk appetite: setting, measuring and communicating.

 

Domain 10: Social Responsibility

Description: The governing body shall ensure that artificial intelligence system-related decisions and activities are transparent and aligned with broader societal expectations. The artificial intelligence system must perform in a socially responsible way by operating within the parameters of acceptable behaviour, and must not allow actions that, although legally or locally permissible, are not in line with what its broader stakeholders and society expect of it concerning human rights, inclusion and diversity, unfair bias, the natural environment and democracy.

Domain objectives:

  • Ensure socially responsible purposes for using artificial intelligence systems are defined.
  • Implement strategies and techniques to avoid unfair bias, discrimination and exclusion in artificial intelligence system use.
  • Apply user-centric and human-rights-based approaches to artificial intelligence system design, development and deployment.
  • Monitor the impact of artificial intelligence systems on work and democracy.
  • Adopt practices to limit a negative impact on the environment.

Key Topics:

  • Ensure that the expectations of stakeholders are clearly understood; this includes continually engaging relevant stakeholders through an engagement process and a highly developed approach to accountability.
  • Ensure that issues and opportunities affecting stakeholder expectations are identified and articulated.
  • Ensure that the artificial intelligence system purpose expresses the organisation’s approach to stakeholders.
  • Engage with all relevant stakeholders when determining and reviewing the organisational values and promote the organisational values to stakeholders.
  • Engage with all relevant stakeholders when establishing and reviewing governance policies.
  • Ensure that the organisation is acting responsibly, because laws often lag behind social expectations and usually set only minimum acceptable standards.
  • Steer the artificial intelligence system such that its decision-making and activities are consistent with the artificial intelligence system purpose, organisational values and governance policies, including considering how stakeholders can report a breach in behaviour (e.g. via whistleblowing).
  • Measure performance against objectives related to socially responsible behaviour.
  • Report the organisation’s social responsibility objectives clearly and transparently so that stakeholders can understand these objectives, how they are being met by the artificial intelligence system, what performance is being achieved against them, and provide the necessary evidence to support such claims.

 

Domain 11: AI Viability and Performance Over Time

Description: The governing body shall ensure that the artificial intelligence system remains viable (with respect to the broader social, economic and environmental goals) and performs as expected over time, without compromising the ability to meet the needs of current and future stakeholders, and that the organisation keeps the artificial intelligence system protected and restorable. It shall identify the key value-generating artificial intelligence systems and require monitoring of their value generation, the inter-relationships between systems, their use over time, performance, ethical behaviour, and compliance with obligations.
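
A minimal sketch of monitoring performance and viability over time is shown below (Python): it compares the distribution of a model score between a baseline period and the current period using the population stability index (PSI), a common drift indicator. The bucketing, example data and the 0.2 alert level are illustrative assumptions.

    # Minimal sketch: population stability index between a baseline and current period.
    import math

    def psi(baseline, current, buckets=5):
        lo, hi = min(baseline), max(baseline)

        def shares(values):
            counts = [0] * buckets
            for v in values:
                idx = min(int((v - lo) / (hi - lo) * buckets), buckets - 1) if hi > lo else 0
                counts[idx] += 1
            return [(count or 0.5) / len(values) for count in counts]  # small floor avoids log(0)

        return sum(
            (c - b) * math.log(c / b) for b, c in zip(shares(baseline), shares(current))
        )

    baseline_scores = [0.2, 0.3, 0.35, 0.4, 0.5, 0.55, 0.6, 0.65, 0.7, 0.8]
    current_scores = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
    value = psi(baseline_scores, current_scores)
    print(f"PSI = {value:.2f} ({'stable' if value < 0.2 else 'significant drift - review the system'})")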

Domain objectives:

  • Define the social, economic and environmental goals.
  • Specify the expected performance.
  • Articulate an integrated view of value generation.
  • Assess the artificial intelligence system's internal and external system relationships.
  • Govern the artificial intelligence system viability (protect and restore the value generation model) over time.

Key Tasks:

  • Identify key value-generation artificial intelligence systems and resources.
  • Identify artificial intelligence system opportunities and align changes in artificial intelligence systems with changes in business strategic needs.
  • Set, measure, and communicate performance intentions and expectations.
  • Judiciously manage artificial intelligence systems risks, including those that can impact the natural environmental, social and economic systems.
  • Monitor achievement of artificial intelligence system objectives and value creation.
  • Identify and act on opportunities for improving the performance of the artificial intelligence systems.
  • Monitor long-term viability (natural environmental impact).
  • Review resource interrelation, utilisation, protection, and restoration.
  • Ensure the organisation protects and, when necessary, restores the key resources and systems that it depends on.
  • Maintain sustainable performance (operational resilience).
  • Receive periodic independent assurance on the effectiveness of the organisation’s and outsourced service providers’ artificial intelligence system arrangements.
 