CAN/CIOSC 101:2019 (REAFFIRMED 2021-10)
Ethical design and use of automated decision systems
This is the First Edition of CAN/CIOSC 101:2019, Ethical design and use of automated decision systems.
This Standard specifies minimum requirements for protecting human values and incorporating ethics in
the design and use of automated decision systems.
This Standard is limited to artificial intelligence (AI) using machine learning for automated decisions.
This Standard applies to all organizations, including public and private companies, government entities,
and not-for-profit organizations. It provides a framework and process to help organizations address AI
ethics principles, such as those described by the OECD:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights,
democratic values and diversity, and they should include appropriate safeguards – for
example, enabling human intervention where necessary – to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that
people understand AI-based outcomes and can challenge them.
- AI systems must function in a robust, secure and safe way throughout their life cycles, and
potential risks should be continually assessed and managed.
- Organisations and individuals developing, deploying or operating AI systems should be held
accountable for their proper functioning in line with the above principles.
Requirements in this Standard are principles-based and recognize that an organization’s governing practices
may depend on its size; ownership structure; nature, scope and complexity of operations; strategy; and
risk profile. Organizations are expected to take reasonable and responsible measures to adopt and
implement the principles in this Standard.
This Standard is intended to be used in conjunction with, and integrated into, the organization’s compliance
programs, including but not limited to existing privacy, cybersecurity, data governance, complaints and
appeals, and legal programs.
NOTICE: The following interpretation has been approved by Technical Committee 2: Ethical design and use of automated decision systems.
Interpretation to clarify the meaning of the term “non-locked algorithm”.