The national standard on ethical AI provides vital values-based expectations for organizations that are using algorithms or other technologies to automate decision making. CAN/CIOSC 101 lays out minimum requirements for protecting human values and incorporating ethics in the design and use of these systems.
The Council’s technical committee on AI ethics recently conducted a review of the national standard on automated decision systems, CAN/CIOSC 101, which was first published in 2019.
The committee confirmed that the standard remains valid without any technical change and continues to conform to the accredited standards development requirements.
Additionally, in response to a stakeholder request for interpretation, the committee has issued an interpretation clarifying the meaning of the term “non-locked algorithm” in the text.
This standard is subject to technical committee review beginning no later than one year from the date of publication. Any stakeholder may submit comments on a published national standard to the responsible technical committee at any time.
CAN/CIOSC 101 and its interpretation are available for download from the Standards section of our website.
Are you implementing the national standard?
In collaboration with KPMG, the CIO Strategy Council has developed an assurance program for CAN/CIOSC 101. The program enables organizations to obtain independent assurance on whether they meet the criteria in the standard, with reporting under the Canadian Standard on Assurance Engagements (CSAE 3000). The program is open to all organizations. For further information, contact Tim Bouma, Director of Verification and Assessments, email@example.com.