Lowering the “Creep” Factor: Canada’s New Directive on Automated Decision-Making
Posted in Privacy, Technology

Organizations are implementing Artificial Intelligence (AI) in numerous ways, directly and indirectly, to save time and money. 

How can an organization ensure that an AI platform or service fairly and adequately meets its needs, without introducing undue privacy intrusion, bias or uncertainty?

A new Canadian method for managing these issues is articulated in the federal Directive on Automated Decision-Making, which requires an Algorithmic Impact Assessment (AIA). The AIA has similarities to the Privacy Impact Assessments (PIAs) privacy professionals have been conducting for years.

The Directive is meant to guide the federal government’s use of AI in an ethical, responsible and transparent manner, and to assist in determining whether a particular AI system can appropriately contribute to government administrative decisions.

The Directive addresses “Automated Decision Systems”, which include any information technology designed to either assist or replace the judgment of a human decision-maker. The Directive requires that an AIA be completed before any Automated Decision System used in federal administration goes into production. The AIA itself is an interactive questionnaire that helps organizations understand and mitigate risk, including determining what kind of human intervention and monitoring an AI tool will require.
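To illustrate the mechanics, the sketch below shows, in Python, how a questionnaire of this kind might translate answers into a risk tier that drives oversight requirements. It is a minimal, hypothetical model: the questions, point weights and thresholds are invented for illustration and do not reproduce the official AIA’s scoring.

```python
# Hypothetical sketch of an AIA-style questionnaire. The real AIA's
# questions, weights, and impact-level thresholds differ; nothing here
# reproduces the official tool.
from dataclasses import dataclass

@dataclass
class Answer:
    question: str           # e.g. "Does the system use personal information?"
    risk_points: int        # points added to the raw risk score
    mitigation_points: int  # points credited for mitigations in place

def impact_level(answers: list[Answer]) -> int:
    """Map a raw risk score, offset by mitigation credit, to a tier of 1-4
    (hypothetical bands; the official AIA defines its own impact levels)."""
    raw = sum(a.risk_points for a in answers)
    mitigated = max(0, raw - sum(a.mitigation_points for a in answers) // 2)
    for ceiling, level in [(10, 1), (25, 2), (40, 3)]:
        if mitigated <= ceiling:
            return level
    return 4

answers = [
    Answer("Uses personal information?", 8, 0),
    Answer("Decision fully automated (no human in the loop)?", 12, 0),
    Answer("Peer review and ongoing monitoring in place?", 0, 6),
]
print(impact_level(answers))  # -> 2 under these invented weights
```

The design idea is the same one the Directive relies on: a higher impact level triggers stronger safeguards, such as peer review, notice and human intervention in the decision.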

While the Directive applies only to federal government institutions and the AI vendors that serve them, other governmental and private organizations can look to this model as a standard for transparency in AI procurement. We anticipate the AIA will be adopted by other governments, establishing a common Canadian standard.

An AIA process will enable organizations to better establish trust with their clients and workforce, and will help organizations explain and defend their decisions if necessary. It is not always clear how an algorithm renders a decision, and where a decision is based on historical information or “big data”, it may not be clear whether that data is biased. These uncertainties raise privacy and human rights issues, which in turn create risks for the organizations using the AI. Like a PIA, the AIA will help identify the level of risk and what sort of mitigation is appropriate.

While few are asking what is behind AI today, those questions are coming.

  • Ryan Berger
    Partner

    Ryan Berger is a leading privacy and employment lawyer, with a primary focus on providing strategic advice to businesses and employers.


