

This past year (2018) has seen a significant change in our collective awareness of harassment and sexual aggression, primarily in the workplace. On an almost daily basis, we are learning of yet another sexual aggressor and the swift and severe repercussions of the alleged misconduct. Social media has been flooded with thousands of personal accounts of harassment and sexual abuse of one form or another, showing just how commonplace these incidents are in our society. There has been what has been described as a “seismic shift” in what behaviour is tolerated in the workplace. The response to these allegations has also shaped the way people view victims who come forward. One commentator has opined that, for perhaps the first time in history, powerful aggressors are falling “like dominoes” and victims are being believed. The #MeToo movement has been described as the fastest-moving social change seen in decades, and its founders and other “silence breakers” were named Time Magazine’s 2017 Person of the Year.

This movement has started an important discourse, and society is listening and acknowledging the harm caused to vulnerable persons by sexual aggression, whether in its physical form or on an online platform in the form of cyberbullying and the non-consensual dissemination of intimate images (“revenge porn”). With this new wave of support, one can anticipate that more victims will have the courage to come forward and that our civil justice system will adapt to this new, rapidly-changing legal landscape.

This paper provides an overview and update on civil claims for sexual misconduct and revenge porn.



On June 15, 2022, the Government of Canada introduced Bill C-27, which, among other things, proposes the first legislation to regulate artificial intelligence (AI) systems in Canada.  If enacted, this legislation will make very severe penalties available for non-compliance.

The successor to Bill C-11, Bill C-27 reintroduces the Consumer Privacy Protection Act (CPPA) and the Personal Information and Data Protection Tribunal Act (PIDPTA) in modified form.  Bill C-27 goes further by also proposing a new statute, the Artificial Intelligence and Data Act (AIDA), to regulate the development and use of AI systems.

AIDA will apply throughout Canada, excluding federal government institutions as defined in the Privacy Act, R.S.C. 1985, c. P-21.  Additional federal and provincial government departments and agencies may be excluded by regulation.

Under AIDA, an AI system is any technological system that, autonomously or partly autonomously, processes data related to human activities in order to generate content or make decisions, recommendations or predictions.  The meaning of “autonomously or partly autonomously”, which is not defined in AIDA, will be crucial when determining whether a system is an “AI system”.

AIDA will require any person who designs or develops an AI system, makes an AI system available for use, or manages the operation of an AI system to determine if it is a “high-impact system”.  AIDA will define “high-impact systems” in forthcoming regulations. 

If the AI system is a high-impact system, the person will be required to establish measures:

  • to identify, assess and mitigate the risks of harm or biased output (as defined in AIDA) that could result from the use of the AI system, and
  • to monitor compliance with such measures and their effectiveness.

The person will also be required to notify the designated Minister, as soon as feasible, if the use of the high-impact system results in, or is likely to result in, material harm.

Under AIDA, each person who makes a high-impact system available for use or who manages the operation of a high-impact system will be required to publish on a publicly available website a plain-language description of the high-impact system, including:

  • an explanation of how the system is used or intended to be used,
  • the types of content that it generates or is intended to generate,
  • the decisions, recommendations or predictions that it makes or is intended to make,
  • the mitigation measures established to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system, and
  • any other information that may be prescribed by regulation.

AIDA also applies to the following regulated activity if it is carried out in the course of international or interprovincial trade and commerce:

  • processing, or making available for use, any data relating to human activities for the purpose of designing, developing or using an AI system; or
  • designing, developing or making available for use an AI system or managing its operations.

Under AIDA, anyone who carries out a regulated activity and who processes anonymized data, or makes anonymized data available for use, in the course of that activity will be required to establish measures with respect to how the data is anonymized and the use or management of the anonymized data.

Each person who carries out a regulated activity will be required to keep records describing, in general terms, the measures they have taken as required by AIDA, including measures they have taken with respect to a high-impact system and the reasons supporting their assessment as to whether their AI system is a high-impact system.

It is important to note that a person is not to be found guilty of an offence for violating the requirements outlined above if they establish they exercised due diligence to prevent the offence.

AIDA will give the applicable Minister powers to obtain copies of records required to be maintained under AIDA, to conduct audits with respect to possible contraventions of AIDA, and to make certain rectifying orders. 

In addition to a breach of the requirements outlined above, AIDA provides that it is an offence to possess or use personal information for the purpose of designing, developing, using or making available for use an AI system, while knowing or believing that the information is obtained or derived, directly or indirectly, as a result of:

  • the commission in Canada of an offence under federal or provincial law; or
  • an act or omission anywhere that, if it had occurred in Canada, would have constituted such an offence.

Further, every person will be considered to commit an offence under AIDA if the person:

  • without lawful excuse and knowing that or being reckless as to whether the use of an artificial intelligence system is likely to cause serious physical or psychological harm to an individual or substantial damage to an individual’s property, makes the artificial intelligence system available for use and the use of the system causes such harm or damage; or
  • with intent to defraud the public and to cause substantial economic loss to an individual, makes an artificial intelligence system available for use and its use causes that loss.

Organizations that violate AIDA’s statutory requirements may face a fine of up to the greater of $25,000,000 and 5% of the organization’s gross global revenues in its immediately preceding financial year, depending on the type of violation.  Individuals who commit such an offence may face a fine in the discretion of the court, imprisonment of up to five years less a day, or both, depending on the violation.

Aside from Canadian federal government institutions, anyone who designs, develops, makes available, manages or operates a technological system in Canada that, autonomously or partly autonomously, processes data related to human activities in order to generate content or make decisions, recommendations or predictions should pay attention to this proposed law and start planning to comply with it.




On June 14, 2022, Bill C-26 was introduced into the House of Commons of Canada. Bill C-26 proposes to enact the Critical Cyber Systems Protection Act (CCSPA), among other things, which would make certain federally-regulated private-sector organizations subject to new legal requirements regarding their cyber infrastructure.

The CCSPA’s purpose is to “protect critical cyber systems in order to support the continuity and security of vital services and vital systems”. If enacted, it will apply to designated operators that own, control or operate critical cyber systems.

A cyber system is considered “critical” where a compromise of the cyber system’s confidentiality, integrity or availability could affect the continuity or security of a vital service or a vital system.

“Vital services and vital systems” are listed in Schedule 1 of the CCSPA, which currently include:

• nuclear energy systems;
• interprovincial or international pipeline or power line systems;
• telecommunications systems;
• federally-regulated transportation systems;
• banking systems; and
• clearing and settlement systems.

“Designated operators” will be listed in Schedule 2 of the CCSPA, by order of the Governor in Council. Designated operators will be required to establish and maintain a cybersecurity plan that sets out the designated operator’s reasonable steps to do the following:

• identify and manage risks to its critical cyber system, including risks associated with the designated operator’s supply chain and its use of third-party products and services;
• protect its critical cyber systems from being compromised;
• detect any cybersecurity incidents affecting, or having the potential to affect, its critical cyber systems;
• minimize the impact of cybersecurity incidents affecting its critical cyber systems; and
• do anything else that is prescribed by the regulations.

In addition, designated operators will be required to:

• conduct cybersecurity program annual reviews;
• mitigate cybersecurity threats arising from the supply chain or from third party products or services;
• share their cybersecurity programs with the appropriate regulators;
• report cybersecurity incidents to the Communications Security Establishment;
• comply with cybersecurity directions from the Governor in Council; and
• maintain related records.

The CCSPA will apply regardless of whether personal information is involved. As a result, the CCSPA will impose cybersecurity requirements on subject organizations separately from the requirements of Canadian private sector privacy law, in a manner that is similar to the Office of the Superintendent of Financial Institutions’ cybersecurity guidelines that currently apply to Canada’s federally regulated financial institutions.

The CCSPA contains significant enforcement provisions. Regulators of designated operators will be given investigatory, auditing and order-making powers, including the ability to enter into compliance agreements. They will also be empowered to issue administrative monetary penalties of up to $1,000,000 per day for individuals and up to $15,000,000 per day for organizations. In addition, the Federal Court will be given jurisdiction to issue fines against, or order the imprisonment of, designated operators and their directors and officers.

