Some Quick Thoughts on the Privacy Commissioners' Joint Report of Findings on OpenAI
Almost three years ago (!) the privacy commissioners of Canada, British Columbia, Alberta and Quebec launched a joint investigation into OpenAI. The lengthy report of findings was released today (May 6, 2026). The investigation covered a range of issues relating to whether the collection, use and disclosure of personal information by OpenAI was for a purpose that a reasonable person would consider appropriate in the circumstances and whether it was sufficiently limited; whether consent requirements were met with respect to the collection of personal information for model training and fine-tuning; whether there was adequate transparency; whether personal information generated by the system was as accurate, complete and up-to-date as necessary; whether individuals had an opportunity to access and correct personal information; whether there were appropriate retention and disposal procedures in place; and whether OpenAI was accountable for the personal information it controlled.
The report is lengthy and detailed. The goal of this post is not to walk you through the different findings and the reasons behind them. Rather, I want to highlight some bigger picture issues that I see arising from this report. Please keep in mind that I am writing in rapid response to the decision. These are initial reactions.
1) Jurisdictional issues continue to be raised. Big US-based companies like OpenAI routinely challenge the jurisdiction of the federal and provincial privacy commissioners to investigate privacy complaints that involve them. We saw it with Clearview AI (and continue to see it as we make our way through judicial reviews and appeals in three provinces). We have seen it with Facebook. In this case, OpenAI argued (among other things) that at the time of the facts giving rise to the complaint, it had no physical presence in Canada and was therefore not subject to Canadian law.
The commissioners have resisted these jurisdictional challenges in the past, and they have received support from the courts (see, e.g., the BC Court of Appeal decision and the Alberta Court of King’s Bench decision in Clearview AI). In this case, the commissioners find a real and substantial connection to their respective jurisdictions because OpenAI offered both free and paid services in Canada; it considered its Terms of Service to be applicable to users in Canada; it made its US internet-based services available to Canadians; it transmitted and received personal information between the US and Canada; and it collected, used and disclosed the personal information of Canadians, or derived from Canadian sources, in the development and use of its models. The finding of jurisdiction is not a surprise. It is, however, disappointing that the applicability of Canadian privacy laws is so regularly called into question.
2) AI and information-based services challenge existing legislation. Information-based services that draw on personal information highlight the intersection of privacy and freedom of expression issues. Only recently, the Alberta Court of King’s Bench in Clearview AI struck down part of the exception for publicly available information in Alberta’s privacy regulations on the basis that it was inconsistent with the freedom of expression. Search engines and generative AI platforms, for example, seek to organize, make findable, or even generate new information for users in novel ways, making the balance between freedom of expression and privacy both delicate and consequential. In this case, the Commissioners are clearly sensitive to the interplay between the freedom of expression and privacy law, and they address this balance at several points in their findings.
3) Consent is a hot mess. What this set of findings shows is just how inadequate the concept of consent is on its own. The OPC finds that OpenAI did not get consent for its initial collection of personal information. That is a pretty clear breach of privacy law. However, the OPC also found that with mitigation measures in place, OpenAI could ultimately rely upon “implied consent”. Implied consent can be a convenient fiction, but in this case, it frankly moves closer to fantasy. My critique here is more of the law than it is of the rather pragmatic interpretation by the OPC, although none of it is that pretty. In my view, the OPC tries to make implied consent do the work of “legitimate interests” in the absence of any reform of the law. The result is necessarily artificial. In the last round of attempted privacy law reform (Bill C-27), the federal government introduced a “legitimate interests” exception to consent to try to get around the concern of many organizations that “consent” posed a barrier to innovative uses of personal information. What the private sector had hoped for was something more like the GDPR’s “legitimate interests” basis for data processing – an alternative to consent rather than an exception. In the GDPR, this alternative basis for processing comes with clear and explicit guardrails to protect individuals. Bill C-27 fell short of the GDPR concept of “legitimate interests”. Rather than create an alternative to consent (with its own limitations), it chose to offer up “legitimate interests” as an exception to consent, framed around the expectations of the data subject. The OpenAI findings show that Bill C-27’s legitimate interests exception would still have been awkward to apply. They also show that not modernizing one’s privacy laws makes life difficult for both organizations and regulators.
4) Harmonization takes a hit. The joint investigation approach adopted by the four commissioners who have private sector jurisdiction has been used in multiple cases. It has been a lesson in collaboration and has aimed to develop common interpretations and approaches across the country in ways that are beneficial to organizations that may find themselves subject to multiple different laws. It has been working quite well, although Clearview AI’s multiple judicial reviews have revealed some challenges. In the OpenAI case, the wheels start to wobble on the joint investigation bus. The federal, BC and Alberta laws are the most similar, but they are still differently worded and so are open to different interpretations, in spite of being considered “substantially similar”. The differences are on full display in the OpenAI findings. There is no “implied consent” under the BC or Alberta laws, and OpenAI’s practices and mitigation measures do not meet the thresholds in those laws for “implicit” or “deemed” consent. Thus, a matter that is considered conditionally resolved under PIPEDA is not resolved in BC or Alberta. There are also significant differences in outcomes between the federal and Quebec laws. Overall, the OPC finds the complaint to be conditionally resolved after mitigation measures were put in place by OpenAI, but this is not the case in the other jurisdictions.
Federal reform will drive reform in BC and Alberta and could lead to greater harmonization of these laws (or not). We won’t know until we try. (Could we please try?)
5) Pragmatic privacy. This is the “you can’t always get what you want” approach to privacy law. The federal privacy commissioner describes his approach as providing a “pragmatic and flexible” interpretation of PIPEDA. He finds breaches of PIPEDA in how the company designed and launched its initial ChatGPT services. However, in the report of findings, he emphasizes the importance of OpenAI’s commitments to address the intent of the recommendations made by the commissioners. The focus of the federal commissioner is clearly on resolution and implementation of changes. There is real value in this approach in that it helps to advance privacy protection with cooperation and engagement from industry. Essentially, it is a negotiated solution to a thorny problem. It likely avoids litigation, and, if the solution is less than perfect, it recognizes that perfect might not have been attainable (or that different people would have had different ideas about what is perfect). It is an interesting result. He does it all with a statute that has long passed its best-by date. It’s like cooking with a pantry staple that is past its best-by date because you can’t get to the store for a fresh alternative. You work with what you’ve got to get the job done. As long as nobody spends the night upchucking, it’s all good.
The commissioners of Alberta and BC might have wanted to take a similarly flexible approach, but they could not because their respective statutes are worded differently from PIPEDA, giving them less flexibility (though in PIPEDA, frankly, the flexibility is more like the sag in the knees of old sweatpants). As a result, these commissioners find that OpenAI is still in breach of their laws. They make further recommendations for changes to practices and indicate that they will participate in ongoing monitoring of the measures to which OpenAI has committed.
6) Not so pragmatic privacy. Quebec’s Commission d’accès à l’information (CAI) has found OpenAI to be largely non-compliant with its law. It makes a number of recommendations and indicates that it will take OpenAI’s implementation of those recommendations into account when deciding whether to initiate any further action (which could be a new investigation, further recommendations, or orders). This outcome is interesting, because Quebec’s private sector law is the only one of the laws at issue to have recently been reformed. The fact that this modernized law is the least “friendly” to OpenAI could be interpreted in different ways. On the one hand, you could argue that the Quebec law is more privacy protective in the face of contemporary technologies; on the other, you could argue that it got the balance wrong and is creating unnecessary barriers to emerging technologies and to business models based on those technologies. This will all be important food for thought in the next round of federal private sector data protection law reform (if it ever happens).
7) Time to reform the law. This is a good case study on the need for privacy law reform. It is clear that some existing concepts are dated, awkward and not a good fit for contemporary contexts. We have been waiting a long time for a fix. It needs to come. It is unclear why the federal government continues to delay in reintroducing a privacy law reform bill. BC and Alberta have been patiently waiting for federal action before reforming their own laws so that they can remain reasonably harmonized and “substantially similar”.
Of course, even if the folks at the Ministry of Innovation, Science and Economic Development (ISED) are just polishing off a new draft bill, they will want to read the OpenAI decision carefully – and then re-read their draft to see if what they propose will actually address some of the problems or whether they will make them worse.