
Canadian privacy commissioner launches investigation into ChatGPT

The case responds to a complaint alleging the collection and use of personal information without consent.

The Canadian privacy commissioner has launched an investigation into OpenAI, the company behind the artificial intelligence (AI) chatbot ChatGPT.

The Office of the Privacy Commissioner of Canada (OPC) said on Tuesday that the case was launched in response to a “complaint alleging the collection, use, and disclosure of personal information without consent.” BetaKit has reached out to OpenAI and the OPC for comment.

“The speed at which it’s moving is outpacing our ability to make sense of it, know what risks it poses.”
– Emilia Javorsky, Future of Life Institute

This investigation follows a series of recent moves by the federal government and members of the AI research community to regulate the development and deployment of the technology.

Other countries have also begun to crack down on the mass adoption of ChatGPT. China, which has banned Google, Facebook, Twitter, and other digital platforms in previous years, reportedly blocked access to ChatGPT in February.

More recently, in March of this year, Italy’s privacy regulator ordered a ban on ChatGPT, making allegations similar to the Canadian privacy commissioner’s and claiming that the platform breached Europe’s privacy regulations. Reuters reported that privacy regulators in France and Ireland reached out to counterparts in Italy to learn more about the basis of the ban. It also reported that Germany could follow suit by blocking access to ChatGPT over data security concerns.


In March, a number of leaders in the AI space signed an open letter calling for a six-month pause on advanced AI development, specifically on training systems more powerful than GPT-4. The next iteration of OpenAI’s model, GPT-5, is rumoured to be released by the end of this year.

“The speed at which it’s moving is outpacing our ability to make sense of it, know what risks it poses, and our ability to mitigate those risks,” said Emilia Javorsky, director of multistakeholder engagements at the Future of Life Institute, in relation to the open letter. “Six months gives us the time to create governance around it and to understand it better. It buys us time for those conversations, risk analyses and risk mitigation efforts.”

Last year, the Canadian government tabled Bill C-27, wide-ranging privacy legislation that includes what would be Canada’s first law regulating high-impact AI systems. If passed, the bill would implement a regulatory framework for the design, development, use, and provision of AI systems. In addition to promoting transparency regarding the systems’ training processes, it is also expected to enforce measures to mitigate risks of harm and biased output.

Featured image from Unsplash.


