16 Oct
What are the legal risks of implementing ChatGPT in the workplace?
ChatGPT is an artificial intelligence platform that can generate text on demand. You can input a list of specifications, and it will compose a polished piece of writing that meets your requests. For example, you can ask ChatGPT to write poems, songs, essays, memos, and more, and you can specify details such as topic, writing style, tone, and narrative voice.
This kind of AI is fascinating and can be a fun, easy tool to operate. But the platform's ease of use can also be its biggest problem. The first thing many people think, especially employers and employees, is how much easier it could make their jobs. That thought can be dangerous, because using ChatGPT in the workplace carries many legal implications.
First, the ChatGPT user is liable for ChatGPT's mistakes. ChatGPT draws on vast amounts of data collected from the internet, which means its output can incorporate biases, false information, or even private information that could severely damage a company.
The platform can also infringe copyright. If ChatGPT draws on internet articles to craft an op-ed piece about, for example, electronics in elementary schools, it will pull information from writers, bloggers, journalists, and others who have written on that topic. Because ChatGPT uses these articles to generate its ideas, it is an open question whether doing so breaches the copyright laws that protect those creators.
Copyright law affirms that the original creator of a work is its owner. According to the Canadian government, copyright infringement occurs when a party breaches any of the rights set out in the Copyright Act, for example by reproducing, sharing, or selling copyrighted work. As noted above, the ChatGPT user bears responsibility for ChatGPT's output, so the user would be liable for any copyright infringement it contains.
Further, ChatGPT can breach privacy laws. Some of the data it draws on may be personal or private information, because that information can be collected without the affected party's consent.
A ChatGPT user can agree to the Data Processing Addendum (DPA), which governs how data is processed on the platform and helps the user avoid violating privacy laws. However, the DPA does not apply automatically: each user must accept its terms manually for the agreement to take effect. Using, collecting, or sharing an employee's or customer's information without consent could violate their privacy rights, and the ChatGPT user would be liable for that violation.
If accusations of legal violations involving ChatGPT have affected your workplace, please contact KCY at LAW by filling in an online consultation request, or call us at 905-639-0999 to book your consultation today.