Artificial intelligence (AI), specifically generative AI, continues to be a topic of conversation and debate amongst legal professionals. The advantages that generative AI platforms such as ChatGPT bring to lawyers, such as saving time on administrative tasks, doing quick research, and more, have made AI very popular among immigration lawyers.
But the rise of AI in the legal profession has brought with it concerns, specifically around privacy. Until recently, ChatGPT offered no way for users to opt out of sharing data with the AI, including the use of that data to continue training the model. The way this data is handled raises concerns for lawyers about how safe client information is inside generative AI platforms and whether using them violates ethics rules regarding confidentiality.
In this article, we look at why data privacy in AI is crucial, highlight some AI platforms making a greater effort to safeguard user data, and cover features of generally secure AI platforms you should be aware of.
What makes AI models such as ChatGPT so impressive is their ability to learn with each query. Every time you put a question into a generative AI platform or enter some information, the platform, in a sense, expands its knowledge base. However, as a lawyer, you are responsible for keeping your client’s data safe.
Most websites and online platforms have privacy policies you must agree to in order to use them. These documents are usually long and full of legalese not often fully understood by the general public, which raises the question of whether web users genuinely understand the consent they’re giving regarding their data.
One of the most significant advances in legislation for online privacy comes from the European Union (EU) in the form of the General Data Protection Regulation (GDPR). Not only does this law require websites operating in the EU to provide more detailed opt-in and opt-out data-sharing options, it also addresses decisions made automatically based on personal information that could affect someone’s employment status, credit, or insurance: users have the right to have such a decision reviewed by a person.
However, legislation regarding data privacy online is not universal, and only some of the same protections exist for online platforms across the globe. The discrimination some users may face because of the data companies can access is one of lawmakers’ most pressing concerns in ensuring data privacy laws genuinely protect users. It’s also difficult to predict how much machine learning will advance and what AI will be able to do, making it difficult to legislate for future situations preemptively.
Some AI platforms are already taking steps to put strong data privacy protections in place, including some that are regularly used by the legal industry and that we have discussed in previous articles, such as transcription services.
Let’s look at how some AI platforms create mechanisms designed to keep data safe and confidential, whether or not they are legally required to do so.
As concerns over the privacy and use of data captured through AI tools have grown, developers have taken it upon themselves to add protections, accountability, and user options. Some efforts are more exhaustive than others, but all are worth mentioning:
The explosive popularity of ChatGPT brought many privacy concerns surrounding AI to the surface. In response, OpenAI, to its credit, has made changes to the platform.
It’s important to point out that pressing factors pushed the change: not only were there rising concerns about how ChatGPT handled data and how private it kept it, but the platform had also recently suffered a data breach. Privacy and data handling concerns prompted some countries in the European Union to restrict the use of ChatGPT. Moreover, the Federal Trade Commission had received complaints asking for a pause in the development of AI technologies altogether due to safety concerns.
In addition to its “basic” model changes, OpenAI has also announced a ChatGPT Business version. The expectation is that this subscription-based service will have even tighter controls over how users allow OpenAI to utilize conversational data.
Most of the conversation surrounding data privacy concerns covers text-based data, so there is a bit more gray area regarding data captured by voice and video, which is exactly the data that a platform like Fireflies.ai, an AI-powered recording, transcription, and note-taking app, takes in. Fireflies prioritizes data privacy by providing end-to-end encryption, not just for transcripts but for all data that goes through its platform, including metadata such as calendars, emails, and user settings. That data only exists on the Fireflies servers for one year, and after that, you cannot recover it. Fireflies stores data on Amazon Web Services (AWS), an industry-trusted provider that hosts data for some of the largest companies in the world.
Fireflies gives users control of the recording process from the beginning by letting them choose who gets access to each recording. Fireflies offers several options for every meeting so you can control the access participants and people outside of your organization have to a transcript and whether they may share its contents with others. OpenAI cannot use data from meetings Fireflies transcribes to train its AI, and Fireflies routinely conducts third-party audits of its security processes to ensure confidentiality is protected.
The need for stronger privacy protections within AI platforms used in the legal space has become even more pronounced with the increase in law firm cyberattacks in the past three years: over 750,000 Americans had their personal information compromised due to cyberattacks on law firms. Since lawyers are bound by rules of professional conduct that include ensuring the confidentiality of their clients’ information, lawyers must use online platforms that meet the highest security standards.
With that in mind, CaseText has partnered with OpenAI (yes, the creator of ChatGPT) to create CoCounsel, a generative AI designed explicitly for legal work and made with an additional “governance layer” to ensure security and compliance and monitor the use of the AI and how it uses clients’ personal information. This agreement stipulates that OpenAI cannot use CoCounsel data to train its AI.
Let’s finish with some features lawyers should look for in any AI platform regarding how they safely handle user data.
So, whether you want to use AI to take notes during client meetings, draft template documents for your cases, craft or revise client correspondence, or even just use ChatGPT to create social media content, it’s important to know what each platform will do with the data you give it, and to make an informed decision based on that knowledge.
As a case management tech platform created by and for lawyers, Docketwise understands the importance of protecting client data and keeping lawyers in compliance with ethics rules, so data security is at the top of our priority list at all times. With data backed up every five minutes, you can rest easy knowing that even in the highest-volume seasons at your immigration law firm, Docketwise is ensuring you have access to your data, that it’s stored safely, and that we’re constantly keeping data privacy and security top of mind.
From an entire library of immigration forms to client questionnaires in multiple languages to an industry-leading set of API integrations, we help you stay current on all your cases, communicate easily with your clients, and otherwise build and manage your firm.
If you want to learn more about Docketwise, schedule a demo at the link below!