In this episode of Immigration Uncovered, host James Pittman provides an update on the evolving regulation of artificial intelligence (AI) use in legal practice. He discusses why AI regulation is important, how existing ethical rules apply, new court rules being implemented, resources for tracking AI regulations, and ways attorneys can get involved in shaping AI policy.
Key Discussion Points:
James Pittman: Welcome to Immigration Uncovered, the Docketwise video podcast. I'm James Pittman. This is episode 31, and today, we're giving an update on the evolving regulation of the use of artificial intelligence in legal practice. Now you might be wondering why this topic is so important. Well, let me break it down for you.
James Pittman: The regulation of AI as it's used in legal practice, and by regulation, we're talking about rules put out by courts, government agencies, and lawyer disciplinary committees, is crucial for several reasons. And I'm also gonna talk about why you, if you're an attorney or legal practitioner, really should get involved in this debate. Firstly, it's about ethics and accountability. AI technologies have the potential to significantly impact decision making and outcomes. Regulation ensures that these technologies are used ethically, transparently, and with accountability to safeguard the rights of individuals and maintain the public's trust in the legal system.
James Pittman: Secondly, you have to consider fairness and justice. Effective regulation of AI helps to prevent biases and discrimination in the algorithms that artificial intelligence uses, and, thereby, it helps to ensure fair treatment and access to justice for all individuals regardless of their background or circumstances. Moreover, regulations provide guidance on how the technology should be implemented to comply with the existing legal framework for practice, and this includes things like privacy laws, intellectual property rights, and data protection regulations, and certainly professional standards. Legal professionals are, of course, held to high standards of conduct and competence. These are codified in the disciplinary rules, or rules of professional responsibility as they're called in some states. Regulation ensures that lawyers and law firms using AI technologies adhere to these standards and thereby avoid unethical or negligent practices. So ultimately, clear regulations instill confidence in AI systems both among legal professionals and the general public. This confidence is essential in order for the widespread adoption of AI technology in law to be accepted and for it to become a valuable tool in legal practice. So in today's episode, we're gonna take a closer look at this evolving landscape. We'll review some online resources where information about regulations, court rules, and ethical opinions is compiled, and also point to some resources where you can follow the debate as it's evolving and get involved. So stay tuned as we explore this important topic and its implications.
James Pittman: Let's get started. So let's just recap some of the ways in which lawyers might use artificial intelligence in their practices. Lawyers can use AI in various ways to streamline their work processes, enhance their efficiency, and provide better service to their clients. Some of the main ways lawyers might use AI include, first, legal research. AI powered legal research is very powerful.
James Pittman: It can be done much more quickly than through other means, through manual means. AI research tools can quickly analyze vast amounts of data, including legal documents, cases, statutes, regulations, and so on to provide to distill it and provide relevant information and insights that practitioners can use for building legal arguments, drafting legal documents, and advising our clients. Secondly, document review and analysis. I mean, back in the day, you know, for example, in large litigations, boxes and boxes of documents would be procured through in the discovery process, and those had to be sorted through and reviewed and analyzed by human beings, which was enormously labor intensive and time consuming. But nowadays, AI algorithms can review and analyze large volumes of documents, contracts, agreements, and the like to identify key clauses, key passages, important words, etcetera, and to identify potential risks, inconsistencies, and extract the relevant information, which all of which saves a ton of time and effort.
James Pittman: The 3rd way is contract drafting and analysis. AI is also very powerful in this arena, and AI powered contract drafting tools can create customized contract and agreement templates, or customized contracts and agreements based on predefined templates, clauses, and parameters. These tools can also analyze existing contracts to identify potential legal issues, discrepancies, and areas where the documents need to be refined and improved. The 4th way that AI can be used in law is predictive analytics. AI algorithms can analyze outcomes of past cases, court decisions, and legal precedents in order to predict likely outcomes in future cases, to assess risk, and to provide strategic insights for use in case strategy, settlement negotiations, and litigation tactics. So you look at what's gone before, and then you can make an intelligent prediction as to the likely outcome in your present circumstances and use those insights. The 5th way is due diligence and compliance. AI technologies can assist lawyers in conducting due diligence investigations, compliance audits, and reviews under various regulatory frameworks by analyzing relevant data, identifying potential non compliance issues and risks, and generating comprehensive reports. The 6th way is legal analytics and insights. AI powered legal analytics platforms aggregate and analyze data from various sources to provide actionable insights, trends, and patterns in legal practice areas, industries, and jurisdictions, and that helps lawyers make informed decisions and strategic recommendations for their clients. Within the law firm as well, analytics about how your practice is running and what the various performance indicators look like can be extracted and analyzed using AI. That enables you to see how your business is doing and in what areas you need to improve and become more efficient. The 7th way is natural language processing, or NLP.
NLP technologies enable lawyers to extract, interpret, and analyze unstructured textual data from emails, memos, transcripts, and other sources to facilitate document categorization, information retrieval, and sentiment analysis. So, overall, we can say that artificial intelligence has the potential to revolutionize the legal profession by automating routine tasks, improving your decision making, reducing costs, and enhancing the overall quality of legal services and client experience. Now let's talk about why, again, regulation of AI is necessary and why it's such a pressing issue. As lawyers are now increasingly incorporating AI technology tools into their work, we have to consider the ethical implications and ensure that these advancements align with the principles of professional responsibility. The legal profession, of course, has always operated under a set of ethical guidelines, often referred to as the ethical canon, that's designed to uphold the integrity of legal services and protect the interests of clients. However, the rapid evolution of AI presents challenges that demand careful consideration. When it comes to the use of AI in legal practice, there's a balance to strike between leveraging the technology to enhance your efficiency and your effectiveness while still upholding the core values of the legal profession.
James Pittman: That's where the ABA model rules of professional conduct, for example, come into play, providing a framework to guide lawyers in navigating ethical considerations. So we're gonna talk about how the ABA model rules come into play. The states, of course, enact their own ethical rules, many times taking from the ABA model rules or adapting them, sometimes taking them without much change, and that becomes your state disciplinary rules. And now we're gonna talk about why the ABA model rules of professional conduct come into play when you're talking about the use of AI and which rules specifically are implicated. By examining how AI intersects with key ethical rules such as competence, diligence, confidentiality, communication, the duty of supervision, and the duty to charge reasonable fees, we can gain insight into the ethical challenges and responsibilities that arise in this age of AI in legal services. Ultimately, the regulation of AI in legal practice isn't just about complying with rules and regulations.
James Pittman: It's ensure it's about ensuring that lawyers uphold the high standards of professionalism and integrity while harnessing the power of the technology to serve their clients effectively. This is what needs to be balanced. So let's delve deeper into these issues. So courts and government agencies might choose to implement their own rules as well regarding AI use and legal practice for several reasons. So even though every state has an ethical canon and even though the ABA has its model rules, you still may you still will see at this time individual courts and some government agencies implementing their own rules regarding AIUs.
James Pittman: And they might wanna do this to make sure that, they're ensuring accountability, transparency, and confidence in their institution's integrity as the profession as a whole grapples with this technology. So even though we've got our ethical rules which, you know, establish the framework, there are still we're we're in a process of continuous change, and there's many, many aspects. This technology is new, and there's many aspects that have not ever been sort of, you know, let's say, pronounced upon by ethics boards and the like. And so, you know, those those issues have not really been quantified, in ethical opinions. Therefore, many pool many courts are taking the initiative to, establish standing orders of their own or local rules, etcetera, about how lawyers can use AI.
James Pittman: So some of the reasons why they might want to do this, include standardization. Right? So by establishing specific requirements, such as mandating that lawyer certify pleadings or or motions that they've created using AI, courts and agencies can create a standardized approach that promotes consistent consistency and clarity, of course, legal proceedings that are being adjudicated in their in their court. This standardization can help to streamline their processes and reduce potential ambiguities. Quality assurance, you know, requiring practitioners to certify, AI generated work and verify that they have personally checked citations, that helps to ensure the quality control and accuracy in legal filings. It helps ensure the quality control and accuracy in legal filings. It helps to mitigate the risk of errors or misleading information. There have been a couple of high profile cases arising in various places, in both federal and state courts where lawyers have, you know, submitted briefs or pleadings that contained citations to fictitious cases that, you know, the the AI system that they the AI tool that they use sort of created, or hallucinated these fictitious citations, and they've submitted that without having checked to make sure that those citations accurately used. That's a that's a massive no no. Every you everyone knows not to do that.
James Pittman: Everyone who's a lawyer and, etcetera know not to do that. Another reason is the ethical oversight. While existing ethical rules do provide general guidance on lawyers' professional conduct, courts and government agencies might feel the need for more specific oversight tailored to AI's unique challenges. So by implementing rules relating to AI use, that allows them to address emerging ethical concerns, such as AI's impact on attorney competence, confidentiality, and the unauthorized practice of law. For example, lawyers have a duty to supervise staff that are working in their law practice.
James Pittman: So you might have a situation where a lawyer is allowing paralegal to use AI tools to help to create arguments and find citations for for use in a pleading or a motion or a brief. Of course, the lawyer has the duty to check all of that themselves before submitting it to the court. That is, the duty of diligence and competence. Additionally, if a person who is not duly admitted to practice or to use an AI tool to create arguments and motions and then, let's say, charge a client a fee for that. That, of course, constitutes the unauthorized practice of law, which is illegal, but also lawyers must never be in a position where they are facilitating unauthorized practice by any person who is not actually, admitted to practice law in that state. So by setting clear expectations and requirements, courts and agencies can ensure that lawyers are upholding ethical standards when using AI technology in cases that they're adjudicating. So, also, you have to consider the effect of the technology and its potential impact on public trust and confidence. So enforcing rules regarding AI use demonstrates a commitment on the part of institutions to maintain the public trust and confidence in the legal system. That is super important for us to have a functioning and trustworthy legal system. We are a nation under law, and that requires that public confidence in the legal system. So these rules reassure litigants, clients, and and other parties involved in the legal proceedings, that when cases are being worked on and, some of the work is being done with AI assistance, rigorous standards of fairness, transparency, competence, ethical conduct, etcetera, are being adhered to. So upholding these principles will foster trust in the legal system and the especially the judiciary's ability to deliver justice impartially and equitably. So overall, courts and government agencies have a vested in promoting accountability and safeguarding integrity. 
So these rules are meant to address potential risks and ensure that the technology complements rather than compromises the legal profession's values and responsibilities. I was looking at, and I wanna share with you, an article in Bloomberg Law that came out recently, on February 28th.
James Pittman: This article by David Lat in Bloomberg Law discussed the current trend of judges implementing AI specific orders and rules in the legal profession. And you should also read chief justice John Roberts, of course, chief justice of the Supreme Court, and his 2023 year end report in which he discusses AI technology. He actually gives an overview of the evolution of technology, which is a pretty interesting read, but the part that concerns us is his section on AI, which highlights both the potential benefits and concerns regarding AI's impact on the legal system. So make sure you read that as well. But getting back to David Lat's article in Bloomberg Law from February 28th, he recapped a couple of really important points. First of all, there are at least 21 federal judges at the trial level who have already issued standing orders regarding how AI should be used in their courts.
James Pittman: The 5th Circuit Court of Appeals is considering a proposal that would require attorneys to confirm that they've checked the accuracy of all the AI generated material, and the 9th circuit court of appeals has actually created an AI committee that could very well end up proposing AI related rules. And the 3rd Circuit has done the same thing, created an AI committee. State courts as well are also convening AI committees to study the issue. So it seems like some judges are really eager to create regulations around AI use. Now David Latt, in this article, argues that they should forbear and hold off doing that.
James Pittman: And the reason why he he argues that is he believes that existing ethical standards and professional responsibility rules already address lawyers' responsibilities in ensuring the accuracy of their work. So if you are having additional local court rules, in Lat's view, that creates an unnecessary burden that these additional AI specific rules would impose on lawyers. And it also, he, argues, sends a negative message about the potential for artificial intelligence. It might scare lawyers, who are not familiar with the technology, who are trying to get up to speed, who are trying to grapple with this huge societal change that we're all going through with the advent of AI. It might scare them away from experimenting with the technology and and really trying to do their best to to get up to speed and learn how to use it in their practice.
James Pittman: So he emphasizes that AI, in his view, is just another tool in the lawyer's tool kit and that any mistakes made with it ultimately fall on the lawyers themselves. Other important, influential thought leaders in law think this as well. We're gonna talk about that in a second. But one important point is that Lat points out the inconsistency and potential conflicts that occur when individual courts adopt their own AI rules; instead, he argues, the judiciary should be advocating for a uniform and coordinated approach to the necessary rulemaking. In his view, overregulation by individual judges is premature and could hinder innovation in the legal profession. One of the other thought leaders who chimed in and was cited in this article is Carolyn Elefant, who was quoted as saying, generative AI is nothing more than the canary in the coal mine. The toxic uses of it are all completely human. Like a canary in a coal mine, an AI disaster points to another problem: incompetent or unethical lawyers. There's nothing wrong with the canary. As for me, my own take is that attorney ethical rules already comprehensively cover how attorneys must practice, including the use of AI tools when extended to those tools.
James Pittman: These ethical rules, such as the AI model rules of professional conduct and state specific attorney ethical codes already set forth guidelines and obligations for attorneys to ensure the accuracy and integrity of their work regardless of the tools that they use. By imposing additional AI specific rules at the court level, there's a risk of duplicating ethical standards and creating unnecessary burdens as I mentioned before. It could lead to inconsistency and confusion across different jurisdictions as each board could potentially adopt its own set of rules, which complicates compliance for attorneys practicing in multiple courts. Instead of creating piecemeal regulations, I also agreed that would be more prudent for the legal profession to rely on existing ethical rules and then consider uniform coordinated approaches to AI regulation at a broader level. So I wanna talk about the next point that I wanna make is about the New York State Bar Association's task force on AI, and they came out with a report. And let's just take a look at that. So I have up, on the screen the guidelines for how specific rules should be followed with regard to AI, and the part that we're really concerned about begins here on page 58. So in the New York State Bar Association's report of the task force on AI usage, they actually give some guidelines. And what their guidelines are is they talk about how specific New York attorney disciplinary rules, and they go through several of the rules, are implicated when you are using AI, and they give guidance about what practitioners should do when using AI to make sure that they remain in compliance with the rule. So let's go through some of these. So the first one is attorney competence, and that's rule 1.1. A lawyer should provide competent representation to a client. 
So their guidance is that an attorney has the duty to understand the benefits, risks, and ethical implications associated with the tools, and they give an appendix, appendix b, which lists a number of resources for how lawyers can get up to speed on what AI is, how it is used in legal practice, and how they can master it. I'll post some links in the caption that goes along with the video of today's episode, both to the report and to those resources. So I urge you to check out the NYSBA report as well as its appendix b for those resources.
James Pittman: Also, scope of representation. They mention that you should consider including in your client engagement letter a statement that AI tools may be used in your representation of the client and seek the client's acknowledgment, and they give an appendix c with some sample language to include. Rule 1.3 is diligence, and they urge lawyers to consider whether using AI tools will aid their effectiveness when representing a client. Next is communication, which is rule 1.4. The guidance is that, while the tools can aid in generating documents or responses, the attorney still must ensure that they maintain direct and effective communication with their client and not rely solely on content generated by AI. With regard to fees, they mention that if using AI tools makes your work on behalf of a client substantially more efficient, then your use or failure to use such tools may be considered as a factor in determining whether the fees you charge for a given task or matter are reasonable. So keep in mind, if you find that you charged a certain fee when you were not using AI, but with AI you can get the work done in 25% of the time, consider whether your fee remains reasonable. Basically, consider the efficiencies that you're creating and how that factors into your fee setting. For confidentiality, they say that when you're using AI tools, you must take precautions to protect sensitive client data and ensure that no tool compromises confidentiality. This is a big one. Right? This is something that a lot of people are worried about when they're using the public facing, free version of ChatGPT or other tools: you certainly should not put any sensitive client data into a tool where the owners of that platform are utilizing the data for training purposes.
And make sure that you carefully check the website and all of the terms of service and terms of use to know whether, with the subscription that you're using, your data will or will not be used for training purposes, and definitely seek a level of subscription and a platform where it will not be used for training if you're going to input sensitive data. Even if your client gives informed consent for you to input confidential information into a tool, you should obtain assurances that the tool provider will protect the client's confidential information and keep all of your client's confidential information segregated. Further, you should periodically monitor the tool provider to learn about any changes that might compromise confidential client information. Next, conflicts of interest. This one, I'll admit, it wasn't immediately clear to me what they were specifically referring to here. They mention that your use of AI tools on a particular case may potentially compromise your duty of loyalty under rule 1.7 by creating a conflict of interest with another client. It's interesting that they think that, and I'm looking forward to further elaboration on what those potential conflicts could be and how exactly they could arise. Rule 1.7 imposes a duty to identify, address, and if necessary, seek informed consent for conflicts of interest that may result from use of AI tools. And it goes on. We're not gonna go through all of these, but let me just highlight a couple more. Supervisory responsibilities: we already talked about how supervising lawyers have to ensure that the staff over whom they have oversight observe the ethical rules. The same goes for subordinate lawyers: if you're a subordinate lawyer utilizing the tools as directed by your supervising attorney, you're still independently required to observe the ethical rules.
Responsibility for nonlawyers: when you're supervising paralegals and legal assistants, you have to make sure that their use of AI complies with the ethical rules. Then there's professional independence of judgment. AI tools are not a person, but nevertheless, you should refrain from relying exclusively on the opinions generated by an AI tool, just as, as a lawyer and a professional, you would not uncritically adopt the opinions and attitudes of another person without using your own independent professional judgment. Also, unauthorized practice of law, which we already talked about. And advertising: attorneys remain responsible for all content that they post, including AI generated content. Those are pretty much the ones I wanted to go over, but definitely review this report. Again, this is the New York State Bar Association's report of its task force on artificial intelligence. The next thing I wanted to go over is the RAILS project. Let's just back up a second. You can see that it's an evolving landscape. Right? You've got task forces in various states, and you've got courts making rules. So it's a good idea to have a place where all of these new rules and decisions are kept track of in a database. And a database exists that's doing that, and it's called RAILS.
James Pittman: It's created and managed by Duke University School of Law's Center on Law and Technology. So let's just take a look at that. Again, this is the RAILS project, an initiative for responsible AI in legal services. What they do is track court decisions from various courts around the country; you can see they have a table here. You're gonna find federal decisions, you're gonna find state decisions, and you're also gonna find orders, decisions, etcetera, from courts in other countries as well. You can see here you've got the Yukon from Canada, you'll see New Zealand in here, etcetera. Some of them are pretty straightforward, where they just say, basically, you have to submit a certification of whether you used AI tools in creating a particular document. And some of them are just guidelines. Look, for example, at a policy or guidelines like this one from the chief administrative hearing officer: they just give you guidelines on the use of AI by hearing officers. Or, let's say, a standing order like that: any party using AI in the preparation of materials must disclose in the filing that an AI tool was used to conduct legal research or was used in any way in the preparation of the document. So that's one example. But some of them, you can see, go beyond that, and this is, I think, where it starts to become a bit much. For example, here in Wichita County, Texas, the court has created its own certification where you have to sign off for every document that you create, and it's fairly strongly worded.
I mean, for example: I have reviewed the court's standing order and will comply with it; all the information created was verified before submission using traditional, non AI legal sources by a human being; I understand that I will be held responsible and subject to possible sanctions under the Texas disciplinary rules of professional conduct and the inherent power of the court, or for contempt of court. I mean, this is where it gets a little bit heavy, I think. So my point is that all of these orders and guidelines are being tracked in this table, and RAILS is a great resource. It's gonna be fascinating to see how this database evolves as time goes by. Alright. So I just wanted to make sure that everybody knows about RAILS. Now, the next thing I wanted to mention was, basically: get involved. Right? My call to action for this whole episode, and why I've gone through all this, is to tell you to get involved. Attorneys can get involved in the debate about how AI should be regulated in legal practice in several ways. The first thing you should be doing is participating in bar association committees. Obviously, as we've seen, many bars now have task forces or committees focused on AI or on technology more generally. Get involved in those, both for your own professional development and to make sure that your voice is heard in this debate. 2nd, engage with other professional organizations. The ABA, local bar associations, and other associations host events, conferences, and webinars on AI regulation and legal technology.
James Pittman: 3rd, submit comments on proposed regulations. Right? When government agencies or regulatory bodies, like federal agencies that are gonna make regulations, open a comment period where new rules are proposed, you should definitely be getting involved and submitting comments to provide feedback and suggestions. 4th, you can write articles, op eds, and blog posts to share your perspective on AI regulation in legal practice, how you're using AI in practice, and what you think about regulation. By publishing your insights in legal publications or media outlets, you as an attorney can contribute to the public discourse and raise awareness of the issues. 5th, get involved in advocating for the development of ethical guidelines more broadly, within your firm, within your professional network, and within industry associations, and promote ethical standards. By promoting ethical standards, attorneys can help to ensure responsible and ethical AI adoption by their colleagues. So be an advocate for ethical standards. 6th, collaboration with the legal tech companies themselves. Attorneys can collaborate with legal tech companies and startups that develop AI tools for legal practice. You can use tools that are in beta and give feedback; that's enormously helpful to those startup companies. You can test their products and give them your insight, your feedback, share your expertise, and that's another way that practitioners can influence the development of AI solutions in a way that aligns with legal ethics and regulatory requirements. The next way is by educating your clients and your colleagues in general. Educate your clients, your colleagues, and your staff about the benefits and the risks of AI in legal practice and the importance of regulatory compliance. By fostering awareness and understanding, attorneys can empower others to make informed decisions about AI adoption.
So overall, active engagement and advocacy within the legal community and within the industry are essential for ensuring that your voice is heard in this ongoing debate. Well, that's pretty much what I wanted to go over for today. So as I wrap up, I wanna leave you with some closing thoughts. It's clear that the regulation of AI in legal practice is a topic that's not going away anytime soon, and as attorneys, it's crucial that we stay informed and actively engage in the ongoing debate. So whether it's joining a committee or a task force, participating in an event, or advocating for guidelines in the industry, there's plenty of ways for us to make our voices heard. By working together and staying proactive, we will help shape the future of AI in the legal profession and ensure that it's used responsibly, ethically, and in the best interest of clients. So thank you for tuning in to Immigration Uncovered, and until next time. Take care.