CHAPTER 10: NAVIGATING AI ETHICAL CHALLENGES

What does it mean to practice law ethically in an age where machines can think and make decisions? As we embrace AI's transformative potential, we find ourselves standing at the crossroads of innovation and responsibility. The same technology that can streamline our practices and enhance decision-making also demands that we scrutinize its impact on our core values. In this chapter, we’ll explore how we, as legal professionals, can harness AI's power while steadfastly upholding the ethical principles that define our profession.

The American Bar Association

Bar associations at the state, national, and international levels have started to provide guidance on the ethical use of AI. For instance, the American Bar Association (ABA) emphasizes the need for lawyers to understand the technology they use and to ensure that it enhances, rather than undermines, their professional responsibilities. The ABA Model Rules of Professional Conduct provide a framework for integrating AI while maintaining ethical standards, particularly in areas such as competence (Rule 1.1), confidentiality (Rule 1.6), and supervision (Rules 5.1 and 5.3).

In fact, it's my understanding that the ABA has taken several key positions and actions regarding AI:

1/ The ABA formed a Task Force on Law and Artificial Intelligence to examine the impact of AI on the practice of law, including its ethical implications for lawyers.

2/ The ABA House of Delegates adopted Resolution 604 at its 2023 Midyear Meeting, which outlines guidelines for the development and use of AI. These guidelines call for human oversight and control of AI systems, accountability for the consequences of their use, and transparency and traceability in how they are built and deployed.

3/ The ABA urges Congress, federal agencies, state legislatures, and regulators to adhere to these guidelines when creating laws and standards associated with AI.

4/ The ABA recognizes the need for lawyers to stay informed about AI developments, as they directly affect the legal profession. This includes addressing the ethical and legal issues arising from AI use in law practice.

5/ The ABA is actively working to provide resources, webinars, and educational materials to help lawyers understand and navigate the complexities of AI in the legal field.

6/ The organization acknowledges both the potential benefits and risks of AI technology, emphasizing the importance of responsible development and use.

In her recent blog post, "Practical and Adaptable AI Guidance Arrives From the Virginia State Bar," Nicole Black shares her thoughts on the notably concise guidance the Bar offers on confidentiality, notice to clients, attorney fees, and other related issues. I believe Nicole's post gives a good overview of the current AI legal and ethical issues we all need to pay attention to.

In summary, the positions I’m seeing advanced about AI, the law, and ethics are centered on cautious engagement—recognizing its potential while advocating for responsible development, human oversight, accountability, and transparency in AI systems, particularly as they relate to the client’s best interests. With that in mind, let’s leverage this technology to be faster and better, and let’s do so with our eyes wide open.

The Ethics of Bias

One of the most significant ethical challenges in using AI in law is the potential for bias. AI systems learn from data, and if that data reflects existing biases, the AI can perpetuate and even amplify them. This can lead to unfair outcomes, particularly in areas such as criminal justice, where biased data can result in disproportionate sentencing or wrongful convictions.

A well-known example of AI bias is the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) recidivism algorithm used in the criminal justice system. Studies, most notably ProPublica's 2016 "Machine Bias" investigation, have found that COMPAS tends to flag Black defendants as likely to reoffend at a higher rate than white defendants. Cases like this highlight the importance of scrutinizing the data and algorithms we use to ensure they do not reinforce societal biases.

To mitigate this risk, use diverse and representative data sets to train AI systems; this helps minimize the risk of embedding existing biases into the algorithms. Conduct regular audits of AI systems, reviewing the data, the algorithms, and the outcomes, to identify and address any biases. And be transparent with clients and the court about the use of AI and the measures taken to ensure fairness and accuracy.
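To make the audit recommendation concrete, here is a minimal Python sketch of one check an audit might run: computing favorable-outcome rates by group and applying the "four-fifths" disparate-impact rule of thumb familiar from U.S. employment law. The data, group labels, and threshold are purely hypothetical, and a real audit would examine far more than a single ratio.

```python
from collections import defaultdict

def selection_rates(records):
    """Compute the rate of favorable outcomes per group.

    records: list of (group, favorable) pairs, where favorable is a bool.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, fav in records:
        totals[group] += 1
        if fav:
            favorable[group] += 1
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group's rate to the highest group's rate.

    Values below 0.8 are a common red flag (the "four-fifths rule")."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group, favorable_outcome)
data = [("A", True)] * 60 + [("A", False)] * 40 + \
       [("B", True)] * 30 + [("B", False)] * 70

rates = selection_rates(data)
print(rates)                          # {'A': 0.6, 'B': 0.3}
print(disparate_impact_ratio(rates))  # 0.5 -> below 0.8, warrants review
```

A result below the 0.8 threshold does not prove bias on its own, but it flags the system for the kind of deeper review of data, algorithms, and outcomes described above.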

Ensuring Transparency and Accountability

Transparency is crucial in building trust with clients, the court, and the public. When using AI, it’s important to be open about how the technology works, the data it uses, and the decisions it makes. This transparency helps to demystify AI and ensures that all parties can have confidence in its use.

As we discussed in detail in Chapter 8, when it comes to e-discovery, AI tools are used to sift through vast amounts of data to identify relevant documents. Being transparent about how these tools work and the criteria they use to identify relevant documents helps to build trust with clients and opposing counsel. It also ensures that the process is fair and that any potential issues can be addressed promptly.

Choose AI systems that provide explainable results. This means the system can provide insights into how it arrived at a particular decision or recommendation. Maintain thorough documentation of the AI systems used, including their data sources, algorithms, and decision-making processes. Keep clients informed about the use of AI in their case, including its benefits and limitations.

Accountability means taking responsibility for the actions and decisions made by AI systems. As lawyers, it’s essential to ensure that the use of AI does not absolve us of our professional responsibilities. We must remain accountable for the outcomes and ensure that AI is used ethically and effectively.

AI tools for contract analysis can significantly streamline the review process and identify potential issues more efficiently than traditional methods. However, the lawyer using the tool remains accountable for any oversights or errors. This underscores the importance of understanding the technology thoroughly and ensuring its proper use. We discussed specifics in Chapter 6.

Ensure that there is always human oversight of AI systems and their results. Lawyers should exercise due diligence when reviewing and verifying AI-generated results before making decisions or recommendations.

Stay updated on the latest developments in AI and its ethical implications. This includes attending training sessions, workshops, and conferences. Establish or participate in ethical review boards within your firm or organization to evaluate the use of AI and ensure it aligns with ethical standards.

The ethical use of AI is not confined to state and national borders. As the legal profession becomes increasingly global, it’s important to consider international ethical standards and how they apply to the use of AI. Different countries and regions may have varying approaches to AI ethics, and understanding these differences is crucial for lawyers working on international cases.

The European Union (EU) has taken a forward-thinking approach to regulating AI. Its AI Act is a groundbreaking piece of regulation, marking the first extensive AI framework established by a major global regulator.

The Act categorizes AI applications into three levels of risk. The first category includes applications and systems posing an unacceptable risk, like government-operated social scoring similar to practices in China, which are prohibited. The second category covers high-risk applications, such as tools that scan CVs to rank job candidates, which must adhere to specific legal standards. The third category consists of applications that are neither banned nor classified as high-risk, and these remain mostly unregulated.
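The three tiers described above can be sketched as a simple lookup, purely for illustration. The tier labels and example use cases below are simplified assumptions drawn from this paragraph; this is not a legal classification tool, and any real determination would turn on the Act's actual text.

```python
# Illustrative only: simplified tier labels and example use cases,
# paraphrasing the three categories described in the text.
UNACCEPTABLE = "unacceptable risk (prohibited)"
HIGH = "high risk (must meet specific legal standards)"
MINIMAL = "neither banned nor high-risk (mostly unregulated)"

RISK_TIERS = {
    "government social scoring": UNACCEPTABLE,
    "cv screening for hiring": HIGH,
    "spam filtering": MINIMAL,
}

def risk_tier(use_case: str) -> str:
    """Look up a use case's tier; unknown uses default to the
    residual minimal-risk category in this toy model."""
    return RISK_TIERS.get(use_case.lower(), MINIMAL)

print(risk_tier("Government social scoring"))
print(risk_tier("CV screening for hiring"))
```

The point of the sketch is the structure, not the mapping: the Act sorts applications by risk first, and the obligations follow from the tier.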

It's a good idea to keep abreast of international developments in AI ethics and regulations. This includes monitoring updates from bar associations, regulatory bodies, and international organizations. Work with colleagues and experts in other jurisdictions to understand and navigate the ethical landscape of AI in different regions.

Safeguarding Client Confidentiality and Data Security

AI tools often require access to sensitive client information, so maintaining client confidentiality and data security is paramount. For example, ensure that AI systems comply with data protection regulations such as the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

Law firms integrating AI-powered document management systems must prioritize the security and ethical use of these technologies. Implementing robust cybersecurity measures, such as encryption and access controls, is essential to protect client data and maintain confidentiality.

Ensure all data is encrypted and access is restricted to authorized personnel only. Use multi-factor authentication to add an extra layer of security.

Consider conducting regular training sessions for all staff members to educate them on cybersecurity protocols and the ethical use of AI. Include practical exercises and simulations to reinforce learning.

Maintain a vetted list of approved AI tools and services, and regularly review and update it to ensure compliance with the latest security and ethical standards. Involve both IT and legal professionals in evaluating new AI technologies; this collaborative approach ensures that both technical and ethical considerations are addressed.
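One way a firm might operationalize a vetted-tool list is a simple register keyed by the date of each tool's last security and ethics review. The tool names and the one-year review interval below are hypothetical, chosen only to illustrate the idea.

```python
from datetime import date

# Hypothetical register: tool name -> date of last security/ethics review.
# The names and the review cadence are illustrative, not recommendations.
APPROVED_TOOLS = {
    "DocReviewAI": date(2025, 1, 15),
    "ContractScan": date(2024, 6, 1),
}
REVIEW_INTERVAL_DAYS = 365  # assumed annual re-review policy

def tool_status(name, today):
    """Report whether a tool is approved, unapproved, or overdue for review."""
    if name not in APPROVED_TOOLS:
        return "not approved"
    age = (today - APPROVED_TOOLS[name]).days
    return "approved" if age <= REVIEW_INTERVAL_DAYS else "review overdue"

print(tool_status("DocReviewAI", date(2025, 6, 1)))   # approved
print(tool_status("ContractScan", date(2025, 8, 1)))  # review overdue
print(tool_status("UnknownBot", date(2025, 6, 1)))    # not approved
```

Even a register this simple forces the two habits the text recommends: nothing gets used before a review, and nothing stays approved forever.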

Enhancing Access to Justice

AI has the potential to enhance access to justice by providing affordable and efficient legal services. We take a deep dive into the related access-to-justice issues in Chapter 14.

For example, AI-powered chatbots can offer preliminary legal advice to individuals who cannot afford a lawyer. However, it’s important to ensure that these tools provide accurate and reliable information. Legal aid organizations can leverage AI to screen clients and provide initial guidance, thus increasing their capacity to serve more clients and improve the quality of assistance provided.

Ensure that AI tools used in legal aid are accurate and reliable. This includes regularly updating the data and algorithms. Provide training for legal aid staff on how to use AI tools effectively and ethically. Continuously monitor the performance of AI tools to ensure they meet ethical standards and client needs.

Responsible Use in Litigation and Trial Preparation

AI can assist in various aspects of litigation and trial preparation, from legal research to predictive analytics. As mentioned in Chapters 7 (research), 8 (e-discovery), and 9 (litigation), it’s important to ensure that AI tools are used responsibly and do not undermine the fairness of the legal process. Predictive analytics can help lawyers assess the likely outcomes of cases based on historical data. However, this tool should supplement human judgment rather than replace it.

By this I mean critically evaluating the results provided by AI tools and considering multiple perspectives. Be transparent with clients about the use of AI in their case and the basis for any predictions or recommendations.

As we embrace the evolving landscape of AI, it’s clear that technology is not just a tool but a catalyst for rethinking our legal and ethical frameworks. We've delved into the ethical complexities of AI in law, but our journey is just beginning. Now, let's shift our focus to another critical frontier: the intersection of AI and intellectual property. This is where innovation meets protection, and understanding the balance between creativity and rights will be crucial. So, let's explore how AI is redefining IP and what it means for the future of legal practice.


The "AI In Law" podcast complements this book. It's your quick dive into how AI is transforming the practice of law. In just seven minutes, get the insights you need to stay sharp and ahead of the curve. Listen on Apple Podcasts, Spotify, and YouTube.