CHAPTER 12: PROTECTING PRIVACY AND SECURITY IN THE AGE OF AI
Did you know that in today's world your personal data and client information can be weaponized against you without your knowledge? This isn't a plot from a dystopian novel; it's the reality we face as AI continues to infiltrate every facet of our lives, especially in the legal profession.
The Cambridge Analytica scandal was a wake-up call, showing how easily AI can exploit personal information on a massive scale. As lawyers, the duty to protect privacy and ensure security has never been more critical. This chapter dives into the urgent need for robust privacy measures and the ethical use of AI in law, equipping you with the knowledge to navigate this brave new world responsibly.
Protecting Both Your Firm's and Your Clients' Privacy Must Be a Primary Concern
AI technologies, from machine learning algorithms to natural language processing, are revolutionizing how we practice law. They assist in document review, predictive analytics, legal research, and even in drafting contracts. However, these advancements come with heightened risks. The vast amounts of data processed by AI systems include sensitive client information, making privacy and security paramount concerns.
As mentioned above, one of the most notorious and eye-opening cases underscoring the intricate intersection of AI, privacy, and security is the Cambridge Analytica scandal. In 2018, the world was shaken by revelations that the political consulting firm Cambridge Analytica had illicitly harvested personal data from up to 87 million Facebook users without their explicit consent. Utilizing sophisticated AI algorithms, the firm analyzed this vast trove of data to create detailed psychographic profiles, which were then used to craft highly targeted political advertisements and influence voter behavior in elections, including the 2016 U.S. Presidential election and the Brexit referendum.
This scandal brought to light several critical issues. First, it highlighted how AI can be wielded to manipulate public opinion on a massive scale, raising profound ethical and societal concerns. Second, it showcased the vulnerabilities in data privacy, as the data was obtained through a seemingly innocuous personality quiz app, which mined not only the users' own data but also that of their Facebook friends. Finally, the Cambridge Analytica scandal emphasized the urgent need for robust privacy protections and stricter regulatory oversight.
The fallout from the scandal led to widespread public outcry, multiple governmental investigations, and significant financial penalties for Facebook. It also served as a wake-up call for lawmakers worldwide, lending new urgency to stringent data protection laws like the General Data Protection Regulation (GDPR) in Europe, which took effect just months later. For legal professionals, this case underscores the paramount importance of safeguarding client data, ensuring transparency in data usage, and adhering to legal standards to prevent misuse and protect individual privacy in the age of AI.
Before we wrap up this section, let's pivot from the labyrinthine world of Cambridge Analytica-style cases to a privacy and security frontier that's quietly brewing: voice authentication. Amanda Johnstone breaks it down quickly and brilliantly in this short video.
So, here's the scoop: several big players—from private banks to public agencies—are now letting users secure their accounts with just their voice. I've noticed several AI services rolling out voice-activated logins for law firms and clients. While this sounds cutting-edge, Amanda's video reminds us to hit pause and think it through. Be cautious. Set up private safewords. Alongside the tips in this chapter, make sure to dive into all the insights I share on dealing with AI-induced hallucinations and deep fakes in Chapter 13.
“Oops! How Clients Might Accidentally Waive Privilege When Seeking AI’s Help”
The attorney-client relationship thrives on trust, built on the promise that what’s shared between a lawyer and their client remains private. This confidentiality is protected by attorney-client privilege and the work product doctrine, both designed to keep sensitive information safe from prying eyes. But what happens when a client, seeking clarity, shares this confidential information with an AI system like ChatGPT or Microsoft Copilot? The answer isn’t as straightforward as you might think—and the risks could be significant.
The Power of Privilege and Work Product
Attorney-client privilege protects communications between a lawyer and their client, ensuring that anything discussed remains confidential. The work product doctrine safeguards materials prepared in anticipation of litigation, shielding an attorney’s strategy from the opposing side. These protections are crucial, but they’re not foolproof—especially in today’s digital age.
Before AI, attorney-client or attorney work product privileges were typically waived by a client when they voluntarily disclosed confidential communications or protected documents to a third party, either intentionally or inadvertently, thereby breaking the confidentiality that the privilege relies upon. For instance, if a client shared privileged information publicly or with a friend, family member, or non-legal advisor, or if they allowed a third party to review documents prepared in anticipation of litigation, this would generally result in the loss of those protections in legal proceedings.
The AI Conundrum: Real-World Examples
Picture this: A client receives a complicated legal document from their lawyer—a PDF filled with dense legalese. Frustrated, they upload the PDF to ChatGPT and ask, “What does this mean?” Or perhaps, during a phone call, their lawyer provides key advice, which the client then dictates into an AI for a more understandable summary. Maybe they type in a snippet from a legal brief or an email, asking the AI to “translate” it into plain English.
In these moments, the client might unknowingly waive the protections they thought were ironclad. Here’s why:
Third-Party Disclosure: Sharing confidential information with most AI platforms can be seen as disclosing it to a third party, which may waive attorney-client privilege.
Loss of Confidentiality Control: Clients often don’t realize how AI systems store and process data. Even if the client has no intention of waiving privilege, the very act of sharing the information could be enough to compromise confidentiality.
Unintentional Waiver: The courts might interpret these actions as a voluntary waiver of privilege, even if the client didn’t intend to give up their legal protections.
The Consequences: More Than Just a Slip-Up
What’s at stake when privilege is waived? First, the information could be subpoenaed and used in court. Second, if the work product doctrine is compromised, the opposing side might gain insights into the legal strategy. Finally, the client could face increased legal risks, including being forced to disclose further communications.
Here are three specific suggestions you can use as a lawyer to educate your clients and prevent them from inadvertently disclosing privileged information when using AI:
- Include a Clear Warning in Your Initial Engagement Letter
In your initial engagement letter or welcome packet, include a section specifically addressing the risks of sharing confidential information with AI systems. Explain the potential consequences of inadvertently waiving attorney-client privilege or work product protection and provide clear examples to make the point resonate.
This sets the tone from the beginning, ensuring that clients are aware of the importance of keeping communications confidential and understand the boundaries of what can and cannot be shared with third-party systems, including AI.
- Add a Provision in the Attorney-Client Retainer Agreement
Incorporate a specific clause in your retainer agreement that explicitly warns clients against sharing any privileged communications or work product with AI systems or third-party services without prior consultation with you. This clause should also outline the potential legal consequences of such actions.
By including this in the retainer, you clearly advise your client, in writing, to maintain confidentiality and reinforce the seriousness of the issue. It also provides you with a paper trail in case waiver is claimed by the other side or a third party.
- Provide Ongoing Education Through Client Communications
Regularly remind your clients about the importance of confidentiality in your communications, such as through emails, newsletters, or during meetings. You might include a brief “Confidentiality Tip” section in your email signature or periodically send out reminders about the risks of using AI for legal clarification or advice without consulting you first.
Clients may not always remember the initial warnings given at the start of the engagement. Regular, ongoing reminders keep the issue top-of-mind and help prevent mistakes, especially as they navigate complex legal matters.
The Takeaway: Think Before You Share
AI promises quick and easy answers. It’s tempting for clients to use these tools to demystify complex legal matters. But clients need to be cautious. Before uploading that PDF or typing in that legal advice, they should consult their lawyer about the potential risks.
For lawyers, this new reality is a wake-up call to educate clients about the importance of maintaining confidentiality. Clear guidelines on what can and cannot be shared with AI are essential to protecting the integrity and confidentiality of the attorney-client relationship.
Navigating the Complex Landscape of Privacy Laws
In the United States, privacy laws are a patchwork of state and federal regulations, each with its own nuances. As legal professionals, it's crucial to understand these laws to ensure compliance and protect client data.
As an example, the California Consumer Privacy Act (CCPA) is a groundbreaking state law that grants California residents new rights regarding their personal information. It mandates that businesses provide consumers with information about the data collected on them and allows consumers to opt out of the sale of their data. For lawyers, this means ensuring that any AI systems used are compliant with CCPA requirements, including providing transparency and safeguarding data.
The Health Insurance Portability and Accountability Act (HIPAA) is a critical federal law that protects sensitive patient information. AI systems used in healthcare must be designed to comply with HIPAA's stringent privacy and security standards, ensuring that electronic health information is protected against breaches.
Internationally, the General Data Protection Regulation (GDPR) sets rigorous requirements for any business operating within the EU or handling the personal data of individuals in the EU, emphasizing transparency, data minimization, and the right of individuals to control their personal information. It mandates strict consent protocols, breach notification requirements, and hefty fines for non-compliance.
Beyond the GDPR, the EU has introduced the Artificial Intelligence (AI) Act, a groundbreaking and comprehensive legal framework for AI, the first of its kind globally. This legislation aims to ensure that AI systems are not only innovative but also trustworthy and safe. It underscores the importance of respecting fundamental rights, ethical principles, and democratic values. The AI Act categorizes AI systems based on their risk levels, imposing stricter regulations on high-risk applications, such as those used in critical infrastructure, law enforcement, and employment.
The AI Act also seeks to balance regulation with innovation, fostering a competitive environment for AI development while safeguarding against the potential harmful effects of AI. It promotes transparency by requiring clear documentation and accountability measures, ensuring that AI systems are explainable and auditable. Additionally, the Act emphasizes the need for robust governance frameworks to oversee AI deployment and address ethical concerns.
Tech lawyer Franklin Graves recently shared a post on LinkedIn about the officially published "EU AI Act." Franklin recommends that individuals or companies deploying or implementing AI systems (referred to as "deployers" under the Act) review and understand three key obligations: (1) implement a risk assessment system, (2) adopt transparency principles and processes, and (3) design responsible AI governance. Follow Franklin for key updates.
Understanding and navigating these international laws is crucial for legal professionals. Staying informed about these regulations helps lawyers advise clients on compliance, mitigate legal risks, and harness the benefits of AI responsibly. As AI continues to evolve, so will the legal frameworks that govern it, making continuous education and adaptation essential for legal practitioners in this dynamic field.
Best Practices for Safeguarding Privacy and Client Confidentiality
In addition to what we've already talked about, it's important to double down on the fact that client confidentiality is the cornerstone of legal practice. AI tools, while powerful, must be used in ways that do not compromise this fundamental principle. Whatever AI platform or service you incorporate into your practice, do a deep dive into its terms of service (TOS), especially when it comes to encryption, privacy, and data sharing.
For example, encryption is a vital security measure. Ensure that all client data processed by AI systems is encrypted both at rest (stored data) and in transit (data being transferred). This adds a layer of protection against unauthorized access.
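To make this concrete, below is a minimal sketch of what at-rest encryption can look like, using Python's open-source cryptography library. The sample data and key handling are illustrative assumptions only; real firms should keep keys in managed storage, such as a hardware security module or a cloud key management service, never next to the encrypted files.

```python
# A minimal sketch of encrypting client data "at rest" with symmetric
# encryption, assuming the open-source "cryptography" package
# (pip install cryptography). Sample data and key handling are
# illustrative only; store real keys in an HSM or a cloud KMS.
from cryptography.fernet import Fernet

# Generate a key once; in production, keep it in managed key storage,
# never alongside the encrypted files.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a sensitive document before writing it to disk.
plaintext = b"Privileged and confidential: draft settlement terms..."
ciphertext = cipher.encrypt(plaintext)

# Only code holding the key can recover the original.
assert cipher.decrypt(ciphertext) == plaintext
```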
Restrict access to AI systems and data to authorized personnel only. Use multi-factor authentication (MFA) to add an extra layer of security. Regularly update access controls to reflect changes in staff and roles.
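For readers curious about what authenticator-app MFA actually does under the hood, here is a minimal sketch of time-based one-time-password (TOTP) verification using the open-source pyotp library. The enrollment flow shown is an illustrative assumption; production MFA should come from a vetted identity provider, not hand-rolled code.

```python
# A minimal sketch of TOTP verification, the mechanism behind most
# authenticator-app MFA, assuming the open-source "pyotp" package
# (pip install pyotp). The enrollment flow is illustrative only.
import pyotp

# Per-user secret generated at enrollment and shared with the user's
# authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# At login, the user types the six-digit code from their app.
user_code = totp.now()  # stand-in for the user's input
print("MFA check passed:", totp.verify(user_code))
```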
Choosing secure AI platforms is also critical. Look for platforms with strong security credentials. Platforms that offer robust security features, such as end-to-end encryption, secure APIs, and regular security updates, are preferable. Verify their compliance with relevant privacy laws to ensure they meet the necessary standards for handling sensitive information.
Interestingly, AI is not only a potential risk but also a powerful tool for enhancing cybersecurity. AI systems can detect and respond to threats more quickly and accurately than traditional methods. For example, AI-driven cybersecurity systems can analyze vast amounts of data to identify patterns indicative of cyber threats. An AI system can monitor network traffic and flag unusual activities that might signify a breach. This proactive approach can significantly enhance your firm’s security posture.
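As a rough illustration of this kind of anomaly detection, the sketch below trains scikit-learn's IsolationForest on synthetic "normal" traffic features and flags an unusual burst. The data, features, and thresholds are illustrative assumptions, not a production security tool.

```python
# A minimal sketch of AI-style anomaly detection on network traffic,
# using scikit-learn's IsolationForest. The synthetic features
# (bytes transferred, requests per minute) are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline "normal" traffic: modest transfer sizes and request rates.
normal_traffic = rng.normal(loc=[500, 30], scale=[100, 5], size=(500, 2))

# Learn what normal looks like for this network.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# A suspicious burst: a huge transfer at an unusual request rate.
suspect = np.array([[50_000, 300]])
print("Flagged as anomaly:", model.predict(suspect)[0] == -1)  # True
```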
Integrating AI tools with existing cybersecurity protocols creates a multi-layered defense strategy. Using AI to augment human capabilities provides real-time threat intelligence and automated responses to potential threats. This synergy between human expertise and AI-driven insights ensures a more robust and responsive security framework.
Risk decision makers may want to spend some time with the MIT AI Risk Repository, a living database of documented AI risks. If you're looking for a firm with experience in these kinds of matters, you may want to reach out to Alex Holden, the founder and Chief Information Security Officer of Hold Security, LLC, or one of the many experts available at Experts.com.
Let's Take the New Apple Vision Pro as an Example
I'm a big fan of the metaverse and AI. While the new Apple Vision Pro and similar headsets use AI to make them better and faster, that's not what I want to focus on with this example.
What I want you to know are the potential privacy side issues relating to this product. If privacy is a concern even for a polished mainstream product like the Vision Pro, it's a good idea to use this real-world example to set the tone moving forward with new AI platforms and services.
The Vision Pro is an impressive leap in spatial computing, with advanced hand and eye tracking controls and seamless integration into Apple's ecosystem, but the device and others like it raise significant privacy concerns. It tracks and collects extensive personal data, including user activity, eye movement, profile information, device details, environmental context, and even whatever is sitting on the desk in front of it. This data is processed in real time and, according to my understanding of the TOS and licensing agreements, can be shared not only with Apple but also with third-party companies worldwide. Comparisons to the older Oculus Quest 2's data collection practices, which reportedly involve sharing some two billion data points every 20 minutes, suggest the much newer Vision Pro's data gathering is far more extensive.
Apple's Terms of Service and Privacy Agreements outline that user data, including app usage, browsing history, and diagnostic data, can be shared with affiliates, service providers, and partners globally. While the technology is groundbreaking, it's crucial for users, especially those in business and law, to review privacy settings and be mindful of their surroundings when using such devices to mitigate potential privacy risks. I wrote more about my Vision Pro privacy concerns (which frankly apply to any modern headset) on LinkedIn in "Apple Vision Pro: A Privacy Nightmare in the Making?"
Do This Right Now
Lawyers and law firms must diligently scrutinize the Terms of Service (TOS) and licensing agreements of all existing vendors and new AI services and platforms. It is imperative to understand the terms and conditions and the extent of data sharing involved, both privately and publicly.
Each lawyer should verify the settings on their devices, like their phones and tablets, to ensure that only the intended data is shared with AI service providers. For example, I shared a post on X (formerly Twitter) about X recently unilaterally activating an option allowing it to “utilize your X posts as well as your user interactions, inputs, and results” with its AI, Grok, for training and fine-tuning purposes. Furthermore, the company shared that “your interactions, inputs, and results may also be shared with our service provider xAI for these purposes.”
To navigate these complexities, involving your IT department can provide invaluable assistance. It’s crucial to remember that existing TOS and licensing agreements might already permit such unilateral actions (they're often written broadly enough to anticipate future uses), underscoring the importance of thorough review and continuous monitoring.
Balancing Innovation with Ethical Considerations
While leveraging AI's benefits, it's essential to consider the ethical implications of its use. Striking a balance between innovation and privacy protection is crucial. Ensuring that AI systems are transparent in their operations is fundamental. Clients should know how their data is being used and for what purposes. Additionally, there must be accountability mechanisms in place. If an AI system makes an error, there should be clear processes for addressing and rectifying the issue.
Developing an AI ethics policy for your firm can provide a structured approach to ethical AI usage. This policy should outline principles for the ethical use of AI, including transparency, accountability, and respect for privacy. Training your staff on this policy ensures consistent and ethical AI practices throughout the organization.
Corporate Boards
In the insightful article "5 AI Risks For Corporate Boards To Examine," attorneys James Gatto and Tina Garbett of Sheppard Mullin emphasize the importance of proactive governance and address five pivotal AI risks boards need to consider to ensure responsible and effective AI deployment. In summary, they discuss:
Algorithmic Disgorgement: Boards must be vigilant about the potential for algorithmic disgorgement, where companies could be required to surrender profits derived from flawed or biased AI algorithms. This calls for rigorous oversight and continuous monitoring of AI models to ensure ethical standards and fairness.
Tainting of Proprietary Software Developed With AI Code Generators: The use of AI code generators can inadvertently taint proprietary software, raising legal and compliance issues. Boards should implement strict protocols to manage and audit the integration of AI-generated code into their proprietary systems to avoid contamination and protect intellectual property rights.
Inability to Obtain Intellectual Property Protection for AI-Generated Content: AI-generated content often faces challenges in obtaining intellectual property protection. Boards need to stay informed about the evolving legal landscape and advocate for policies that recognize and protect AI-created works, ensuring their company’s innovations are legally safeguarded.
Loss of Valuable Trade Secrets: The use of AI can expose companies to the risk of losing valuable trade secrets. Boards should enforce robust cybersecurity measures and data protection strategies to safeguard their proprietary information against unauthorized access and breaches.
Avoiding Bias and Other Issues on the FTC's Watchlist: Bias in AI systems is a critical concern that has attracted regulatory scrutiny from the FTC. Boards must ensure that their AI systems are regularly audited for bias and comply with regulatory guidelines to avoid legal repercussions and foster trust in their AI applications.
You can read the full article here.
The Future of Privacy and Security
Looking ahead, there's a lot to take into consideration. The future of AI in the legal profession promises even more exciting advancements. Faster chips and quantum computing, for instance, hold the potential to revolutionize AI capabilities with unprecedented computational power. However, quantum computing also introduces new security risks, particularly in cryptography. Preparing for this shift involves staying informed about quantum-resistant encryption methods and understanding the potential impacts on AI systems.
The regulatory landscape will also continue to evolve, with more regulations focusing on AI ethics, transparency, and accountability. Engaging in policy discussions and contributing to the development of AI regulations that balance innovation with privacy and security will be essential. Continuous learning and adaptation are key to thriving in an AI-driven legal landscape. Attending AI and cybersecurity seminars, participating in legal tech forums, and staying updated with the latest research are all vital practices for maintaining a competitive edge.
The integration of AI in the legal profession is not just inevitable but also incredibly exciting. By understanding and addressing the privacy and security challenges, you can leverage AI's benefits while protecting your clients' sensitive information and maintaining their trust.
Remember, the key to success in this AI-driven era is a proactive approach. Conduct regular audits, stay informed about legal developments, and implement robust security measures. With these practices, you will not only safeguard privacy but also position yourself at the forefront of legal innovation.
As we move forward, imagine AI not just as a tool but as a double-edged sword, capable of safeguarding privacy while also posing significant threats. Now, brace yourself for an exploration into the world of AI-generated hallucinations and deep fakes—where reality blurs and the stakes for truth and authenticity in the legal realm escalate dramatically.
Picture a courtroom where convincing but entirely fabricated evidence can be created with a few clicks, or where a client’s identity can be hijacked through sophisticated AI mimicking. The next chapter will unravel these challenges, diving into the intricacies of how these digital deceptions are crafted and deployed. We’ll explore real-world cases that highlight the profound implications for justice and ethical practice, and guide you through the strategies to discern and counter these threats. Prepare to navigate this murky digital frontier with clarity and confidence, ensuring that justice remains clear and unclouded.