
Choosing ethical AI: What lawyers should expect from their technology providers

February 9, 2026 | 3 min read
By Niki Black

Key takeaways

When evaluating AI tools for your law firm, bar associations emphasize four core compliance requirements:

  1. Data security: Ensure vendors use encryption (TLS 1.2+ in transit, AES-256 at rest) and maintain SOC 2 compliance

  2. No model training: Verify that your confidential client data will never be used to train third-party AI models

  3. Human oversight: Maintain attorney responsibility for reviewing and verifying all AI-generated output, including case citations

  4. Ongoing evaluation: Continuously assess AI tools as technology evolves to ensure continued compliance with ethical obligations

In a recent blog post, we shared the AI Principles that guide our team as they expand AI functionality across our software. That post explains the 8am™ approach to protecting the security, data privacy, and professional obligations of our customers, all of which are essential for law firm leaders to consider when making AI implementation decisions. 

For lawyers researching generative AI tools, understanding a provider’s AI development philosophy is an important part of ethical compliance. This software will be used daily in your firm, so vetting providers is an essential part of the AI adoption process.

Ethics guidance

In 2026, more than three years after the general release of ChatGPT, there is no shortage of AI adoption guidance. Bar associations across the country have issued ethics opinions that provide a clear path to compliant AI implementation. 

For example, both Texas (Opinion 705) and Oregon (Formal Opinion 2024-205) have addressed the ethical issues lawyers should consider when researching and choosing AI for their firms. Both opinions cover a wide range of topics, including technology competence, the obligation to carefully review AI output, data security requirements, AI model training considerations, and the continuing need to evaluate evolving technology.

Data security is required

Understanding the steps a provider takes to ensure data security is a key part of choosing an AI tool. As explained in the Oregon opinion, “(AI) competence requires understanding the benefits and risks associated with the specific use and type of AI being used.” Lawyers should carefully vet providers to ensure that vendor contracts address how data is protected, including how it will be handled, encrypted, and stored.

At 8am, we make that process easy for you. As explained in our AI Principles blog post, data security is a top priority. To ensure that your firm’s data is protected, “we encrypt your data both in transit (using TLS 1.2 or greater) and at rest (using AES-256 encryption), and our AI partner, OpenAI, is SOC2 compliant and maintains enterprise-grade security standards.”

No model training permitted

The duty of confidentiality is paramount, and lawyers must take steps to ensure the security of sensitive client data. The Texas Committee reiterated the importance of carefully questioning AI providers to determine how data entered into AI tools is handled. 

The committee cautioned lawyers about permitting AI tools to train on inputted data, highlighting the importance of understanding how a specific tool works: “The use of such self-learning programs poses a risk that the confidential information a lawyer inputs to the program may be stored within the program and revealed in responses to future inquiries by third parties…The lawyer should be reasonably satisfied that the program will not reveal confidential information to others or permit the use of such information to the disadvantage of the client.”

Rest assured, as explained in our AI principles post, 8am does not support model training, and “your confidential case information will never be used to train third-party AI models.”

The obligation to carefully review output

As explained in 8am’s AI principles, “our AI streamlines routine work and surfaces insights, but your judgment, discretion, and accountability stay exactly where they belong: with you.”

This comports with the Oregon Board of Governors’ recommendations, which confirmed that lawyers have an ethical duty to carefully “supervise the accuracy of all their work product, including that which is produced by AI and the ethical use of AI by subordinate lawyers and nonlawyers.”

The committee also determined that lawyers must be aware of and comply with court orders requiring AI disclosure, and should carefully review and verify the accuracy of all AI output, including case citations.

Continuing obligation to track technology changes

Finally, AI is advancing quickly, and as a result, software powered by AI is always changing. 8am IQ is regularly updated, as outlined in our AI principles: “We continuously evaluate, refine, and improve our AI systems to ensure they remain accurate, responsible, and aligned with your needs and the needs of your clients.”

Ethics committees have addressed the ethical obligations arising from the ongoing, rapid evolution of these tools. For example, the Texas committee acknowledged that generative AI is ever-changing, and that its guidance is “intended only to provide a snapshot of potential ethical concerns at the moment and a restatement of certain ethical principles for lawyers to use as a guide regardless of where the technology goes.”

Similarly, the Oregon ethics committee emphasized that “Competence is an ongoing obligation…Lawyers must consider and continually evaluate (ethical issues) when determining whether and how to incorporate AI into their practice.”  

The choice is yours

As bar associations continue to clarify how existing ethical rules apply to AI, one message is consistent: Lawyers must be thoughtful, informed, and proactive when adopting new technology. Choosing an AI provider is not just a business decision, but an ethical one that touches confidentiality, competence, and professional judgment. At 8am, our AI Principles are designed to support those obligations by prioritizing security, transparency, and lawyer oversight, so firms can use AI with confidence while staying aligned with their professional responsibilities.


Interested in 8am IQ, the purpose-built AI for legal professionals? See how it helps firms work smarter—explore 8am IQ here.