With the proliferation of AI through platforms like ChatGPT, many of us have cringed when a client announces that they have already researched their legal question using AI and that our advice is incorrect. Such independent client research can make our jobs as lawyers exponentially harder and more frustrating.
By now, we have all read about lawyers or law firms being sanctioned for using ChatGPT or another AI platform to prepare pleadings without verifying their research or factual accuracy. Some unsuspecting lawyers filed those pleadings with courts without taking the time to review them and realize that the authority cited in the AI-generated pleading never even existed. This is what is called a "hallucination": a confident-sounding but fabricated or factually mistaken response. AI certainly has its place in our law firms for certain tasks, but lawyers must never allow it to replace good research, writing, advice and independent judgment.
Indeed, earlier this year a Georgia lawyer working on his last case before retirement filed an AI-generated responsive appellate brief with the Georgia Court of Appeals. Opposing counsel was quick to file a reply brief pointing out to the Court that the appellee's brief was full of inaccuracies, including reliance on cases that did not even exist. The Court sanctioned the lawyer monetarily and referred the matter to the State Bar of Georgia, which initiated disciplinary proceedings against him, hardly what anyone wants to face on the eve of retirement. The lawyer's upset client reported the matter to local news, and an embarrassing situation grew worse.
Georgia Bar’s Special Committee to Study AI/Tech
Because the State Bar has received a number of disciplinary complaints from the courts and the public alleging Georgia lawyers' misuse of AI, a Special Committee was recently established. The Artificial Intelligence and Emerging Technology Committee was formed to examine how the Georgia Rules of Professional Conduct address advances in technology, particularly AI, in the practice of law. The twenty-seven-member committee will specifically consider whether existing Bar Rules and Bar policies adequately address Georgia lawyers' use of technology and related conduct, and it will make recommendations to the Supreme Court of Georgia and the Board of Governors on how the Rules should cover developments in technology, AI and the practice of law. Whatever the committee finds, I personally hope it will focus on how the State Bar can help fellow Bar members properly integrate AI and other technological advances into their law firms. That said, it may be some time before Georgia lawyers learn what those recommendations are.
OpenAI Takes Action
In recent days, OpenAI announced a policy revision prohibiting its flagship AI system, ChatGPT, from providing professional advice in domains that require licensing, including legal and medical advice. As members of the Professional Liability & Ethics Protection Program (PLEPP), licensed Georgia attorneys should take careful note: this policy change highlights both shifting regulatory expectations for AI tools and evolving professional liability risks in legal practice.
Policy Shift and Regulatory Context
As of October 29, 2025, OpenAI updated its usage policy to clarify that ChatGPT is meant only as an educational or informational tool, not a substitute for a licensed professional providing tailored, context-specific advice. The company acknowledged rising concerns about liability and safety, especially where users have come to rely on generative AI for high-stakes advice. Regulators worldwide have been tightening oversight of high-risk applications of AI; for example, the EU Artificial Intelligence Act and guidance from U.S. regulatory agencies press providers to put guardrails around sensitive domains.
Implications for Lawyers (and Firm Risk)
As practicing Georgia attorneys, PLEPP members must view this policy shift and the exploration of AI/tech by the Bar’s Special Committee as both a warning and an opportunity. Here are some key takeaways:
- Risk awareness and malpractice exposure:
Even though OpenAI is trying to limit misuse by stating that ChatGPT cannot provide advice that requires a license, many users (including lawyers, clients and unlicensed operators) will continue feeding legal questions into these AI systems. If lawyers rely on AI outputs and then present them in a matter, they create confusion about where legal responsibility begins, and that confusion can contribute to ethics complaints or malpractice claims.
- Ethical obligations and competence:
Under the Georgia Rules of Professional Conduct, lawyers must provide competent representation (Rule 1.1) and safeguard against giving clients misinformation (Rules 4.1, 4.4 and 8.4). Relying unquestioningly on AI output may fall short of the competence standard or conflict with the duties of supervision (Rules 5.2 and 5.3), professional independence (Rule 5.4) and diligence (Rule 1.3). Citing false or nonexistent cases or misstating the law to a court would also land a lawyer in disciplinary hot water (Rules 3.1 and 3.3).
- Employee education and practice management:
This is an opportune moment to proactively educate associates and staff. Firms should revise disclaimers and intake materials to clarify that while staff may consult AI tools for certain tasks, those tools are not substitutes for an attorney's legal judgment and advice. As part of risk management, firms should adopt policies that instruct staff not to relay AI outputs as legal advice, require review by firm attorneys or management, and document how AI is used in research or drafting. Indeed, the only potentially reliable AI-assisted research tools may be those offered through paid services such as Thomson Reuters Westlaw, Clio Work or an equivalent.
- Contractual disclaimers and engagement letters:
You may wish to revisit your engagement agreements to include explicit language about how AI tools may (or may not) be used in delivering legal services. A clause confirming that all advice is provided by a licensed Georgia attorney may help mitigate risk or strengthen a defense if a client claims otherwise.
- Regulatory signaling and precedent:
OpenAI's policy change is not just a corporate choice; it signals how providers view the boundaries of AI in regulated professional fields. Regulators and bar associations may treat it as a benchmark: other providers may follow, and regulators may come to expect firms to adapt.
Action Steps for PLEPP Members
| Area | Recommended Action |
| --- | --- |
| Audit current practice | Review whether staff or attorneys are using AI tools in legal research or drafting. |
| Update policies | Draft or revise an internal policy on AI usage. |
| Engagement letters | Add clear language that advice is provided by a licensed Georgia attorney, not by AI. |
| Client communication | Advise clients that AI is not a substitute for legal advice. |
| Training | Provide CLE or internal training on AI risks and ethics. |
| Documentation | Keep records showing how AI output was incorporated (or not). |
Why This Matters for Malpractice & Ethics Protection
More than twenty-five years of practice have shown me that many risk management problems can be mitigated or avoided entirely with a little proactive effort. Indeed, the many issues I have seen result in grievances and malpractice claims motivated me to make it Chandler Law's mission to help fellow attorneys avoid the professional angst and ruin such situations can bring.
I created PLEPP as a community where forward-thinking lawyers can collaborate and learn to better manage the inevitable risks of practice. Within PLEPP, one of our central themes is preventing malpractice claims and disciplinary grievances. There is no doubt that AI is a useful and productive tool, but its use must be monitored carefully within our law firms. OpenAI's new policy and the Georgia Bar's new Special Committee underscore the emerging frontier where technology, legal risk and ethical risk intersect. Even though the provider now prohibits giving legal advice, the use of AI by clients and attorneys will still raise issues: misunderstanding, overreliance on, and misapplication of AI outputs will continue to generate malpractice claims and ethics investigations.
As your partner in professional liability and ethics protection, we encourage you to treat this policy change not as a technical footnote but as a risk signal. Firms that proactively update their policies, educate their employees, and regulate and document the use of AI within their practices will have stronger defenses if questions or complaints arise.
