Using AI in Client Work
When Should You Consider Disclosure
AI is no longer a new guest in the consulting room; it’s the furniture. Whether you’re using ChatGPT for market synthesis, Claude for technical drafting, or Midjourney for graphics, these tools are becoming integral to operations.
As with most technology, the products become embedded in processes first, and the legal and professional frameworks develop afterward.
The question isn’t just “Are you using AI to supplement or enhance your work?” anymore.
It’s: “Who is liable if the AI gets it wrong, and who owns the result?”
And these questions are not just popping up for my tech- and AI-focused clients; they’ve come up across the board, from marketing to retail to music. As your lawyer friend, I want to give you a general framework for thinking about disclosure and exposure, so you can plan and use AI in your business in a safer, more strategic manner.
AI Use: Tool vs. Creator
If an AI product is not core to your business or service, the first step is determining how AI is being incorporated into your workflow and end product. Here’s a basic breakdown with some use-case examples:
Limited to Basics: grammar, formatting, basic research, code debugging
No Disclosure Necessary (in most cases)
Like spellcheck or Excel, these are efficiency tools that don’t replace your judgment
As a Collaborator: formatting a deck, synthesizing and summarizing notes, drafting initial outlines
Discretionary Disclosure
You are still the primary author; the AI is just the sous-chef. Depending on the extent of its contribution, you may want to disclose.
AI as the Lead Architect: generating core strategies, final deliverables, or unique visual assets
Mandatory Disclosure
If the AI creates the substance, the client needs to know for IP and liability reasons.
“Everyone’s Doing It” is not a Legal Defense
I still hear clients say, “My customer knows I use AI; they use it too!”
But general awareness is not the same as risk alignment. Even if a client uses AI internally, they are paying you for professional expertise and crucially for the risk you assume. If your AI-generated advice leads to a $1M loss, the client won’t care that “everyone uses it.” They will look at your contract.
The 3 Big Risks to Look Out For
1. The Hallucination Liability
In the early days, a hallucination (when AI makes things up) was a silly quirk. Today, it’s professional negligence. If you pass off AI-generated data as factual without verification, you may be responsible for the fallout.
Notes: Never let AI have the final word. If you didn’t verify it, don’t send it out.
2. The IP Black Hole
Copyright law remains firm: AI-generated content without significant human authorship cannot be copyrighted. If your client expects to own a logo, a deliverable, or a proprietary methodology you built 90% with the help of an LLM platform, they might actually own... nothing. You must disclose AI use if the client’s ability to protect the work is at risk.
Notes: This has been a big focus area for marketing clients, so be sure there is significant human authorship if the goal is owning/protecting the IP output.
3. Data Privacy and Shadow AI
The biggest risk isn’t the output; it’s the input. Uploading a client’s sensitive PII (Personally Identifiable Information), Confidential Information or trade secrets into a standard (non-Enterprise) AI account can be a breach of your NDA and data privacy laws like GDPR or CCPA.
Notes: Proceed with caution. Even with an enterprise account, nothing is completely safe.
Contract Language: Keep it Simple
You really don’t need complex T&Cs or overbearing AI processes and procedures. You just need to set the boundaries of the sandbox, both internally and externally, and to be accurate in your disclosures.
Here’s an example of a clause I’ve seen used in agreements recently:
“Consultant utilizes AI-assisted tools to enhance efficiency and analysis. Consultant maintains human oversight over all outputs, remains responsible for the accuracy of final deliverables, and warrants that all core IP remains the property of the Client as defined in Section [X].”
Practical Rules of Thumb
Use Enterprise-Grade Tools: Only use Team or Enterprise tiers for client work. These usually guarantee that your data isn’t used to train the model.
Run the Gut Check: If your client discovered you used AI for a specific task, would they feel cheated or impressed? If “cheated,” disclose or do the work yourself.
Review Your Contract: Check carefully and confirm that there are no AI use limitations.
Lawyer Friend Takeaways
Use AI as a productivity tool, not as a full-on replacement for your work (not yet, anyway). The human element is crucial; apply your judgment and skill, because that is why you got the engagement.
Treat the outputs as if they came from a very knowledgeable and efficient, but occasionally dishonest, intern. From my own use, it is clear that not all outputs (or LLMs) are created equal.
Verify the outputs. They are not always correct, and remember that they draw on data scraped from the internet, not from an oracle.
Protect data (especially confidential info) at all costs. The prompt you type into a chatbot feels private, but in most cases it is not.
Final Thought: Transparency about AI use isn’t just about being honest; it’s about protecting your brand. When you are aligned with your clients about your process, you build trust. Concealment, on the other hand, can create liability.
Want more practical thoughts and legal strategy? Follow my Lawyer Friend series on Substack.
Disclaimer: This post is for informational purposes only and reflects the personal views of the author. It does not constitute legal advice, does not create an attorney-client relationship, and should not be relied upon as a substitute for consultation with qualified legal counsel. The content may be considered attorney advertising in some jurisdictions, including New York and Connecticut. Prior results do not guarantee a similar outcome.