AI at work: what is safe to share, what is not
Customer data, source code, contracts, secrets. The practical rules for using AI without leaking what your employer cares about.
Most people are pasting things into ChatGPT at work that they would never email to a stranger. The boss says "use AI." The security team says nothing. So you're left guessing whether the customer call notes you just summarized are now sitting in a vendor's training set.
This is the quiet privacy problem of 2026. Nobody is naming it because everyone is busy shipping.
Here's a friend's take, not a compliance memo.
The coffee shop wall test
Before you paste anything, ask yourself one question: would I be comfortable writing this on the chalkboard wall of a coffee shop where strangers can read it?
If yes, paste it anywhere. Free ChatGPT, Claude, whatever. The model can train on it, log it, leak it, and you don't care because there was nothing to leak.
If no, the next four sections matter.
The four categories of risky paste
1. Customer data. Names, emails, contracts, support tickets, anything tied to a real person who didn't sign up for their info to land at OpenAI. Default: don't paste it. If you genuinely have to, anonymize first: find/replace real names with "Customer A" and "Customer B," scrub email addresses, swap company names for "Acme" (there's a small scrubbing sketch after this list). Or use only the enterprise/business AI plan your company pays for, which has data controls and a contract that says the vendor won't train on your inputs.
2. Source code and IP. Your codebase, internal architecture docs, design files, the deck for next quarter. Default depends on your plan. Free ChatGPT, free Claude, free Gemini all train on your inputs unless you explicitly turn it off. Their paid Team and Enterprise plans don't. Knowing which tier you're on is the entire game here. If you don't know, assume you're on free.
3. Contracts and legal documents. This is its own category because the lawyers care even when nothing technically leaks. A counterparty's name in a vendor log is a problem your GC will surface six months later. Anonymize counterparty names before pasting, redact financial terms, or use a legal-specific tool like Harvey if your firm has it. If your firm doesn't have one and you're regularly working with contracts, ask. The answer is usually yes.
4. Secrets. API keys, passwords, OAuth tokens, the JWT you copied from devtools to debug something. Never. Not even on enterprise plans. Models can be prompted to recall things they've seen, and a copy of your secret ends up in vendor logs even when it's nowhere near training data. Rotate any key you accidentally paste, today, before you finish reading this. (A cheap pre-paste check is sketched below, after the anonymizer.)
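What does "anonymize first" actually look like? Here's a minimal Python sketch of the find/replace approach from category 1: swap known names and companies, then mask anything email-shaped. The names and the email regex are illustrative stand-ins; a real pass would also need to catch phone numbers, addresses, and whatever else your data contains.

```python
import re

# Illustrative replacements -- build this list from your own data.
REPLACEMENTS = {
    "Jane Doe": "Customer A",
    "John Smith": "Customer B",
    "Globex Corporation": "Acme",
}

# Rough email matcher; good enough for a pre-paste scrub.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def scrub(text: str) -> str:
    """Swap known names and companies, then mask anything email-shaped."""
    for real, fake in REPLACEMENTS.items():
        text = text.replace(real, fake)
    return EMAIL_RE.sub("[email removed]", text)

print(scrub("Call with Jane Doe (jane@globex.com) from Globex Corporation."))
# -> Call with Customer A ([email removed]) from Acme.
```

Run it locally, before the text gets anywhere near a chat box. The point isn't the script; it's building the habit of a scrub step between your data and the paste.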
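And for category 4, accidents happen, so a cheap sanity check before pasting is worth having. The patterns below are illustrative, not exhaustive; purpose-built scanners like gitleaks and TruffleHog ship hundreds of rules and are what a security team would actually deploy.

```python
import re

# A few common secret shapes. Illustrative only -- real scanners
# like gitleaks or TruffleHog cover far more.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "API key (sk-...)": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}"),
    "JWT": re.compile(r"\beyJ[\w-]+\.[\w-]+\.[\w-]+"),
    "Private key block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def secret_hits(text: str) -> list[str]:
    """Return the names of any secret-like patterns found in text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

hits = secret_hits("headers = {'Authorization': 'Bearer sk-abc123def456ghi789jkl'}")
if hits:
    print("Do not paste this. Found:", ", ".join(hits))
```

Passing this check doesn't make text safe to paste; it just catches the obvious accidents before they become a key-rotation exercise.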
Free vs paid vs enterprise, in plain English
Free plans almost always train on your inputs. Some have changed recently; most still do by default. Treat free as "public."
Paid Team plans (ChatGPT Team, Claude Team, Gemini for Workspace) typically don't train on your data. You get a contract that says so. This is the minimum tier for real work.
Enterprise plans add audit logs, data residency, SSO, admin controls. This is what your security team wants.
If your company hasn't bought you the right tier and you're doing real work on a personal account, ask. Frame it as "the alternative is I do this on my personal ChatGPT, which is worse for everyone." Most companies say yes inside a week. The ones that say no have given you useful information about what you should and shouldn't do at work.
The opt-out toggle
ChatGPT and Claude both have an "improve the model for everyone" setting buried in preferences. Turn it off. On ChatGPT it's under Data Controls. On Claude it's in the privacy section of settings. This doesn't fix the customer-data problem (the data is still on a vendor's server, just not in training), but it reduces the spread.
Five minutes. Do it tonight.
What your security team probably wishes you knew
DLP (data loss prevention) tools exist, and your IT team can already see which URLs you visit from your work laptop, including chat.openai.com and claude.ai. If you paste a customer record into your personal ChatGPT account on your work machine, that's the same risk profile as forwarding company data to your personal Gmail. The fact that AI feels like a private conversation in your head is the trap. It isn't private. It's a chat with a vendor who logs everything.
The good news: your security team would rather you use a sanctioned enterprise tool than get cute. Most are starved for the conversation. Send them a one-line Slack: "Hey, what's our policy on AI tools for work data?" If they don't have one, you've just done them a favor.
Closer
That's the last guide in Pillar 1, Get Started. You now know what an LLM is, how to prompt one, and what not to paste into one.
Pillar 2, Use Cases by Role, picks up the obvious next question: now that you can use these tools safely, what should you actually use them for? Pick the playbook that matches your job and start there.