A BC physiotherapist pastes a patient's clinical note into ChatGPT to get help drafting a progress report. The note has the patient's name, date of birth, claim number, and treatment history.
That data has now left the clinic. It has crossed the Canadian border, entered American servers, been processed by a model owned by a company the patient has never heard of. None of this was disclosed to the patient. None of it is reversible.
This isn't a hypothetical. It happens every day in BC clinics: by physicians, by physiotherapists, by counsellors, by office managers. Most don't know that pasting into ChatGPT (or Claude, or Gemini) is a regulatory event. The tools' interfaces don't tell them. The convenience is high. The disclosure isn't there.
We built TOSC's apps around a different choice: AI that runs on infrastructure we control, in Canada, that doesn't route patient data through consumer AI providers. This essay is about why that's not a feature but a posture.
What “cloud AI” actually means
When you use ChatGPT, your text travels from your browser to OpenAI's servers in the United States. What happens next is governed by OpenAI's terms, which have evolved over time: whether the data is used to improve future models, how long it's retained, who at OpenAI can see it, what the breach process looks like. And the terms apply only at the policy level; what happens at the data-path level is engineered in California, on infrastructure neither you nor your patient can audit.
The same is true for Anthropic, Google, and every other consumer AI service. The privacy posture is “trust us.” For a clinician, that's a hard sell, because the data isn't theirs to stake on that trust. It belongs to a patient who consented to clinical care, not to AI processing in another jurisdiction.
The PIPA-BC and PIPEDA gap
BC's Personal Information Protection Act and Canada's PIPEDA both predate generative AI. Neither names “AI processing” as a permitted or prohibited use. Provincial privacy commissioners have issued guidance, but the legal lines are still being drawn. The conservative reading, and the one most counsel will give you, is that pasting identifiable patient data into a US-hosted AI service is a disclosure that requires patient consent.
Most clinicians using ChatGPT for chart work don't have that consent. They're not malicious; they're trying to save 30 minutes on a Sunday afternoon. But the regulatory exposure is real.
What “local AI” means
“Local” doesn't mean “running on your laptop.” It means: the AI infrastructure is under the control of an organisation with a duty to the data, hosted in a jurisdiction with healthcare privacy law, and not in the data path of consumer AI providers.
For TOSC's apps, that means our AI runs on infrastructure we control, in Canada. If we move it to AWS Bedrock in the Canadian region, or to a private GPU cluster in Vancouver, the principle stays the same: our control, Canadian residency, no consumer AI APIs in the path.
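To make that concrete, here is a minimal sketch of what “no consumer AI APIs in the path” can look like in code: an allowlist of approved inference hosts, enforced before any request leaves the app. The hostnames and the helper name are hypothetical placeholders, not TOSC's actual infrastructure.

```python
# Minimal sketch of an egress guard for inference traffic.
# Hostnames and helper names are hypothetical placeholders,
# not TOSC's actual infrastructure.
from urllib.parse import urlparse

# Only Canadian-controlled inference hosts are allowed in the path.
ALLOWED_INFERENCE_HOSTS = {
    "inference.internal.example.ca",               # self-hosted GPU cluster (placeholder)
    "bedrock-runtime.ca-central-1.amazonaws.com",  # Canadian cloud region (placeholder)
}

def assert_canadian_path(endpoint_url: str) -> None:
    """Refuse any inference call whose host isn't allowlisted.

    This turns "no consumer AI APIs in the path" into a structural
    property of the code rather than a policy promise.
    """
    host = urlparse(endpoint_url).hostname
    if host not in ALLOWED_INFERENCE_HOSTS:
        raise PermissionError(f"refusing to send clinical data to {host!r}")

assert_canadian_path("https://inference.internal.example.ca/v1/generate")  # passes
# assert_canadian_path("https://api.openai.com/v1/chat/completions")       # raises
```

The specific hosts are details; the allowlist is the posture. Swapping the Vancouver cluster for a Canadian cloud region changes one entry, not the guarantee.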
What it costs
Local AI is slower than frontier models. Smaller models, the kind we can host ourselves, have capability ceilings that GPT-4 and Claude don't. For some tasks (deep clinical reasoning, complex multi-step inference), this matters.
For drafting structured insurance forms in a clinician's voice, it doesn't. The forms have templates. The narrative is bounded. The clinical content is structured. A small model running on Canadian infrastructure can do this work faster than the clinician would do it by hand, which is the only speed that actually matters.
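As an illustration of how bounded that drafting task is, here is a sketch of a fixed prompt frame for a structured form. The field names are hypothetical, not a real insurer's schema.

```python
# Illustrative sketch of a bounded drafting prompt for a structured
# form. Field names are hypothetical, not a real insurer's schema.
FORM_TEMPLATE = """You are drafting a physiotherapy progress report.
Fill ONLY the three narrative fields below. Do not add sections.

Functional status: {functional_status}
Treatment provided: {treatment}
Response to treatment: {response}

Write two to three sentences per field, in plain clinical prose."""

def build_prompt(functional_status: str, treatment: str, response: str) -> str:
    # The structured clinical content sits inside a fixed frame, so the
    # model's narrative is bounded by design. This is why a small
    # self-hosted model is enough for the task.
    return FORM_TEMPLATE.format(
        functional_status=functional_status,
        treatment=treatment,
        response=response,
    )
```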
What it pays back
Three things.
Trust. A clinician using TOSC's apps doesn't have to think about whether their patient's data is being processed by a consumer AI service. The answer is no. That answer is structural, not conditional on a vendor's TOS revisions.
Regulatory alignment. PIPA-BC, PIPEDA, and the conservative reading of both are all easier to satisfy when the data never leaves Canadian-controlled infrastructure.
Visibility into the path. When something goes wrong (a model produces a bad output, a process fails, a security event happens), we can see what happened. With consumer AI services, the path is opaque from the moment the data crosses the border.
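One way that visibility can be built in (a sketch under assumed names, not our actual implementation): wrap every inference call in an audit record that captures the model, a hash of the prompt, and timing, without logging the patient data itself.

```python
# Sketch of path visibility: every inference call produces an audit
# record with enough metadata to reconstruct what happened, without
# logging the patient data itself. All names here are hypothetical.
import hashlib
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("inference.audit")

def audited_generate(generate_fn, prompt: str, model_id: str) -> str:
    record = {
        "model": model_id,
        # Hash, not the prompt itself: visibility without retention.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "started_at": time.time(),
    }
    try:
        output = generate_fn(prompt)
        record["status"] = "ok"
        return output
    except Exception as exc:
        record["status"] = f"error: {exc}"
        raise
    finally:
        record["finished_at"] = time.time()
        audit_log.info(json.dumps(record))  # stays on our side of the border
```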
Why this isn't a feature
A feature is something you market. A posture is something you build everything else around. Local AI is the latter.
Every TOSC app, every consulting engagement that touches data, and every workflow we design assume the same thing: patient data doesn't leave Canadian-controlled infrastructure, and doesn't get processed by consumer AI providers. We didn't add that posture as a checkbox. We built around it.
This is what we mean when we say “AI we control, in Canada.” It's a small claim with a large structural commitment behind it.
Himanshu, for TOSC · April 2026