The Uncomfortable Conversation Has a Missing Chapter

Robert Goldman's recent piece in Clinical Leader names something real. Here's what we'd add.

March 5th, 2026


Robert Goldman wrote a plain-spoken, well-reasoned piece about AI and data use in clinical trials that the industry needed to hear. If you haven't read it, you should. His central argument is that access to study materials does not grant permission to externalize them into third-party systems. He's exactly right, and the governance gap he describes is real.

We'd like to add a chapter.

It's Not Just About AI

Goldman frames the issue around AI tools, and that's a fair framing because AI is what coordinators are reaching for right now. But the underlying problem predates AI and will outlast whatever the next tool category is.

Consider a task management tool adopted at the site level that has no AI, no LLM, no feature more sophisticated than a checklist. A coordinator builds out a board in something like Asana or Notion to track per-patient study tasks: visit windows pulled from the protocol schedule, procedure checklists, query statuses, patient initials tied to visit dates. It's practical, it works, and it almost certainly lives on servers the sponsor has never evaluated. It may not be HIPAA-compliant at all, and the sponsor has no idea it exists.

The real issue is unqualified third-party software operating on sensitive research data, regardless of whether AI is involved. Goldman's arguments apply equally to any SaaS tool a coordinator adopts because it promises to save time. AI makes the conversation urgent because adoption is happening fast and the consequences of misuse are less visible. But sponsors who solve only the AI problem will leave the broader gap open.

Why AI Is Still Unique

That said, Goldman's focus on AI isn't wrong. AI introduces a specific risk that generic SaaS does not.

Models get good by seeing data. That's not a side effect of how they're built; it's the mechanism. (In nerdier circles, you'd say it's a feature, not a bug.) When you use a tool like ChatGPT, the application has to process everything you submit in order to respond intelligently. But many consumer and business AI tools are also designed to retain that input. Conversations are logged, feedback ratings are collected, and those signals are fed back into training cycles.

The major frontier model providers - OpenAI, Anthropic, Google - have enterprise and business terms of service that explicitly prohibit using customer data for training. That's meaningful. But your data still flows through the model maker's infrastructure, and the protection rests on contractual assurance rather than architectural separation.

There's a more complete answer, and it already exists: private AI, sometimes called enterprise AI.

What Private AI Actually Means

Private AI means running AI models inside the vendor's own infrastructure, not sending data outbound to model providers for processing.

Cloud providers like AWS, Azure, and Google Cloud offer hosted versions of the same frontier models (GPT, Claude, Gemini, and others) that operate entirely within a controlled, private environment. A software vendor can build an application that uses the latest and most capable AI models while ensuring that the model maker never sees the underlying data. The model runs inside the vendor's architecture. Data never leaves.
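
To make that concrete, here's a minimal sketch of the pattern, assuming a vendor running Claude through AWS Bedrock. The region, model ID, and prompt are illustrative placeholders, not a description of any specific product. The request goes to a Bedrock endpoint inside the vendor's own AWS account, so the prompt and its contents never reach the model maker's servers.

```python
import json
import boto3

# Bedrock hosts the model inside the vendor's own AWS account and region.
# The request below never touches the model maker's API: AWS serves the
# model, and the provider never sees the input or the output.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [
            {"role": "user",
             "content": "Summarize the open queries for site 012."},
        ],
    }),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```

Azure OpenAI and Google Cloud's Vertex AI follow the same pattern for GPT and Gemini: the cloud provider hosts the model, and the application's data stays within the customer's cloud boundary.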

This isn't a theoretical future. It's how well-architected AI applications work today. It's also how HumanTrue is built. From day one, we assumed these questions would come, and we designed accordingly.

Three Things Sponsors Should Ask Every AI Vendor

Goldman calls for sponsors to establish written policies and evaluate vendors before recommending them. We'd make that concrete. Sponsors should require three things:

HIPAA attestation and SOC 2 certification. Not marketing claims – audited, third-party-verified certifications. These are table stakes for handling health information in any SaaS context, AI or not.

Demonstration of private AI architecture. Vendors should be able to show, specifically, that AI processing occurs within their own infrastructure and that no study data is transmitted to model providers. "We use enterprise terms of service" is not a sufficient answer. The architectural separation should be documented; one concrete way to evidence it is sketched after this list.

Explicit data use policy covering training, fine-tuning, and distillation. A vendor should be able to state clearly, in writing, that customer data is not used to train, fine-tune, or distill any AI model by the vendor, or by any model provider they work with.
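
What does documented architectural separation look like in practice? Here's one example, as a sketch assuming an AWS deployment (the resource IDs are hypothetical): a vendor can show that its application reaches Bedrock through a PrivateLink interface endpoint, so model traffic never leaves the AWS network, let alone reaches a model provider.

```python
import boto3

# Create a PrivateLink (interface) endpoint for Bedrock in the vendor's
# application VPC. With private DNS enabled, bedrock-runtime calls from
# inside the VPC resolve to this endpoint and stay on the AWS network
# rather than crossing the public internet. All IDs are hypothetical.
ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",
    VpcEndpointType="Interface",
    SubnetIds=["subnet-0a1b2c3d4e5f6a7b8"],
    SecurityGroupIds=["sg-0a1b2c3d4e5f6a7b8"],
    PrivateDnsEnabled=True,
)

print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```

A sponsor's security team can verify that endpoint and the VPC's route tables directly, which is a far stronger answer than a paragraph in a terms-of-service document.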

A Thought Worth Finishing

Goldman's article ends with a call for shared expectations before convenience becomes custom. We agree. And there's a question we haven't fully answered ourselves: beyond SOC 2, is there a certification or standard that could specifically address responsible AI use in regulated research contexts?

SOC 2 covers security, availability, and confidentiality of data broadly. It doesn't speak to the risks specific to AI: model training pipelines, data retention for feedback loops, or the distinction between encrypted transit and genuine architectural isolation. Something like a "responsible AI in regulated research" framework, audited and third-party verified, would give sponsors a clear signal beyond vendor self-attestation. Besides, who doesn't like a new TLA (three-letter acronym)?

We don't know if that standard exists yet. We think it should, and we'd participate in building it. If you're thinking about the same question, we'd like to hear from you.

HumanTrue's Position

All data uploaded to HumanTrue is encrypted and stored securely in our cloud environment. AI processing occurs entirely within our private architecture. No data is transmitted to model providers. We do not use customer data for training, fine-tuning, or model distillation. You have full control over your data at all times.

Goldman is right: right to access is not consent to use. We built HumanTrue assuming that bar would be applied to us, and we think every vendor in this space should meet it.

Want to learn more?

We'd love to show you how to get started using AI for clinical operations.

Request a Demo