The Doctronic website says, “I’m your AI refill doctor.”
The fine print on the same page says: “Doctronic is an AI doctor, not a licensed doctor, does not practice medicine, and does not provide medical advice or patient care.”
Both of those sentences coexist on the Doctronic website right now. That is either a remarkable feat of self-contradiction, or it is the most honest available summary of what is actually happening in Utah: a chatbot is processing prescriptions, and the people responsible for the legal architecture of the arrangement are not entirely sure whether to say so.
What Actually Happened
On January 6, 2026, the State of Utah, acting through its dystopically named Office of Artificial Intelligence Policy, entered into an agreement with a company called Doctronic to allow an AI system to process certain prescription refills for Utah residents whose medications were already prescribed by a licensed physician. The company calls the refill process a “lower risk patient encounter.”
But that’s not what the Utah Medical Licensing Board is calling it. Notably, the board was not consulted before the program launched; its members found out after the program went live. Their April 20, 2026, letter to the Office of Artificial Intelligence Policy called for an immediate suspension, and the board chair told the AI office director at a public meeting: “I’m just very afraid that nothing’s going to happen until we have some deaths.”
The Utah Department of Commerce, which oversees the AI office, rejected the suspension request. It dismissed the board’s concerns as “rooted in misperceptions that do not align with the reality of the pilot’s operations,” whatever that means.
The Regulatory Vacuum
Clinical AI has traditionally been regulated at the federal level as “Software as a Medical Device” under FDA jurisdiction. That framework was built for narrow, task-specific algorithms such as a radiology tool that reads CT scans, or a glucose monitor that alerts on dangerous readings. It was not built for generative AI systems that can interpret labs, adjust dosing guidance, and process prescriptions across 190 drug categories.
The FDA has not approved a single generative AI model for clinical use, and it has not commented publicly on the Utah pilot. Doctronic, by its own account, had not communicated with the FDA when the program launched. Meanwhile, a bill pending in Congress would amend the Food, Drug, and Cosmetic Act to allow AI systems to prescribe medications with state approval. That bill has not made it out of committee. Its existence, of course, raises the question of whether the Act as currently written authorizes AI prescribing at all.
Under the theory that prescription authority is a matter of state medical practice law, Utah’s Office of AI Policy—not the Medical Licensing Board, not the legislature, not a health agency—temporarily waived the applicable regulations for Doctronic. The Medical Licensing Board was informed after implementation, the way you are informed about a dinner reservation someone else made for you.
The Supervision Fiction
The state’s defense of the program rests on physician oversight. In Phase One of the pilot, every AI-generated refill decision is reviewed by a licensed physician before it’s sent to a pharmacy. This, the Department of Commerce says, means the program is “already operating safely at the standard of care.”
But several things are worth noting about this oversight.
The physicians doing the reviewing are not the patients’ physicians. They are Doctronic’s physicians. The individual patient’s treating physician was not asked about this. When the Medical Licensing Board asked whether individual physicians could opt out, the AI office director told them that it was a “direct-to-consumer offering”: the patient’s choice, not the physician’s. You are not part of the equation after your initial prescription.
Then there is Phase Two, the stated next step in the program, in which physician review moves to after the fact: the AI processes refills, and physicians (again, Doctronic’s physicians, not the patient’s) review them after the prescriptions are filled. That is the direction of travel. Oversight is a transitional feature, not a design principle.
We Have Seen This Movie Before
If you have been in medicine for any length of time, you’ve watched the scope-of-practice debate unfold in slow motion. The argument is always structurally the same: this category of task is “routine.” Physicians are overtrained for it. A shorter training pathway with appropriate “supervision” produces equivalent outcomes. But the category of tasks that counts as routine expands over time, and supervision erodes.
First it was physician assistants. Then nurse practitioners. Then the phrase “provider” arrived to smooth over the distinction between a physician and anyone else delivering medical services. Each step was sold on access, on cost, on physician burden. Each step included supervision requirements that grew nominal as the political and commercial momentum behind the expansion became its own justification.
What is new about the Utah pilot is not the argument. It is the speed, and the identity of the regulator. A commerce department, not a health agency, is making medical practice decisions. The FDA is silent. The state Medical Licensing Board was not consulted. The entity that drew the line between “administrative” and “clinical” and decided that prescription refills land on the administrative side is an office whose stated mission is to remove regulatory barriers for AI companies.
Who draws the line matters as much as where the line is drawn.
The Question Nobody Has Answered
If an AI-processed refill causes harm (a drug interaction the algorithm missed, a patient who needed reassessment before continuing a medication), who’s liable?
The AI company? The reviewing Doctronic physician, who may be processing hundreds of refills a day? The state, which waived the regulations that would otherwise have governed this? The patient, who “chose” a direct-to-consumer service?
No one has answered this question on the record. The Medical Licensing Board asked it. The response they received was, in essence, that the program is in Phase One and no serious safety incidents have occurred. Which is not an answer to the liability question. It is a statement that the question has not yet been put to the test.
Sooner or later, it will be.
Some Timely Takeaways For You
- The regulatory vacuum is not temporary. The FDA is signaling a hands-off posture toward AI. State legislatures are moving faster than health agencies. Utah has a statutory sandbox that allows its AI office to waive medical practice regulations for private companies without consulting the medical board. Other states are watching. The body that will ultimately regulate clinical AI has not been determined. That ambiguity benefits companies and their political partners. It does not benefit patients, and it does not benefit physicians.
- “Physician oversight” means what the contract says it means. If you’re in any arrangement, or your patients are using any service, where AI-assisted clinical functions are described as “physician-supervised,” get the definition. Who are the physicians? What are they actually reviewing? What is the review timeline? Is review prospective or retrospective? What happens in Phase Two?
- The “routine” boundary will move. It always does. Prescription refills were chosen because they are somewhat defensible as lower-risk. The programs that follow will be chosen because the refill precedent has been established. Watch where the line is drawn today and assume it will not stay there.
- Liability does not disappear because no one currently owns it. When the AI system produces an adverse outcome, plaintiff’s counsel will sue everyone in the chain. Whether that includes the patient’s treating physician will depend on facts no one has thought through yet. This is worth thinking through before the facts exist.
- The argument that eroded physician authority over scope-of-practice is the same argument being made for AI. You know how that story ends. The question is what you do differently this time.
If you’d like to discuss how the Utah pilot’s regulatory structure and liability questions may eventually affect your group or your practice, no matter what state you’re in, reach out.