Disclaimer: The opinions expressed in this article are those of the author and do not necessarily represent the views of Atorus Research or PHUSE. References supporting the opinions presented are included in the References section. 

I am an AI skeptic. Not the hand-wringing, sci-fi kind who worries about sentient machines. The kind who looks at how AI is actually being used right now and finds it mostly underwhelming, frequently reckless, and genuinely harmful in ways that don’t get enough attention.

Environmentally, AI data centers are consuming enormous amounts of clean water and energy, and accelerating dependence on fossil fuels.1 Socially, it’s being weaponized for disinformation and abuse. Financially, it’s driving up the cost of electronics, electricity, and computing infrastructure for everyone.

And yet, it’s here. My organization is using it. Yours probably is too or will be soon.2 So the question isn’t whether to use AI. It’s how to use it without compromising your skills, your data integrity, or your sanity.

This presentation, which won Best in Connect at PHUSE US Connect 2026, is my attempt to answer that question honestly.3

What AI Actually Is (and Isn’t)

AI is good at pattern recognition. It is not intelligent. It cannot think. It will recognize patterns in your prompt, compile something from its training data, and return a response based on those patterns. For programming, that training data is largely open-source code from the internet, which means it may be outdated, deprecated, or contain security vulnerabilities. It also means AI is good at syntax and bad at assumptions. If your prompt leaves room for interpretation, the AI will fill that gap with a guess, and the guess will often be wrong.

Where AI Can Actually Help Clinical Programmers

There are three areas where I think AI provides genuine value without creating more problems than it solves:

  • As a learning tool. If you’re transitioning from SAS® to R, AI can be a useful starting point. Give it some SAS® code and ask it to write the R equivalent. It’s not perfect (I’ve had to iterate and correct the output before the results matched), but for a programmer who already understands what the code should do, it can accelerate the learning process significantly
  • For commenting and documentation. Programmers don’t comment code well. It’s a near-universal failure. AI can fill that gap effectively. I’ve fed uncommented R code from our open-source library into an AI and received back reasonably accurate, useful comments. You still need to read through it with a critical eye, but it’s a genuine time-saver
  • For debugging and unit testing. AI is a master of syntax, and most bugs are syntax errors. It can search for similar errors across the internet and suggest solutions. It can write unit tests to verify that your code handles varying inputs correctly. If you haven’t adopted unit testing in your clinical programming workflows, I’d encourage you to do so regardless of whether you use AI to help
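The unit-testing point above can be sketched with a short example. This is written in Python for brevity, but the same pattern applies directly with R's testthat package; `derive_age` is a hypothetical helper, not code from any real clinical library:

```python
# Minimal unit-testing sketch. derive_age is a hypothetical helper that
# computes age in whole years between two dates -- exactly the kind of
# small, well-defined function worth covering with tests before (or after)
# asking an AI to touch it.
from datetime import date

def derive_age(birth_date, ref_date):
    # Whole years elapsed, adjusting down if the birthday
    # hasn't occurred yet in the reference year.
    years = ref_date.year - birth_date.year
    if (ref_date.month, ref_date.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def test_derive_age():
    # Typical input: exactly 40 years apart.
    assert derive_age(date(1980, 6, 15), date(2020, 6, 15)) == 40
    # Edge case: birthday one day after the reference date.
    assert derive_age(date(1980, 6, 16), date(2020, 6, 15)) == 39
    # Edge case: same-day birth and reference.
    assert derive_age(date(2020, 6, 15), date(2020, 6, 15)) == 0

test_derive_age()
```

Tests like these are precisely what let you accept or reject AI-suggested changes with confidence: if the AI rewrites `derive_age` and the edge cases still pass, you have evidence; if not, you've caught the regression immediately.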

Four Tips for Using AI Safely in Clinical Programming

  1. Do not put proprietary code or patient data into a non-secure AI chatbot. AIs don’t forget. Data submitted to cloud-hosted AI tools may be retained, processed on external servers, or inadvertently exposed. Perform a risk-based analysis of any AI system before using it with anything sensitive.
  2. It’s all about the prompt. Be specific. Leave no room for assumptions. If you find yourself making assumptions when you read your own prompt, the AI will miss them. For large code blocks, chunk them into smaller pieces, as AI gets confused by long code and will skip over sections.
  3. Use a consistent style guide. The more consistent your code is across users and projects, the better the AI’s output will conform to your expectations. You can include the style guide directly in your prompt or provide the AI with a URL.
  4. Validate everything. The AI is only as good a programmer as you are. If you can’t follow and understand the code it generates, it’s probably wrong. Go through it line by line. Ask: What assumptions did the AI make? What edge cases might it have missed? Is the code efficient for large datasets? Combining AI’s pattern recognition with your own debugging knowledge is where the real value lives.
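As an illustration of tips 2 and 3, a prompt along these lines leaves little room for assumptions. The dataset, column names, and filtering rule here are hypothetical; the style guide URL is the public tidyverse guide:

```text
Translate the following SAS data step to R using the dplyr package.
- Follow the tidyverse style guide: https://style.tidyverse.org
- Input: a data frame named adsl with columns USUBJID, AGE, and SEX
- Output: a data frame with the same columns, keeping only rows where AGE >= 18
- Do not rename any variables and do not add new columns
- If anything is ambiguous, ask me before generating code
```

Notice that every decision the AI might otherwise guess at (package, naming, filtering rule, what not to change) is stated explicitly, and the final line invites clarification instead of a silent assumption.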

The Bottom Line

AI is a tool. Like a chainsaw or a hammer, it can do real damage in the wrong hands. But used correctly (i.e., with appropriate skepticism, specific prompts, organizational guardrails, and your own trained eye on every output), it can save time in the right places without eroding the skills that make you valuable. Studies show AI doesn’t reliably improve productivity for experienced developers and may actively erode foundational skills in less experienced ones.4,5 So, use it as a tool, not a replacement.

You don’t have to be enthusiastic about it. You just have to know how to use it.

I teach a full three-hour AI for programming course through Atorus Academy for teams who want to go deeper on this. If your organization is navigating these questions, it’s worth the time.

Frequently Asked Questions

Can AI write clinical programming code for me?

Yes, but with significant caveats. AI can produce syntactically correct code for familiar patterns, but it requires a skilled programmer to verify every line. It will make assumptions, miss edge cases, and occasionally produce plausible-looking but incorrect output.

Is it safe to use AI tools with proprietary code or clinical data?

Only if the tool is deployed in a secure, validated environment where proprietary code and patient data are protected. Non-secure public AI chatbots should never receive proprietary code or clinical data.

What tasks is AI actually good at for clinical programmers?

Pattern recognition, syntax, and repetitive tasks: code translation (e.g., SAS® to R), adding comments and documentation to existing code, identifying syntax errors, and drafting unit tests.

What is AI still bad at?

Making correct assumptions, reasoning about edge cases, handling large code blocks reliably, and understanding domain-specific context that isn't explicitly provided in the prompt.

How do I write better prompts for AI coding tools?

Be specific, eliminate any assumptions you'd have to make yourself, provide context including libraries and frameworks, and chunk large code into smaller pieces. Treat it like a conversation by iterating and giving feedback on the output.

Will relying on AI erode my programming skills?

It can, especially for junior developers who rely on AI before building foundational understanding. The safest approach is to use AI as a tool that accelerates work you already understand, not a replacement for developing that understanding.

About the Author

Frank Menius is a Senior Trainer at Atorus Research and a Data Science and Data Standards subject matter expert with nine years of statistical programming and clinical research experience. A graduate of the University of North Carolina and North Carolina State University (applied statistics and data management), Frank brings a uniquely practical perspective to clinical programming — including a previous career in law enforcement. He teaches a full three-hour AI for programming course through Atorus Academy.

References 

1 O'Donnell, J. (2025). We did the math on AI's energy footprint. MIT Technology Review. https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

2 Yepis, E. (2025). Developers remain willing but reluctant to use AI: The 2025 Developer Survey results are here. Stack Overflow. https://stackoverflow.blog/2025/12/29/developers-remain-willing-but-reluctant-to-use-ai-the-2025-developer-survey-results-are-here/

3 Menius, F. (2026). A Skeptic's Guide to Using AI for Programming. Paper PD12, PHUSE US Connect 2026. Best in Connect Award recipient. Atorus Research.

4 METR. (2025). Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

5 Goel, N. (2025). New Junior Developers Can't Actually Code. https://nmn.gl/blog/ai-and-learning
