Improving Developer Experience in OpenEMR

Building Foundations for Human and AI Contributors

As healthcare technology continues to evolve, Electronic Medical Records (EMR) systems like OpenEMR face a critical challenge: how do we make these complex systems more accessible to developers while ensuring quality and security? Over the past few months, I've been working on a comprehensive overhaul of OpenEMR's developer experience, focusing on two key objectives:

  1. Lowering barriers for new human developers to contribute meaningfully to the project
  2. Creating robust feedback mechanisms that will enable quality assurance for AI-generated contributions

The Challenge: A Quarter Century of OpenEMR

OpenEMR carries a storied history spanning more than two decades, through regulatory, technical, stylistic, and cultural change. Its longevity is impressive. Migrations are more expensive in healthcare technology than anywhere else, and OpenEMR's commitment to backwards compatibility has helped it remain feasible for health systems that couldn't afford to stay current with every technological shift.

The goal isn't to keep providers on old versions of OpenEMR forever, but to maintain a viable migration path forward. However, every attempt at modernization necessarily piles onto an ever-increasing maintenance burden. Each new feature, security update, and compatibility layer adds complexity that would strain the human review and QA resources of even a much larger enterprise, much less a not-for-profit open source foundation.

This creates a fundamental challenge: how do you modernize a critical healthcare system without breaking the migration paths that make modernization possible in the first place? The answer, I believe, lies in reducing the maintenance burden through automation—specifically, automating testing, static analysis, and builds before attempting larger structural changes.

OpenEMR has tests and some build infrastructure, but much of it is slow and too much of it is manual.

Starting with the Foundation: Moving the Delivery Infrastructure Forward

The most impactful changes began with the continuous integration pipeline. I refactored the tests to automatically discover and test all Docker Compose configurations. This means when someone adds a new service or configuration, the testing system automatically picks it up without requiring manual updates to multiple CI files.
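The auto-discovery can be sketched in a few lines of shell. This is a minimal illustration, not the actual CI code: the file pattern and the output variable name are assumptions, but the shape (find the Compose files, emit a JSON list that a GitHub Actions workflow can feed into a job matrix) is the idea.

```shell
#!/bin/sh
# Minimal sketch of Compose-file auto-discovery for a CI matrix.
# The file pattern and output variable name are illustrative.
set -eu

matrix='['
sep=''
# Word splitting is fine here as long as paths contain no spaces.
for f in $(find . -name 'docker-compose*.yml' -type f | sort); do
  matrix="${matrix}${sep}\"${f}\""
  sep=','
done
matrix="${matrix}]"

# A GitHub Actions step would append this line to "$GITHUB_OUTPUT" so a
# later job could consume it via fromJSON as strategy.matrix.compose_file.
echo "compose_files=${matrix}"
```

Because the list is computed at run time, adding a new Compose file adds a new matrix entry with no workflow edits.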

I also replaced most of the fixed delays that guessed when MariaDB would be ready with a healthcheck that knows. This shaves the better part of a minute off almost every test configuration.
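At the Compose level, this kind of fix is a `healthcheck` on the database service plus `depends_on` with `condition: service_healthy` on its dependents. Where a script still has to wait, a bounded poll does the same job as a guessed sleep, but finishes as soon as the service is actually up. Here is a generic sketch; the probe command in the comment is an assumption, not the project's exact invocation:

```shell
#!/bin/sh
# wait_until: run a probe command once a second until it succeeds or a
# timeout (in seconds) lapses. Replaces fixed sleeps that guess readiness.
wait_until() {
  _timeout=$1
  shift
  while [ "$_timeout" -gt 0 ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    sleep 1
    _timeout=$((_timeout - 1))
  done
  return 1
}

# In CI this might look like (service and command names are illustrative):
#   wait_until 90 docker compose exec -T mysql mariadb-admin ping
```

The payoff is that the wait costs exactly as long as the service takes to come up, instead of a worst-case constant on every run.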

Now developers get faster, more reliable feedback on their contributions, and the system can automatically validate changes across multiple configurations without anyone having to remember to update a hardcoded list somewhere. More importantly, this creates the foundation for validating AI contributions at scale—when an AI system suggests changes to multiple configurations, we can test them all automatically.

Making Development Environments Predictable

The Docker and development environment improvements tell a similar story. I consolidated multiple scattered Docker Compose files into a coherent system and retired legacy configurations that only served to confuse new contributors. The goal wasn't just cleanup: it was creating predictable, consistent environments where both human developers and AI systems can reliably test their changes.

The OpenEMR Docker Images

OpenEMR Docker images are not built in the openemr/openemr repository but in a separate openemr/openemr-devops repository dedicated to tooling that doesn't ship with the EMR software itself. Until now, the first time changes to those Docker images were tested was in the openemr test suite, long after the images were built. Furthermore, the images are complex, both in their construction and in their entrypoint, which itself makes significant changes to the OpenEMR runtime after startup.

So I added tests to that repository as well, ensuring that OpenEMR doesn't ship with failures that surface before the software tests can even run. I hope this opens the door to making these images faster to build, smaller, and more secure.
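As one example of the kind of cheap, early check such a suite can include (a sketch, not the actual openemr-devops tests): syntax-check every shell script in the build context before any image is built, since the entrypoint is itself a substantial shell program.

```shell
#!/bin/sh
# Sketch: fail fast if any shell script under a directory has a syntax
# error. `sh -n` parses without executing. Directory layout is illustrative.
check_scripts() {
  fail=0
  for script in $(find "$1" -name '*.sh' -type f); do
    if ! sh -n "$script"; then
      echo "syntax error in ${script}" >&2
      fail=1
    fi
  done
  return "$fail"
}
```

In CI, something like `check_scripts docker/` (path illustrative) would run before `docker build`, so a broken entrypoint fails in seconds rather than after a full image build.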

Code Coverage

One addition that actually made tests slower was comprehensive code coverage reporting—but it's worth the trade-off. For the first time, we can see exactly what code is (and isn't) exercised by our test suite. This visibility is crucial for understanding the quality and completeness of our testing, especially as we consider AI-generated contributions that might touch less-tested code paths. Anyone can view the coverage report for any test run by downloading the htmlcov artifact (Actions → Test → Summary → Artifacts in GitHub), making coverage transparent and accessible to all contributors.
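For those who prefer the command line, the same artifact can be pulled with the GitHub CLI. A hedged sketch: the workflow name `Test` and artifact name `htmlcov` match the article, while the run selection and output directory are illustrative choices.

```shell
#!/bin/sh
# Sketch: download the htmlcov artifact from the most recent "Test"
# workflow run using the GitHub CLI (requires `gh` and authentication).
download_coverage() {
  run_id=$(gh run list --workflow Test --limit 1 \
    --json databaseId --jq '.[0].databaseId')
  gh run download "$run_id" --name htmlcov --dir htmlcov
}
```

After running it, opening the report's index.html in a browser shows the per-file coverage breakdown.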

The Human Impact: Faster Feedback, Lower Barriers

These delivery system changes create a markedly better experience for contributors. There's a lot of low-hanging fruit in a system as large as OpenEMR, and immediate, actionable feedback means less risk that a seemingly small change causes big, uncaught problems. The matrix-based testing means new contributors don't need to understand the full complexity of the test infrastructure; they just need to know that their changes will be tested automatically across all relevant configurations.

For experienced developers, the reduced maintenance overhead is equally important. Adding a new test configuration no longer requires updating multiple files and remembering all the places where hardcoded lists live. The system discovers and tests new configurations automatically, letting developers focus on solving problems rather than maintaining infrastructure.

Setting the Stage for AI Collaboration

But perhaps more importantly, as AI-generated code proliferates, scaling review becomes the bottleneck, and automating as much of it as possible becomes crucial. These changes lay the groundwork for effective AI collaboration in OpenEMR development. The enhanced CI pipeline can rapidly validate AI-generated contributions across multiple configurations. The coverage reporting helps identify areas where AI contributions might introduce regressions. The Docker improvements ensure AI-generated code is tested in the same environment in which it will run in production.

The matrix-based testing infrastructure means AI contributions can be validated at scale without overwhelming human maintainers. When an AI system suggests changes to ten different configurations, we can test them all automatically and provide focused feedback on what works and what doesn't.

Looking Forward: What This Work Enables

This work paves the way for more ambitious improvements. With reliable health checks and streamlined processes, we can focus on optimizing build times. The solid testing foundation enables confident security enhancements. Infrastructure cleanup provides the groundwork for smaller, more secure Docker images. And with reliable CI and comprehensive testing, we can add new features with confidence.

But the real potential lies in what this enables for the future of software development in healthcare. As AI becomes increasingly capable of generating code, we need systems that can quickly and reliably validate those contributions while maintaining the high standards required for healthcare software.

The Bigger Picture: Preparing for Tomorrow's Development

The improvements made here represent more than technical upgrades; they're preparation for a future where human creativity and AI capability work together in healthcare software development. The barriers to meaningful contribution are lower without compromising quality. The feedback loops are faster without sacrificing thoroughness. The maintenance overhead is reduced without losing functionality.

This foundation makes it possible for both human developers to contribute more easily and for AI systems to receive rapid, reliable feedback on their contributions. Most importantly, it enables maintainers to focus on higher-level architectural decisions rather than fighting infrastructure issues, while ensuring the project can scale to accommodate more contributors without sacrificing the reliability and security that healthcare software demands.

As I continue this work, the next phases will build on this foundation to expand test coverage, improve image build times, and add static analysis features to handle entire categories of problems that can't easily be caught by unit testing. We'll explore how AI and human developers can work together effectively in healthcare software development.

The code and improvements discussed are available in the openemr and openemr-devops repositories on GitHub.
