I created an open source FHIR server in Java back in 2013 and committed the code to SourceForge at this URL - https://sourceforge.net/p/hl7-fhir/code/ci/master/tree/# (Note: interestingly, this is even earlier than the HAPI FHIR server, which started in Feb 2014)
However, as our company's primary tech stack is Microsoft .NET, I did not make further updates or commits.
Recently, as we are working to adopt HL7 FHIR as a national standard for Singapore, I started thinking about resuming the FHIR server implementation in Java. One option is to build on the HAPI FHIR open source code, which I also discussed in my post - https://healthinterconnect.blogspot.com/2024/09/hl7-fhir-difficult-to-implement.html
However, with the recent rapid development of GenAI, MCP and Skills, I realised that perhaps the best way to start another FHIR Java server is a complete rewrite - but not by writing the code myself. Instead, my effort should be spent primarily on providing design ideas to the AI, letting the AI come up with the implementation, and continuously reviewing and updating the implementation plan.
Go to this repo - https://github.com/sg-victorchai/fhir4java-agents - to check out the implementation plan; feel free to submit your reviews and comments.
Hopefully, within a short time, we can run a robust, enterprise-level set of FHIR API services.
The paragraphs below describe the 7 design steps I worked through in collaboration with Claude Code.
1. Initial design requirements
I need to create a project which fulfills the following purposes:
1. The project shall be a Java application using the latest Java 25 with JPA, and the database shall be PostgreSQL. Both the application and the database shall run in Docker containers.
2. The Java application will expose API services for CRUD and extended operations to support HL7 FHIR. It needs to support HL7 FHIR defined resources in both R4B and R5, and any custom resources defined by the project team.
3. The API services shall provide a number of plugins for audit log, telemetry, and end-to-end performance tracking of each internal step of the API services. They also need to support business logic plugins for the project team to implement business workflows.
4. Internally, the API services shall perform these validations: for create/update operations, validate the payload against the defined HL7 FHIR profile specified in the FHIR StructureDefinition resource; for search operations, validate whether the search parameters are in the list of search parameters defined for the resource; for extended operations, validate whether the operation is defined in an OperationDefinition.
5. The API services shall validate whether the interaction is enabled or disabled before serving the request.
6. The API services need to be configuration driven: for any new resources or new extended operations, the project team just needs to add the new resource or extended operation without code changes.
7. For extended operations, the API service internally performs a patch operation to update the database records, or performs a search if the extended operation is to search records from the database.
8. For the database design, all the audit log and performance tracking data shall live in its own database schema. For all resources, including HL7 FHIR defined resources and custom resources, it shall provide configuration to allow the project team to decide whether the resource data is stored in its own dedicated database schema or in a database schema shared with other resources.
9. The Java application can make use of common libraries from the HAPI FHIR server for FHIR compliance, object serialization and de-serialization, and validation (a minimal validation sketch follows this list).
2. Corrected Claude on Java version
User answered Claude's questions:
⎿ · Java 25 is not yet released (current latest is Java 23). Which Java version would you like to use? → According to the OpenJDK web site, Java 25 LTS is available; can you check again?
· How should the plugin system load custom business logic plugins? → Spring Bean-based (Recommended)
· How should custom FHIR resources be defined? → Configuration files defined as FHIR StructureDefinition JSON files loaded at startup
· What web framework would you prefer for the REST API layer? → Spring Boot (Recommended)
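As an illustration of the "StructureDefinition JSON loaded at startup" answer above, here is a hedged sketch of a Spring component that scans a config folder and parses each file with the HAPI parser. The folder path and class name are assumptions, not what Claude generated.

```java
import ca.uhn.fhir.context.FhirContext;
import jakarta.annotation.PostConstruct;
import org.hl7.fhir.r5.model.StructureDefinition;
import org.springframework.stereotype.Component;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Stream;

@Component
public class CustomResourceRegistry {

    private final FhirContext ctx = FhirContext.forR5();
    // Keyed by StructureDefinition.url so profiles can be looked up during validation.
    private final Map<String, StructureDefinition> profiles = new HashMap<>();

    @PostConstruct
    public void loadProfiles() throws IOException {
        // Folder name is an assumption modelled on the fhir-config convention discussed later.
        try (Stream<Path> files = Files.list(Path.of("fhir-config/structuredefinitions"))) {
            files.filter(p -> p.toString().endsWith(".json")).forEach(p -> {
                try {
                    StructureDefinition sd = ctx.newJsonParser()
                            .parseResource(StructureDefinition.class, Files.readString(p));
                    profiles.put(sd.getUrl(), sd);
                } catch (IOException e) {
                    throw new IllegalStateException("Failed to load profile " + p, e);
                }
            });
        }
    }

    public StructureDefinition get(String profileUrl) {
        return profiles.get(profileUrl);
    }
}
```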
3. For each FHIR resource's search parameters, asked Claude to update the plan to use JSON instead of YAML, and to reference the HAPI FHIR server.
Tool use rejected with user message: for the search parameters specified in the resource configuration file such as patient.yml, pls do not specify the search parameters in patient.yml; instead, please follow these instructions - 1. create a resource-specific SearchParamConfig in the fhir-config/searchparameters folder; 2. there shall be one base set of search parameters defined for the HL7 FHIR common search parameters, and another for resource-specific search parameters - you can take reference from a complete FHIR server's CapabilityStatement resource; 3. the server shall also expose the FHIR server's CapabilityStatement at the /metadata path; one example is the HAPI server https://hapi.fhir.org/baseR5/metadata
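For reference, a hedged sketch of what the /metadata endpoint could look like as a plain Spring MVC controller. In the actual plan the CapabilityStatement would be assembled from the loaded resource and search parameter configuration rather than hard-coded as here; the controller name and title are assumptions.

```java
import ca.uhn.fhir.context.FhirContext;
import org.hl7.fhir.r5.model.CapabilityStatement;
import org.hl7.fhir.r5.model.Enumerations;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.Date;

@RestController
public class MetadataController {

    private final FhirContext ctx = FhirContext.forR5();

    // FHIR servers advertise their capabilities at GET [base]/metadata.
    @GetMapping(value = "/metadata", produces = "application/fhir+json")
    public String capabilityStatement() {
        CapabilityStatement cs = new CapabilityStatement();
        cs.setTitle("fhir4java-agents");
        cs.setStatus(Enumerations.PublicationStatus.ACTIVE);
        cs.setDate(new Date());
        // In the real server, the rest/resource/searchParam sections would be
        // generated from the per-resource configuration files, not hard-coded.
        cs.addRest();
        return ctx.newJsonParser().setPrettyPrint(true).encodeResourceToString(cs);
    }
}
```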
4. Added further design requirements on the plugin design
Tool use rejected with user message: need to further validate two areas. 1. for the plugins, can you elaborate the detailed design - how will the API service handle the execution of the series of plugins, from audit log, telemetry and performance tracking, to the business logic plugins? also, there might be one or many business logic plugins; how will this be handled? for the technical plugins, we also need to support future plugins for authentication and authorisation. lastly, the plugins need to support synchronous or asynchronous execution. 2. pls also include a plugin to support cache; for the cache provider, pls support both an in-memory cache for dev testing and Redis cache for other purposes
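To visualise the intent behind this request, here is a rough sketch of one way the plugin chain could be modelled. All interface and class names (FhirPlugin, RequestContext, PluginChainExecutor) are assumptions for illustration, not the repo's actual design.

```java
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.concurrent.CompletableFuture;

/** Hypothetical contract shared by audit, telemetry, auth, cache and business plugins. */
interface FhirPlugin {
    int order();                       // execution order in the chain
    boolean asynchronous();            // fire-and-forget vs. blocking
    void beforeInteraction(RequestContext ctx);
    void afterInteraction(RequestContext ctx);
}

/** Minimal request context carried through the chain (illustrative only). */
record RequestContext(String resourceType, String interaction, String payload) { }

@Component
class PluginChainExecutor {

    private final List<FhirPlugin> plugins;

    // Spring injects every FhirPlugin bean, so adding a plugin needs no change here.
    PluginChainExecutor(List<FhirPlugin> plugins) {
        this.plugins = plugins.stream()
                .sorted((a, b) -> Integer.compare(a.order(), b.order()))
                .toList();
    }

    void runBefore(RequestContext ctx) {
        for (FhirPlugin plugin : plugins) {
            if (plugin.asynchronous()) {
                CompletableFuture.runAsync(() -> plugin.beforeInteraction(ctx));
            } else {
                plugin.beforeInteraction(ctx);   // e.g. auth checks must block the request
            }
        }
    }

    void runAfter(RequestContext ctx) {
        for (FhirPlugin plugin : plugins) {
            if (plugin.asynchronous()) {
                CompletableFuture.runAsync(() -> plugin.afterInteraction(ctx));
            } else {
                plugin.afterInteraction(ctx);
            }
        }
    }
}
```

A cache plugin would likely sit behind Spring's cache abstraction so the provider can be switched between an in-memory cache and Redis purely by configuration.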
5. Added further design requirements on the audit log design
Tool use rejected with user message: a few more areas to be refined. 1. for the audit log, we need to ensure it captures the resource id and version both before and after an update; for read/create/delete, record the specific resource id and version; for search, it needs to record the search parameters; for extended operations, it needs to record the operation name and the parameters provided. for the payload provided with each interaction, pls store it in a separate table. 2. for telemetry, it needs to support OTEL. 3. the telemetry plugin shall also support different levels of tracing, such as just recording the different steps executed by the API services, or including the detailed SQL statements executed. 4. in terms of the persistence layer, please also provide an option so that the actual data is persisted outside of the system instead of in the internal database. Pls update the implementation plan
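As a rough illustration of the OTEL requirement, a telemetry plugin step could open a span per internal step and only attach SQL detail when the configured tracing level allows it. The class name and the level names (STEPS, SQL) are assumptions, not the plan's final terminology.

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

class TelemetryPlugin {

    /** Assumed tracing levels: STEPS records coarse steps, SQL adds statement detail. */
    enum TracingLevel { STEPS, SQL }

    private final Tracer tracer = GlobalOpenTelemetry.getTracer("fhir4java");
    private final TracingLevel level;

    TelemetryPlugin(TracingLevel level) {
        this.level = level;
    }

    /** Wrap one internal step of the API service in an OTEL span. */
    void traceStep(String stepName, String sqlStatement, Runnable step) {
        Span span = tracer.spanBuilder(stepName).startSpan();
        try (Scope scope = span.makeCurrent()) {
            if (level == TracingLevel.SQL && sqlStatement != null) {
                // Only record SQL detail at the most verbose level.
                span.setAttribute("db.statement", sqlStatement);
            }
            step.run();
        } finally {
            span.end();
        }
    }
}
```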
6. Added further design requirements on the data persistence design to support either internal database persistence or an external data repository
Tool use rejected with user message: for the external persistence option, pls provide these capabilities. 1. each resource can be configured for either internal database or external persistence. 2. for external persistence, pls store the data in the internal database first, and then trigger the external service; if the external service fails, then provide an option to roll back the changes for create/update. Pls update the implementation plan
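A hedged sketch of the write-through behaviour described above, using Spring's programmatic transaction API; InternalResourceRepository and ExternalPersistenceClient are illustrative placeholders rather than the plan's actual interfaces.

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.support.TransactionTemplate;

@Service
class WriteThroughPersistenceService {

    /** Placeholder for the JPA repository that writes to the internal PostgreSQL schema. */
    interface InternalResourceRepository { void save(String resourceJson); }

    /** Placeholder for the call to the configured external data repository. */
    interface ExternalPersistenceClient { void push(String resourceJson); }

    private final InternalResourceRepository internalRepo;
    private final ExternalPersistenceClient externalClient;
    private final TransactionTemplate tx;

    WriteThroughPersistenceService(InternalResourceRepository internalRepo,
                                   ExternalPersistenceClient externalClient,
                                   TransactionTemplate tx) {
        this.internalRepo = internalRepo;
        this.externalClient = externalClient;
        this.tx = tx;
    }

    /** Store internally first, then push externally; roll back the local write on failure. */
    void createOrUpdate(String resourceJson) {
        tx.executeWithoutResult(status -> {
            internalRepo.save(resourceJson);
            try {
                externalClient.push(resourceJson);
            } catch (RuntimeException e) {
                // Rollback option from the design discussion: undo the internal write
                // when the external service rejects the create/update.
                status.setRollbackOnly();
                throw e;
            }
        });
    }
}
```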
7. Added further design requirements to the plugin architecture for native MCP support
for the plugin design, can you explore the possibility of using the MCP protocol?
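To connect this idea back to the plugin chain sketched earlier: one hybrid approach is an adapter that implements the same plugin contract but delegates to a remote MCP server. Everything below (McpToolClient, the tool name) is a hypothetical placeholder; the real design would sit on an actual MCP client library.

```java
import java.util.Map;

/** Hypothetical abstraction over an MCP client's tool-call capability. */
interface McpToolClient {
    Map<String, Object> callTool(String toolName, Map<String, Object> arguments);
}

/**
 * Adapter that lets a remote MCP server participate in the same plugin chain
 * as local Spring Bean plugins (the hybrid architecture).
 */
class McpPluginAdapter implements FhirPlugin {

    private final McpToolClient client;
    private final String toolName;   // e.g. "audit-before-interaction" exposed by the MCP server

    McpPluginAdapter(McpToolClient client, String toolName) {
        this.client = client;
        this.toolName = toolName;
    }

    @Override public int order() { return 100; }

    @Override public boolean asynchronous() { return true; }   // remote calls kept off the hot path

    @Override
    public void beforeInteraction(RequestContext ctx) {
        client.callTool(toolName, Map.of(
                "resourceType", ctx.resourceType(),
                "interaction", ctx.interaction()));
    }

    @Override
    public void afterInteraction(RequestContext ctx) {
        // A real adapter might call a second tool here, or pass along the response payload.
    }
}
```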
With the summary below, I concluded today's design collaboration with Claude Code.
⏺ Thank you! It's been a great session. Here's a summary of what we accomplished:
GitHub Repository: https://github.com/sg-victorchai/fhir4java-agents
Commits:
1. 3b11a78 - Initial implementation plan (2,651 lines)
2. e2a23ee - README with project overview
3. 9c16a53 - MCP plugin architecture (+305 lines)
4. 89eb890 - Updated README with hybrid plugin & AI self-discovery
Key Design Decisions Documented:
- Java 25 LTS + Spring Boot + PostgreSQL + Docker
- FHIR R4B/R5 with HAPI FHIR libraries
- Hybrid plugin architecture (Spring Bean + MCP)
- AI self-discovery of plugins via MCP protocol
- OTEL telemetry with configurable tracing levels
- Enhanced audit logging with before/after versioning
- Write-through persistence with rollback capability
- Configuration-driven resources and operations
When you return, just reference this conversation or the implementation plan in your repo. Enjoy, and see you next time!