Over the past few years I have come across people holding two extreme views: one is that 'standards are magic', the other is that 'standards never work'.
I am not going to address the second view here, since I have concluded that it is primarily caused by people holding the first one. As long as we address the root cause of the first view, hopefully we can help the second group see the benefits of standards.
To avoid confusion and set the context correctly, the HIT standards I am talking about here are interoperability standards such as HL7v2, HL7v3 RIM and CDA, excluding terminology standards.
People who hold the view that 'standards solve everything' tend to share the following characteristics:
1. A lack of the mindset and experience of architecture design
2. A theoretical view of the world, and the pursuit of a 'silver bullet'
3. Not Invented Here (NIH) Syndrome
Let me share a few real conversations I had in the past. Once I was asked how we know that an interoperability standard we have developed is ready for implementation. Guess what my reply was; my simple suggestion was the following question:
"Do you know HOW to implement this standard if you are the one to implement?"
I went on to say that if the answer is "No", then I am 100% sure there is something very wrong with it, even before I look at it.
In another situation, I was asked how a standard can help churn good data out of low-quality data. That is really amusing! How can an interoperability standard by itself turn bad data into good data? If some data is missing from a received message, yes, the receiving system may be able to figure out the missing data, but even where that is achievable it is the receiving system's own logic, not the standard, that solves it.
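To make that point concrete, here is a minimal Python sketch (the field names and registry are hypothetical, not from any real standard or system) showing where that kind of gap-filling actually lives: in the receiving system's logic and local data, not in the exchange standard itself.

```python
# Minimal sketch (hypothetical field names): the receiving system, not the
# standard, decides how to handle data missing from an incoming message.
from typing import Optional

# Pretend local patient registry that the receiver already maintains.
LOCAL_REGISTRY = {
    "PID-12345": {"date_of_birth": "1980-04-02", "sex": "F"},
}

def complete_lab_result(message: dict) -> dict:
    """Fill gaps in a parsed lab-result message using local knowledge."""
    result = dict(message)
    patient_id: Optional[str] = result.get("patient_id")

    # The standard can only mark a field optional or required; deciding how
    # to recover a missing value is receiver-side logic.
    if not result.get("date_of_birth") and patient_id in LOCAL_REGISTRY:
        result["date_of_birth"] = LOCAL_REGISTRY[patient_id]["date_of_birth"]

    # Some gaps simply cannot be recovered; flag them for follow-up.
    if not result.get("specimen_collected_at"):
        result.setdefault("issues", []).append("missing specimen collection time")

    return result

if __name__ == "__main__":
    incoming = {"patient_id": "PID-12345", "test_code": "GLU", "value": "5.4"}
    print(complete_lab_result(incoming))
```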
People who hold the 'silver bullet' view tend to misplace the use or application of one particular technology or standard. Take the case of ISO13606 or openEHR, which I find very useful for clinical requirement definition, as I commented in the post Complexity of Standards. However, if we try to turn that standard into a universal data exchange and OLTP/OLAP model, that is where the standard scares away the very people who wanted to adopt standards in the first place. This kind of debate is just as useless as arguing over 'whether to use SQL or MapReduce'; the right answer is 'use the technology where it thrives'.
The last one, NIH syndrome, is pervasive. Different people do have different views and opinions, and I guess everyone wants to create his or her own thing to leave some legacy behind; I am no exception. This can only be solved by the organization, which must enforce the rule of putting personal views and opinions behind the collective objective of the organization and the wider community. If you do not use the standards the industry has already developed, and continue to enhance and extend them, then what is the point of developing standards in the first place? It defeats the whole purpose of using standards.
So how do you equip yourself with the architectural mindset to solve the first two problems mentioned above?
Let's look at the definition of architecture used in ANSI/IEEE Std 1471-2000:
"The fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution."
The most difficult part of architecture design is reflected in the second part of the above definition: the principles governing its design and evolution, which ensure the architecture's agility (extensibility, configurability, customizability and so on). It is extremely easy to design an architecture or information model without considering its evolution, but this kind of system is extremely fragile. I suspect this point relates to some of the projects you have been involved in, and you might be smiling. If so, please continue to read.
So how do we ensure the architecture's agility?
1) Why? Ask the WHY question before you develop anything; whether it is an interoperability standard or a piece of software, it is all about design. Only when you ask WHY will you be able to uncover the real objectives. The tendency is for users to give very vague requirements, and sometimes they are just clueless.
2) Anticipate! With a correct understanding of the objectives, you can perform some level of predictive analysis. How far and how deep you can predict depends on your experience; that is why architecture design is, to some extent, an art. (A code sketch of designing for anticipated change follows this list.)
3) Tradeoff. Architecture design is all about trade-offs; it is the design of different choices, and you can use ATAM (Architecture Tradeoff Analysis Method) to help you with that.
4) Outcome. Think about the desired outcome before starting to develop anything.
5) Engineering. We need strong engineering discipline to do the work. I mentioned this point in my previous post, Standards based Implementation.
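To make the "anticipate" point a little more concrete, here is a minimal Python sketch (the field names and mapping are made up for illustration, not drawn from any real standard) contrasting a hard-coded mapping, where every new data element forces a code change, with a configuration-driven one, where an anticipated new element is just another entry in the map.

```python
# Minimal sketch (hypothetical field names): designing for anticipated change.

# Fragile version: every newly required data element means editing this code.
def extract_fields_hardcoded(message: dict) -> dict:
    return {
        "patient_id": message.get("patient_id"),
        "test_code": message.get("test_code"),
    }

# More agile version: the mapping is configuration, so adding an anticipated
# element later is a one-line change to FIELD_MAP, not a refactor.
FIELD_MAP = {
    "patient_id": "patient_id",
    "test_code": "test_code",
    # "collected_at": "specimen_collected_at",  # future element: config only
}

def extract_fields_configured(message: dict, field_map: dict = FIELD_MAP) -> dict:
    return {target: message.get(source) for target, source in field_map.items()}

if __name__ == "__main__":
    msg = {"patient_id": "PID-1", "test_code": "GLU",
           "specimen_collected_at": "2011-08-24"}
    print(extract_fields_hardcoded(msg))
    print(extract_fields_configured(msg))
```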
To conclude, interoperability standards development is an architecture design activity; treat it seriously, realistically and holistically.
If you are in the middle of developing your own backyard standards, think twice! Are you on the right path?
Wednesday, August 24, 2011
The fallacy of constraining superset model
In HL7v3, though the overall development process runs from RIM --> DMIM + CMET --> RMIM, it is not strictly enforced in the tooling. For example, after we developed the DMIM for the Lab domain, when we developed the lab-result-specific RMIM, the RMIM was not strictly constrained from the DMIM; rather, we used the DMIM as a baseline and then added or removed elements as appropriate, or sometimes we developed the RMIM directly from the RIM. So it is not really a problem in HL7v3: although the overall process is "design by constraint", the actual content of the information model is not constrained from a superset information model, since the RMIM is not strictly constrained from the DMIM.
In other models such as openEHR and ISO13606, which use the archetype concept to develop reusable data structures for all use cases and then extend or constrain the archetypes for a specific use case's requirements, that is where we see the fallacy.
This idea of developing a superset model is quite attractive at first glance; however, in reality it becomes a burden and results in a lot of issues and challenges at implementation. Let me quote my original comment below to explain the fallacy.
Firstly, I have found it is not useful at the implementation level. For example, a universal (UV) or even national-level model may define 200 or more data elements, most of them optional, while the project-specific requirement needs only 20 data elements. In this case the big model will likely confuse the implementer, since he or she needs to fully understand the exact business use cases and relevance of all the other optional data elements and their potential impact on the project; and if the implementer does not really understand all these optional data elements, he or she will not be able to constrain the big model in the first place.
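As a rough sketch of that mismatch (the element names below are invented for illustration, not taken from any actual UV or national model), compare a superset-style class where nearly everything is optional with the small, project-specific model the implementer actually needs:

```python
from dataclasses import dataclass
from typing import Optional

# Superset-style model (sketch): almost everything is optional, so every
# implementer must work out which of these elements matter to their project
# before they can sensibly constrain it.
@dataclass
class SupersetLabObservation:
    observation_code: Optional[str] = None
    value: Optional[str] = None
    units: Optional[str] = None
    interpretation: Optional[str] = None
    method: Optional[str] = None
    device: Optional[str] = None
    performer: Optional[str] = None
    # ... imagine a couple of hundred more optional elements here

# Project-specific model (sketch): the handful of elements the project
# actually needs, with optionality decided by the local use case.
@dataclass
class GlucoseResult:
    patient_id: str
    observation_code: str
    value: float
    units: str
    collected_at: str
```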
Secondly, in order to address all possible requirements, the modeling process becomes extremely long. In the end, under time constraints or due to unclear use cases, the modeler may simply dump every business data requirement he or she can think of into the model without proper modeling rigor, and when the use cases become clearer the model needs refactoring. A model produced by this kind of process is what we would call "spaghetti code" in the software programming space: unsustainable, making the system extremely fragile and subject to constant refactoring whenever there is any slight new or additional requirement. The other practical question is: why go to all the trouble of satisfying 20 percent or less of the needs at the sacrifice of the majority 80 percent or more? In the end, even that 20 percent or less is not fully addressed.
Thirdly, from a technical implementation point of view, particularly for web service implementation, the strategy for ensuring payload backward compatibility is to allow the XML structure and data types to expand rather than contract. For example, in an existing XML payload we might define the data type of one data element as "integer" because all existing systems use integers; later, when a new system requires it to be "string", we can safely expand the data type to "string", since type expansion does not break backward compatibility (for the incoming request, not for the outgoing response). Similarly, XML structure expansion is safer than contraction for preserving backward compatibility. Given this technical reason and limitation, the model should not try to be a big, fat one built on loosely defined requirements; instead, it should start small, from the currently known requirements, and evolve across releases.
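Here is a minimal sketch of the expansion rule, using made-up payloads rather than any real specification: a receiver that widens an element's type from integer to string still handles everything existing senders produce, while the original integer-only receiver cannot handle the widened values.

```python
import xml.etree.ElementTree as ET

# Version 1 payloads typed <order-id> as an integer.
OLD_PAYLOAD = "<lab-order><order-id>1042</order-id></lab-order>"

# Version 2 widens the type to string so new senders can use values like
# "LAB-1042"; this is expansion, and it does not break existing senders.
NEW_PAYLOAD = "<lab-order><order-id>LAB-1042</order-id></lab-order>"

def parse_order_v1(xml_text: str) -> dict:
    """V1 receiver: order-id must be an integer."""
    root = ET.fromstring(xml_text)
    return {"order_id": int(root.findtext("order-id"))}

def parse_order_v2(xml_text: str) -> dict:
    """V2 receiver: order-id widened to string; old and new payloads both parse."""
    root = ET.fromstring(xml_text)
    return {"order_id": root.findtext("order-id")}

if __name__ == "__main__":
    print(parse_order_v2(OLD_PAYLOAD))   # expansion: old payloads still accepted
    print(parse_order_v2(NEW_PAYLOAD))   # and new ones too
    print(parse_order_v1(OLD_PAYLOAD))   # the old receiver still works on old data
    # parse_order_v1(NEW_PAYLOAD) would raise ValueError: going in the other
    # direction (contraction) is what breaks compatibility.
```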