Large organisations in information-driven industries are gradually moving away from building software and data tools themselves. Instead, the task of creating software is outsourced to anyone with an interest in it, using an approach referred to as Model Driven Development (MDD). Considered a fanciful idea just a few years ago, MDD has moved from novel concept to pragmatic business necessity in large corporations. Apple's App Store for the iPhone is just one example of the success of this approach. Implementing MDD in healthcare could spare the sector from spending billions on complex IT projects while delivering comparable or even better software tools.
The main benefits of such an approach in healthcare could be:
- Reduction in development cost and time
- Interoperability between different software
- Localised specifications to better address complex variable needs
- Better opportunities for clinicians and patients to engage in software development
The MDD approach requires healthcare authorities to treat healthcare IT as an ecosystem rather than as a project. The authority then takes on the responsibility for establishing standards and benchmarking, leaving immense room for policy innovation. MDD-based software has proven inherently adaptable to changing circumstances, making it an excellent fit for healthcare settings, where changes in government, technology or medical knowledge demand resilient IT solutions (“future proofing”).
Healthcare IT can benefit by learning from other information-driven industries. In most cases, however, project leadership is unable to see the big picture across both the IT and health industries, because in healthcare the creator of technological and data solutions is not usually the consumer of those tools. To succeed and stay ahead of the curve, healthcare IT needs to foster leadership positions for people uniquely qualified to take on integrative roles that can supply such vision. In the UK, some healthcare authorities have started recruiting Chief Clinical Information Officers (CCIOs) to their boards, which is an encouraging step in this direction. Otherwise we will perpetuate the strange dichotomy whereby patients and clinicians are technophiles at home with their smartphones, but technophobes once inside healthcare settings.
Medical information doubles every few years, and the rate at which new medical information is produced is accelerating. Advances in medical knowledge often render established treatment models obsolete. In a typical year, frontline clinical staff have some 22,000 new peer-reviewed articles, 30 new drugs and 6,000 new drug-compatibility combinations to consider on top of their existing knowledge of medicine. The number of drugs has grown 500% in the last decade alone, and technological advances in medical imaging are producing more data than ever before for the same procedures (e.g. high-resolution CT scans). Not only has medical data exploded in recent years, it has also become more accessible to patients and providers alike.
Traditionally, meaningful information has been extracted from large datasets by sampling a representative portion of the data. Sampling methods, with all their limitations, were essential when we lacked the tools and resources to comprehend entire datasets, but with newer technology this limitation is rapidly being removed. One such approach, encompassing various techniques commonly referred to as Big Data, is helping information-driven industries to analyse entire datasets regardless of their size and scope. At the heart of these new techniques is a simple premise: why analyse a fraction of the data when we can analyse everything? Big data also helps us move away from post-hoc statistical analyses, which cannot provide real-time measurements. Although other information-driven industries have been quicker to adopt big data models, the healthcare industry is uniquely placed to benefit the most from them, for the following reasons:
- Most healthcare data is recorded and validated.
- The industry spends billions on research and development.
- Healthcare data is ever increasing, stretching existing resources.
- It would help to focus on preventative measures.
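The contrast between sampling and the "analyse everything" premise can be illustrated with a minimal sketch. The dataset below is simulated, and the numbers are invented purely for illustration; the point is that an estimate from a sample carries sampling error, whereas a computation over the full dataset does not.

```python
import random

random.seed(42)

# Simulated records for 1,000,000 patients: 1 = condition present (3% base rate).
population = [1 if random.random() < 0.03 else 0 for _ in range(1_000_000)]

# Traditional approach: estimate prevalence from a representative sample.
sample = random.sample(population, 1_000)
sample_estimate = sum(sample) / len(sample)

# "Big data" approach: compute over the entire dataset, with no sampling error.
true_prevalence = sum(population) / len(population)

print(f"sample estimate: {sample_estimate:.4f}")
print(f"full-data value: {true_prevalence:.4f}")
```

The two figures will usually be close, but only the full-dataset value is exact, and only the full dataset allows drill-down into arbitrarily small subgroups without re-sampling.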
Big data models have emerged only in the last few years, and their growth has been fuelled by internet-based companies like Google, Yahoo! and Facebook, which face the challenge of meaningfully analysing datasets generated by billions of individuals.
Google Flu Trends is just one example, in which algorithmic analysis of big datasets (i.e. all search queries) provides almost real-time estimates of current flu activity throughout the world. This online tool, designed by engineers at Google with little background in healthcare, is accurate enough to closely match official government estimates of flu activity, with the added advantage of spotting emerging trends 2-4 weeks before healthcare agencies.
This is just one example of the transformative power of big data in healthcare.
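The core idea behind such nowcasting can be sketched very simply: fit a model linking historical search-query volumes to officially reported flu cases, then apply it to this week's query volume, which is available long before official figures. The sketch below uses ordinary least squares on a handful of invented data points; the real system used far more query terms and a more sophisticated model.

```python
# (weekly flu-related query volume, officially reported cases) -- invented data.
history = [(120, 300), (150, 390), (200, 520), (90, 210), (170, 450)]

n = len(history)
mean_q = sum(q for q, _ in history) / n
mean_c = sum(c for _, c in history) / n

# Ordinary least-squares fit: cases ≈ intercept + slope * queries.
cov = sum((q - mean_q) * (c - mean_c) for q, c in history)
var = sum((q - mean_q) ** 2 for q, _ in history)
slope = cov / var
intercept = mean_c - slope * mean_q

# Nowcast from this week's query volume, available in near real time.
this_week_queries = 180
estimated_cases = intercept + slope * this_week_queries
print(f"estimated flu cases this week: {estimated_cases:.0f}")
```

Because query volumes are observed immediately while surveillance reports lag by weeks, even this crude model yields an estimate weeks ahead of official publication.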
One of the core drivers of the healthcare industry is information about health and disease. Clinicians elicit information about illness from patients, filter the signal from the noise and produce actionable information, which leads to interventions. They are also burdened with storing (documenting) and disseminating this information to patients and other relevant healthcare professionals. Some estimates put the time a clinician spends eliciting, documenting, storing and disseminating information at up to two-thirds of their clinical time, with less than 20% actually spent performing interventions (treatment). The role of information is so central to healthcare that it can be argued that its main business is the business of information. Any factor that alters the dynamics of information in our society is therefore bound to affect the business of healthcare, and the last two decades have witnessed many fundamental shifts in these dynamics.
If we look beyond the obvious proliferation of technology, in the form of round-the-clock access to information from the internet, mobile computing devices and online social networking, we find that the way we consume information is undergoing a fundamental change. As a society, we are rapidly moving away from synchronous interpersonal communication (in which both the originator and the recipient of information must be in the same place at the same time) as our dominant form of communication. This shift towards asynchronous communication is not new; it has been part of human civilisation ever since we learned to draw in caves. Asynchronous communication became significant after the invention of paper and printing, but the internet is giving it the impetus to become the dominant form of communication in human society. It also means that an ever larger amount of information is accessible to an ever greater number of individuals.
Fig 1. Medical information triangle: Dominance of the asynchronous flow of information
In healthcare settings this means the clinician may no longer be a patient's primary information provider. Today both doctor and patient have become voracious information consumers, using easily accessible knowledge to inform their decisions in millions of different ways. This shift is gradually changing public perceptions of healthcare professionals, who are now valued less for their knowledge and more for their skills and experience. With easy access to detailed information, a tech-literate patient may possess more information about his or her particular illness than the doctor, who had to study several thousand illnesses. Despite this, our healthcare IT systems are usually designed to suit 20th-century practices, i.e. one-to-one synchronous interactions between doctors and patients. This approach fails to capitalise on the advantages offered by asynchronous communication, which has the potential to dramatically increase the reach of individual clinicians. Most importantly, it fails to correct one of the most significant drawbacks of the dominance of asynchronous communication: it is more error prone than synchronous modes. Unlike synchronous communication, which has spontaneous feedback loops (doubts can be clarified immediately by the interacting parties in real time), feedback loops have to be deliberately created in asynchronous information ecosystems.
As an industry we have made little progress in this area, and still tend to treat most patient information as the proprietary property of organisations. By evolving future healthcare systems in the direction of this shift in the flow of health information, the industry can potentially boost the productivity of individual clinicians. Rather than being limited by a traditional physical caseload of a few hundred patients, a future clinician could practise on an unprecedented scale when aided by the right technology and policies.
Healthcare data is mainly collected and stored in the following three separate data pools:
- Clinical data
- Financial data
- Research data
Because the responsibility for collecting, storing and utilising this data rests with different individuals or institutions (clinical with hospitals and clinics; financial with managers and governments; research with universities and pharmaceutical companies), the pools remain largely isolated from each other, with little interconnectivity. The disparate systems on which these datasets reside are typically unsuitable for complex integrative analysis. This disconnect can be detrimental to the early identification of important healthcare trends or adverse events, as highlighted by the withdrawal of the popular pain-relief drug rofecoxib in 2004.
Rofecoxib was approved by the US Food and Drug Administration in 1999 and gained widespread acceptance amongst physicians, who prescribed it to over 80 million people worldwide. In 2004, Kaiser Permanente, a California-based integrated managed-care consortium, connected clinical and financial data to compare the risk of adverse cardiovascular events in users of rofecoxib against a similar drug; it found that rofecoxib might have been responsible for more than 27,000 avoidable myocardial infarctions (heart attacks) and sudden cardiac deaths between 1999 and 2003. This study led to a voluntary withdrawal of the drug from the market. Interestingly, between 1999 and 2004 similar conclusions had been suggested by a number of small-scale studies, but none was considered large enough to raise sufficient concern. The simple act of combining clinical and financial data provided the crucial research dataset required to trigger one of the largest medication withdrawals in history.
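The mechanics of this kind of analysis amount to joining two data pools on a shared patient identifier and comparing outcome rates across exposure groups. The sketch below is a deliberately tiny, hypothetical illustration of that join; all patient IDs, drug names and events are invented, and a real pharmacovigilance study would of course adjust for confounders and test for statistical significance.

```python
from collections import Counter

# Financial/pharmacy data pool: patient ID -> drug dispensed (invented).
prescriptions = {
    "p1": "drug_A", "p2": "drug_A", "p3": "drug_B",
    "p4": "drug_A", "p5": "drug_B",
}

# Clinical data pool: patient IDs with a recorded cardiovascular event (invented).
cardiac_events = {"p1", "p4", "p5"}

# Join the pools on patient ID and compare event rates per drug.
exposed = Counter(prescriptions.values())                   # patients per drug
events = Counter(drug for p, drug in prescriptions.items()  # events per drug
                 if p in cardiac_events)
rates = {drug: events.get(drug, 0) / exposed[drug] for drug in exposed}

for drug, rate in sorted(rates.items()):
    print(f"{drug}: event rate {rate:.2f}")
```

The signal only becomes visible once the two pools are linked: neither the prescription records nor the clinical events reveal a differential risk on their own.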
Although the above example is a powerful indicator of the potential benefits of an integrated approach to healthcare data, large-scale implementations of such approaches have proven challenging, and they remain underutilised. In the UK, the implementation of Payment by Results (PbR) in the National Health Service, which ties financial remuneration to clinical outcomes, has produced mixed results. Such approaches usually require a fundamental reorganisation of industry processes, supported by appropriate innovations in technology and policy.
One such example is the recently launched, government-funded Secure Unified Research Environment (SURE) project in Australia, which aims to overcome these limitations by providing a central data centre where researchers can form connections between data sources and access the computing power required for such analysis. In its short span of operation, researchers using this integrated database have been able to confirm the intuitive beliefs that older Australians are more likely to have higher consistency of care, and that lower consistency of care is associated with geographical remoteness. It has also led to the counter-intuitive discovery that wealthier and more highly educated Australians have lower consistency of care.
It is important to note that although researchers could have tested the intuitive beliefs using more complex and time-consuming methods before the SURE database existed, the counter-intuitive discoveries would not have been possible by looking at any single dataset alone.