The volume of medical information doubles every few years, and the rate at which new medical information is produced is accelerating. Advances in medical knowledge often render established treatment models obsolete. In a typical year, frontline clinical staff would have 22,000 new peer-reviewed articles, 30 new drugs and 6,000 new drug-compatibility combinations to consider on top of their existing knowledge of medicine. The number of drugs has grown 500% in just the last decade, and technological advances in medical imaging are producing more data than ever before for the same procedures (e.g. high-resolution CT scans). Not only has medical data exploded in recent years, it has also become more accessible to patients and providers alike.
Traditionally, meaningful information has been extracted from large datasets by sampling a representative portion of the data. Sampling methods, with all their limitations, were essential because we lacked the tools and resources to analyse entire datasets. With newer technology, however, this limitation is rapidly being removed. One such approach, encompassing various techniques commonly referred to as big data, is helping information-driven industries to analyse entire datasets regardless of their size and scope. At the heart of these new techniques is a simple premise: ‘why analyse a fraction of the data when we can analyse everything?’ Big data also helps us move away from post-hoc statistical analyses, which cannot provide real-time measurements. Although other information-driven industries have been quicker to adopt big data models, the healthcare industry is uniquely placed to benefit the most from them, for the following reasons:
- Most healthcare data is recorded and validated.
- The industry spends billions on research and development.
- Healthcare data is ever increasing, stretching existing resources.
- Big data would help shift the focus towards preventative measures.
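The sampling-versus-full-dataset contrast described above can be sketched with a toy example. All the numbers here are invented for illustration: a classical approach estimates a statistic from a small random sample, while a big data approach simply computes it over every record.

```python
import random

random.seed(42)

# Hypothetical dataset: a clinical measurement for one million patients.
full_dataset = [random.gauss(100, 15) for _ in range(1_000_000)]

# Traditional approach: estimate the mean from a small random sample.
sample = random.sample(full_dataset, 1_000)
sample_mean = sum(sample) / len(sample)

# Big data approach: compute the exact mean over the entire dataset.
true_mean = sum(full_dataset) / len(full_dataset)

print(f"sample estimate:    {sample_mean:.2f}")
print(f"full-dataset value: {true_mean:.2f}")
```

The sample estimate is close to, but not the same as, the full-dataset value; with modern tooling there is no longer a need to accept that sampling error at all.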
Big data models have emerged only in the last few years, and their growth has been fuelled by internet-based companies such as Google, Yahoo! and Facebook, which face the challenge of meaningfully analysing the datasets generated by billions of users.
Google Flu Trends is just one example, wherein algorithmic analysis of a big dataset (i.e. all search queries) provides almost real-time estimates of current flu activity throughout the world. This online tool, designed by engineers at Google with little background in healthcare, is accurate enough to closely match official government estimates of flu activity, yet has the advantage of spotting emerging trends two to four weeks before healthcare agencies.
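Google has not published the exact model behind Flu Trends, but the underlying idea can be sketched: fit historical query volumes against official flu-activity figures, then apply the fitted model to current query volumes, which are available immediately. All figures below are invented for illustration.

```python
# Toy data: weekly flu-related query volume (x) versus the official
# flu-activity index (y) for the same weeks. Invented for illustration.
queries = [120, 150, 200, 260, 310, 280, 220]
activity = [2.1, 2.6, 3.4, 4.5, 5.3, 4.8, 3.7]

n = len(queries)
mean_x = sum(queries) / n
mean_y = sum(activity) / n

# Ordinary least-squares fit of activity against query volume.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(queries, activity))
         / sum((x - mean_x) ** 2 for x in queries))
intercept = mean_y - slope * mean_x

# Query volumes arrive in real time, so the fitted model can estimate
# current flu activity weeks before the official figures are released.
latest_queries = 290
estimate = intercept + slope * latest_queries
print(f"estimated flu activity: {estimate:.2f}")
```

The real system reportedly weighed millions of candidate queries rather than a single aggregate volume, but the principle — regressing an immediately available signal against a slow official one — is the same.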
This is just one example of the transformative power of big data in healthcare.[1]
Also published on Medium.
Footnotes
[1] The content of this post was later included in a chapter in the following publication: Tyagi, H. (2013). Health data technologies: the current challenges. In Nexus Strategic Partnerships (Ed.), Commonwealth Health Partnerships. London: Nexus Strategic Partnerships for the Commonwealth Secretariat.