In an era defined by rapid innovation, data integration has come to the forefront as a key pillar, especially in healthcare. This blend of technology and medicine opens an era rich with unprecedented opportunities and challenges. Data engineering lies at the core of this ongoing digital transformation, reconciling the ever-expanding volume and complexity of healthcare data with the invaluable insights it holds.
Challenges in Data Standardization
Healthtech faces multifaceted challenges with data at every stage, from initial capture at the point of care to ongoing operational management. A myriad of source systems, including patient intake procedures, laboratory information systems, IoT devices, and EMRs, continuously generate and supply data, often producing a fragmented data landscape. As this data flows further through pipelines, it typically meets only the most basic standards, adding to the clutter. The result is a data environment riddled with critical gaps and inconsistencies.
“One of the big challenges within healthcare is that all that clinical data and patient data is fragmented and sitting in siloed different databases within a hospital organization, within a clinical provider. This poses a challenge for the clinician, who isn’t able to see the full scope of what’s going on with the patient because they have to gather data from multiple different sources within the organization.” – Peter Shen, Head of Digital and Automation in North America at Siemens, speaking on the VMware CIO podcast.
The FHIR standard, despite being in the spotlight for over a decade, still faces adoption challenges. While FHIR offers interoperability and the promise of applying advanced technologies like AI to improve diagnostic accuracy, enhance patient care, leverage data-driven insights, and predict diseases, its implementation comes with its own set of complexities.
Roadblocks on the Way to Interoperability
Only 56% of the healthcare industry is leveraging the full power of digital transformation. Meanwhile, the remaining 44% are estimated to be losing some $342 billion in revenue as a result of their reluctance.
In addition to the data challenges inherent in health tech, such as security and privacy concerns, three distinct challenges emerge when it comes to adopting FHIR: legacy system integration, the need for standardization and semantic interoperability, and organizational readiness.
Legacy System Integration
While some organizations have already invested in modern systems that offer data export in FHIR-compatible formats through APIs, it’s essential to acknowledge that many clinics still rely on older systems without APIs. Their best option might be providing data in the form of CSV files uploaded via SFTP, and in some cases, they even use fax machines.
Every clinic has its own healthcare record management system, and the way these systems store data can vary significantly. While most rely on relational databases, their data exchange capabilities are limited. In some instances, developers within clinics resort to writing SQL queries to extract data directly from databases.
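For clinics whose best export option is a flat file, the first engineering step is usually turning those rows into FHIR resources. A minimal sketch of that conversion is below; the column names (`mrn`, `first_name`, `last_name`, `dob`) and the identifier system URI are hypothetical, and a real pipeline would add validation, error handling, and many more fields.

```python
import csv
import io

def csv_rows_to_fhir_patients(csv_text: str) -> list[dict]:
    """Convert rows of a hypothetical patient CSV export into
    minimal FHIR Patient resources (as plain dicts)."""
    patients = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        patients.append({
            "resourceType": "Patient",
            # Identifier system is a made-up example URI
            "identifier": [{"system": "urn:example:clinic-mrn",
                            "value": row["mrn"]}],
            "name": [{"family": row["last_name"],
                      "given": [row["first_name"]]}],
            # Assumes the export already uses YYYY-MM-DD dates
            "birthDate": row["dob"],
        })
    return patients

# Example: one row from a legacy export pulled over SFTP
export = "mrn,first_name,last_name,dob\n1001,Ada,Lovelace,1815-12-10\n"
print(csv_rows_to_fhir_patients(export))
```

Even this simple step surfaces the gaps discussed above: any row missing a required FHIR element would need to be supplemented or flagged before loading.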
Standardization and Semantic Interoperability
At present, health information exchange and data interoperability primarily rely on documents. Whether transmitted via fax, email, or electronically, providers typically select specific data and generate a message containing only that data. Given the diversity in data storage methods, their nature, and sources, ensuring that the data’s meaning remains intact during transfer between systems presents a formidable challenge.
Semantic interoperability goes beyond mere system mappings; it aims to ensure that healthcare data is not only reliably transmitted but also easily understandable. Achieving healthcare interoperability often hinges on human decision-making. For instance, an engineer may determine how to establish a mapping between two health systems using different data formats for communication. Consequently, discrepancies can emerge across a chain of interconnected systems.
Organizational Readiness
Let’s envision this scenario: a clinic employs two different systems for distinct operational aspects. Patients, doctors, and encounters are the same across both systems, so their records must stay synchronized. However, a challenge arises when one system provides data in CCDA format while the other emits HL7 v2 messages. The requirement is to unify all data into FHIR for transmission to an analytical dashboard, enabling doctors and personnel to make well-informed decisions.
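To make the unification step concrete, here is a deliberately simplified sketch of one half of it: pulling a few fields from an HL7 v2 PID segment into a FHIR Patient dict. A production converter would handle repetitions, escaping, many more segments, and the CCDA side as well; this only illustrates the shape of the transformation.

```python
def pid_to_patient(pid_segment: str) -> dict:
    """Map a few fields of an HL7 v2 PID segment to a minimal
    FHIR Patient. Illustrative only; real conversion is far richer."""
    f = pid_segment.split("|")
    # PID-5 is family^given^...; pad so partial names don't crash
    family, given = (f[5].split("^") + [""])[:2]
    dob = f[7]  # PID-7 birth date, HL7 v2 format YYYYMMDD
    return {
        "resourceType": "Patient",
        # PID-3: take the first component of the identifier
        "identifier": [{"value": f[3].split("^")[0]}],
        "name": [{"family": family, "given": [given]}],
        # Reformat YYYYMMDD -> FHIR's YYYY-MM-DD
        "birthDate": f"{dob[:4]}-{dob[4:6]}-{dob[6:8]}" if len(dob) == 8 else None,
    }

print(pid_to_patient("PID|1||12345^^^MRN||Doe^Jane||19840117|F"))
```

Records arriving from the CCDA-based system would need an analogous XML-to-FHIR mapping, and the two outputs would then have to be reconciled against each other so the same patient isn't duplicated.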
Standardizing data across thousands of healthcare facilities presents a monumental challenge. When executives embark on digital transformation initiatives for their organization, their primary concern is assessing the completeness of data in the clinics they intend to connect before converting it to FHIR. FHIR mandates specific data points that legacy systems may lack, compelling executive teams to decide how to supplement this data. Finding a scalable solution is no easy feat. Preserving legacy historical data becomes imperative, regardless of any new data standards the organization adopts. This legacy data can assume various forms, and healthcare data standards continue to evolve.
Looking for a Solution
Data arrives in formats that may or may not adhere to established standards, such as HL7 or CCDA. These formats vary widely and aren’t limited to conventional structures like CSVs. When the target system demands data in a specific format, such as FHIR, the challenge lies in transforming this unpredictable zoo of formats into it.
Two primary options emerge to address this challenge. The first option involves hiring developers and data analysts to handle each new data format that may arise, ensuring that this team continuously updates connectors. The second option is to invest in a data onboarding tool like Datuum, which automates data integration, leading to cost savings and a reduction in human errors.
Datuum’s built-in AI engine comprehends and maps data from various sources to the destination schema, preventing structural and semantic discrepancies. This speeds up routine mapping work, freeing data analysts to focus on higher-value tasks and leading to more effective budget allocation and faster data onboarding.