Ebook: Health Informatics Meets eHealth
Progress in medicine has traditionally relied heavily on classical research pathways involving randomized clinical trials (RCTs) to establish reliable evidence for any given therapeutic intervention. However, RCTs are not only lengthy and expensive; they also have a number of other disadvantages, including the fact that they are currently failing to keep pace with the number of potential innovative treatment options being developed, particularly in areas such as rare diseases. With the vast amount of data increasingly available for profiling patient characteristics and establishing correlations between outcomes and potential predictors, predictive modeling may offer a solution to the limitations of RCTs.
This book presents the proceedings of the 2016 Health Informatics meets eHealth conference, held in Vienna, Austria in May 2016. The conference provides a platform for researchers, practitioners, decision makers and vendors to discuss innovative health informatics and eHealth solutions with a view to improving the quality, efficacy and efficiency of healthcare. The theme of the conference is Predictive Modeling in Healthcare.
Covering subjects as diverse as fall-detection in the elderly, diabetes, physiotherapy and pediatric oncology, this book will be of interest to all those working in the field of (e)healthcare and its delivery.
Predictive modeling in health care – from prediction to prevention
For a number of decades, progress in medicine has relied on classical research pathways, which at some point involve conducting clinical trials. Ideally, a number of randomized clinical trials (RCTs) can be conducted and referred to in order to establish a high level of evidence for a given therapeutic intervention, be it a drug to be prescribed to the patient, a medical device to be used, or a particular health care process to be set up.
RCTs are doubtless a very sound methodology for establishing efficacy, safety and cause-effect relationships in an unambiguous way, but this process also has its limitations. Firstly, RCTs are expensive and lengthy procedures. At present, the number of potential innovative treatment options seems to be growing much faster than can be assessed by a classical RCT approach in a reasonable amount of time. This is particularly the case in areas such as rare diseases. Secondly, RCTs usually focus on a well-characterized group of patients by means of proper inclusion and exclusion criteria. This can, however, significantly reduce the generalizability of the results to other groups such as the elderly, children, or patients with several co-morbidities. Thirdly, and most importantly, the trend towards precise and personalized medicine aims at the optimal approach for an individual patient, which presents a particular challenge for the classical approach to establishing evidence.
This is where predictive modeling comes into play. Today, partly due to the increasingly digital nature of healthcare, vast amounts of data are available which can be used to profile patient characteristics, to identify correlations between outcomes and potential predictors, and to predict future developments based on a plethora of existing cases.
In order to approach the concept of a truly personalized and preventive care plan tailored to the particular and complex needs of the individual patient, we need to include various sources of information, and predict the future course based on patient-specific models, as illustrated in Fig. 1. Ideally, such a model would not just tell us what could potentially happen to the health status of the individual, but also which options could best influence and modulate the person's situation so as to prevent unwanted events.
The question mark on the right hand side of Fig. 1 indicates that this is not an exhaustive list. There are many more potential sources of valuable information about an individual patient and his or her context which, when taken into account, can help to tailor a healthcare strategy to the patient's particular needs.
Of course, correlations cannot be used to directly establish cause-effect relations, but what ultimately counts is whether a data-driven approach works better than, or at least as well as, the state-of-the-art approach. Machine learning approaches usually improve as the amount of available data increases, reflecting the way in which humans become more experienced the more cases they have seen and managed. In “Big Data” environments, however, the human mind faces limits in scalability which predictive modeling tools can potentially help to overcome.
Today, most predictive modeling approaches target non-medical goals in the fields of logistics and resource allocation, not least because predictive modeling concepts with a direct impact on patient treatment face huge regulatory hurdles. This seems to be a major factor in explaining why, so far, most data-driven predictive approaches directly related to patient care have remained at the research level and in retrospective settings. Only a few have made it into daily routine – but hasn't this initially been the case with almost all healthcare innovations? Nevertheless, we thought that now was the right time to make “Predictive modeling in health care – from prediction to prevention” the theme of the 2016 edition of the annual scientific conference “Health Informatics meets eHealth” in Vienna. It is time to give these new possibilities additional visibility within the realm of eHealth as a whole.
Günter Schreier
Elske Ammenwerth
Alexander Hörbst
Dieter Hayn
March 19, 2016
Graz, Hall in Tirol, and Vienna
Decision trees (DTs) are one of the most popular techniques for learning classification systems, especially when learning from discrete examples. In the real world, however, much data occurs in fuzzy form, so a DT must be able to deal with such fuzzy data. Integrating fuzzy logic when dealing with imprecise and uncertain data reduces uncertainty and provides the ability to model fine knowledge details. In this paper, a fuzzy decision tree (FDT) algorithm was applied to a dataset extracted from the ANS (Autonomic Nervous System) unit of the Moroccan university hospital Avicenne. This unit specializes in performing several dynamic tests to diagnose patients with autonomic disorders and suggest appropriate treatment. A set of fuzzy classifiers was generated using FID 3.4, and the error rates of the generated FDTs were calculated to measure their performance. Moreover, a comparison between the error rates obtained using crisp DTs and FDTs showed that the FDT results were better than those obtained using crisp DTs.
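To make the idea behind fuzzy classification concrete, the following minimal sketch shows how a hand-built fuzzy decision tree combines membership degrees along its branches. It is not the FID 3.4 induction algorithm used in the paper; the attributes, fuzzy sets and the tiny tree are hypothetical.

```python
# Minimal sketch of fuzzy decision tree classification (not FID 3.4 itself).
# Attribute names, fuzzy sets and the tiny tree below are hypothetical.

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy partitions for two hypothetical ANS test attributes.
FUZZY_SETS = {
    "heart_rate_change": {
        "low":  lambda x: triangular(x, -10, 0, 15),
        "high": lambda x: triangular(x, 5, 30, 60),
    },
    "bp_drop": {
        "small": lambda x: triangular(x, -5, 0, 20),
        "large": lambda x: triangular(x, 10, 40, 80),
    },
}

# A hand-built two-level fuzzy tree: each path accumulates the minimum
# membership degree along its branches; leaves carry class labels.
TREE = {
    ("heart_rate_change", "low"): {
        ("bp_drop", "large"): "autonomic_disorder",
        ("bp_drop", "small"): "normal",
    },
    ("heart_rate_change", "high"): {
        ("bp_drop", "large"): "autonomic_disorder",
        ("bp_drop", "small"): "borderline",
    },
}

def classify(sample):
    """Propagate membership degrees down every branch and aggregate per class."""
    scores = {}
    for (attr1, set1), subtree in TREE.items():
        mu1 = FUZZY_SETS[attr1][set1](sample[attr1])
        for (attr2, set2), label in subtree.items():
            mu2 = FUZZY_SETS[attr2][set2](sample[attr2])
            scores[label] = scores.get(label, 0.0) + min(mu1, mu2)
    return max(scores, key=scores.get), scores

label, scores = classify({"heart_rate_change": 8, "bp_drop": 35})
print(label, scores)
```

Unlike a crisp tree, every branch contributes a degree of support, so borderline test results are not forced through a single hard threshold.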
Research in blood transfusion mainly focuses on Donor Blood Management, including donation, screening, storage and transport. However, recent years have seen increasing interest in recipient-related optimization, i.e. Patient Blood Management (PBM). Although PBM already aims at reducing transfusion rates by pre- and intra-surgical optimization, there is still high potential for improvement at the individual level. The present paper investigates the feasibility of predicting blood transfusion needs based on datasets from various treatment phases, using data collected in two previous studies. Results indicate that the prediction of blood transfusions can be further improved by predictive modelling that includes individual pre-surgical parameters. This also allows the main predictors influencing transfusion practice to be identified. If confirmed in a prospective dataset, these or similar predictive methods could be a valuable tool to support PBM, with the ultimate goal of reducing costs and improving patient outcomes.
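As an illustration of the kind of pre-surgical predictive modelling described above, the sketch below fits a logistic regression model on synthetic data and reports its discrimination. The feature names, effect sizes and the data itself are invented for illustration and are not taken from the two referenced studies.

```python
# Hedged sketch of a pre-surgical transfusion-risk model on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(13.5, 1.8, n),   # pre-operative haemoglobin [g/dL] (hypothetical)
    rng.normal(68, 12, n),      # age [years]
    rng.integers(0, 2, n),      # major surgery flag
])
# Synthetic outcome: lower haemoglobin and major surgery raise transfusion risk.
logit = -0.8 * (X[:, 0] - 13.5) + 0.02 * (X[:, 1] - 68) + 1.2 * X[:, 2] - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
print("coefficients (candidate main predictors):", model.coef_)
```

The fitted coefficients indicate which pre-surgical parameters drive the predicted risk, which is the same mechanism by which a model of this kind can surface the main predictors influencing transfusion practice.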
The management of diabetic retinopathy (DR), a frequent ophthalmological manifestation of diabetes mellitus, consists of regular examinations and a standardized, manual classification of disease severity, which is used to recommend re-examination intervals. The aim of this study was to evaluate the feasibility and safety of implementing automated, guideline-based DR grading into clinical routine by applying established clinical decision support (CDS) technology. We compared manual classification with automated classification generated from the medical documentation by an Arden server with a specific medical logic module. Of 7169 included eyes, 47% (n=3373) showed inter-method classification agreement, specifically 29.4% in mild DR, 38.3% in moderate DR, 27.6% in severe DR, and 65.7% in proliferative DR. We demonstrate that the implementation of a CDS system for automated disease severity classification in diabetic retinopathy is feasible, but also that, due to the highly individual nature of medical documentation, certain important criteria for the electronic health record system used need to be met in order to achieve reliable results.
This paper discusses the assessment of the use of the LACE tool, which estimates patients' readmission risk, at North York General Hospital (NYGH). We describe the tool, the implementation and use of a modified LACE score at NYGH, and our statistical analysis of LACE effectiveness, conducted to inform future decisions on resource allocation. We also consider suggestions for adjusting the way the LACE tool is used, as well as implications for service delivery and patients' quality of life. Our study shows that the modified LACE is a predictive tool for readmission risk in day-to-day hospital activity, but that implementation of LACE alone cannot reduce readmission rates unless coupled with the efforts of those in charge of providing community-based care.
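For readers unfamiliar with the score, the sketch below computes the standard published LACE index (length of stay, acuity of admission, Charlson comorbidity index, emergency department visits). The NYGH modification evaluated in the paper is not reproduced here.

```python
# Hedged sketch of the standard (unmodified) LACE index; the NYGH-modified
# score described in the paper is not reproduced here.

def lace_score(length_of_stay_days, acute_admission, charlson_index, ed_visits_6m):
    # L: length of stay of the index admission
    if length_of_stay_days < 1:
        l = 0
    elif length_of_stay_days <= 3:
        l = int(length_of_stay_days)
    elif length_of_stay_days <= 6:
        l = 4
    elif length_of_stay_days <= 13:
        l = 5
    else:
        l = 7
    # A: acuity of admission (admitted through the emergency department)
    a = 3 if acute_admission else 0
    # C: Charlson comorbidity index, capped at 5 points
    c = charlson_index if charlson_index < 4 else 5
    # E: emergency department visits in the previous 6 months, capped at 4
    e = min(ed_visits_6m, 4)
    return l + a + c + e

# Example: 5-day acute stay, Charlson index 2, one prior ED visit -> LACE = 10
print(lace_score(5, True, 2, 1))
```

Scores range from 0 to 19; higher totals flag patients at greater readmission risk, which is what the hospital uses to target follow-up resources.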
Data from two contexts, i.e. the European Unresectable Neuroblastoma (EUNB) clinical trial and results from comparative genomic hybridisation (CGH) analyses of corresponding tumour samples, were to be provided to existing repositories for secondary use. Utilizing the European Unified Patient IDentity Management (EUPID) developed in the course of the ENCCA project, the following processes were applied to the data: standardization (providing interoperability), pseudonymization (generating distinct but linkable pseudonyms for both contexts), and linking of both data sources. The applied procedures resulted in a joined dataset that did not contain any identifiers that would allow the records to be traced back to either data source. This provided a high degree of privacy to the patients involved, as required by data protection regulations, without preventing proper analysis.
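The sketch below illustrates the general concept of context-specific but linkable pseudonyms using keyed hashing. It is not the actual EUPID protocol; the context names, keys and identity attributes are hypothetical.

```python
# Hedged illustration of context-specific but linkable pseudonyms, in the
# spirit of EUPID; this is not the actual EUPID protocol, just a keyed-hash sketch.
import hmac
import hashlib

CONTEXT_KEYS = {          # held by a trusted third party, one secret per context
    "clinical": b"secret-key-clinical-context",
    "genomic":  b"secret-key-genomic-context",
}

def normalize(first, last, birthdate):
    """Reduce identity attributes to a canonical form before hashing."""
    return f"{first.strip().lower()}|{last.strip().lower()}|{birthdate}".encode()

def pseudonym(identity: bytes, context: str) -> str:
    return hmac.new(CONTEXT_KEYS[context], identity, hashlib.sha256).hexdigest()

ident = normalize("Anna", "Muster", "2008-04-01")
p_clin = pseudonym(ident, "clinical")   # pseudonym used in the trial dataset
p_gen = pseudonym(ident, "genomic")     # pseudonym used in the CGH dataset
print(p_clin != p_gen)                  # distinct per context
# Only the trusted party, which can recompute both pseudonyms from the
# identity attributes, can link the two records; the joined dataset itself
# never carries any direct identifiers.
```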
With Directive 2011/24/EU on patients' rights in cross-border healthcare and the related delegated decisions, the European Commission defined a legal framework for how healthcare shall be organised by European Union (EU) member states (MS) when patients move beyond the borders of their home country. Among other aspects, Article 12 of the directive is concerned with supporting MS in the development of so-called European Reference Networks (ERN), dedicated to the treatment of “patients with a medical condition requiring a particular concentration of expertise in medical domains where expertise is rare”. In the “European Expert Paediatric Oncology Reference Network for Diagnostics and Treatment” (ExPO-r-Net) project, the establishment of such an ERN in the domain of paediatric oncology is currently being piloted. The present paper describes the high-level use cases, the main requirements and a corresponding interoperability architecture capable of serving as the IT platform necessary to facilitate cross-border health data exchange.
Background: The quality of samples stored within a biobank relies on specimen collection, transportation, pre-analytical processing and long-term storage. Standard Operating Procedures (SOPs) are essential tools to guarantee the quality of samples.
Objectives: The aim of this paper is to present an IT-supported tool (Pre-An Evaluation Tool) for assessing the compliance of a biobank's current pre-analytical procedures (defined in SOPs) with international guidelines. The Pre-An Evaluation Tool was implemented based on CEN technical specifications for pre-analytical procedures using REDCap.
Results: The data collection instrument of the Pre-An Evaluation Tool consists of more than 250 items related to the CEN technical specifications. In order to create a dynamic questionnaire, the items were implemented with branching logic (illustrated in the sketch following this abstract).
Conclusion: The Pre-An Evaluation tool is a user-friendly tool that facilitates the assessment of the coverage of the CEN technical specifications by specific SOPs. This tool can help to identify gaps within SOPs and therefore contribute to the overall quality of biological samples stored within a biobank.
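As a minimal illustration of the branching logic mentioned in the Results, the sketch below shows a dynamic questionnaire in which items appear only when conditions on earlier answers are met. The item names and conditions are hypothetical, and REDCap's own branching-logic syntax is not reproduced.

```python
# Hedged sketch of questionnaire branching logic; item names and conditions
# are hypothetical and do not reflect the actual Pre-An Evaluation Tool items.

ITEMS = [
    {"name": "sample_type",
     "label": "Which sample type does the SOP cover?",
     "show_if": lambda answers: True},
    {"name": "centrifugation_delay",
     "label": "Maximum delay before centrifugation (min)?",
     "show_if": lambda answers: answers.get("sample_type") == "serum"},
    {"name": "storage_temp",
     "label": "Long-term storage temperature (degrees C)?",
     "show_if": lambda answers: True},
]

def visible_items(answers):
    """Return only the items whose branching condition is satisfied."""
    return [item for item in ITEMS if item["show_if"](answers)]

answers = {"sample_type": "tissue"}
print([i["name"] for i in visible_items(answers)])   # centrifugation item hidden
```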
Background: Clinical information is often used for biomedical research. Data warehouses can help provide researchers with data and with the opportunity to find eligible participants for clinical trials.
Objectives: To define an information platform for healthcare and biomedical research based on requirements by clinicians and researchers.
Methods: Interviews with clinicians, researchers, data privacy officers, IT and hospital administration combined with a questionnaire sent to 60 medical departments at our hospital were conducted.
Results: Resulting requirements were grouped and a platform architecture was designed based on the requirements.
Conclusion: The requirements led to a single platform supporting both patient care and biomedical research.
Standards-based integration and semantic enrichment of clinical data originating from electronic medical records has been shown to be critical for enabling secondary use. To facilitate the utilization of semantic technologies on clinical data, we introduce a methodology for the automated transformation of openEHR-based data into Web Ontology Language (OWL) individuals. To test the correctness of the implementation, de-identified data of 229 patients of the pediatric intensive care unit of Hannover Medical School were transformed into 2,983,436 individuals. Querying the resulting ontology for symptoms of the systemic inflammatory response syndrome (SIRS) yielded the same result set as an SQL query on an openEHR-based clinical data repository.
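The following sketch illustrates the general transformation step with rdflib: flattened openEHR-style observations become OWL individuals that can then be queried with SPARQL, e.g. for SIRS criteria such as fever. The archetype-to-class mapping, namespaces and property names are simplified assumptions, not the mapping used in the paper.

```python
# Hedged sketch: turning openEHR-style observations into OWL individuals.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, OWL, XSD

EX = Namespace("http://example.org/picu#")   # illustrative namespace
g = Graph()
g.bind("ex", EX)

observations = [  # flattened openEHR composition excerpts (illustrative)
    {"id": "obs1", "archetype": "openEHR-EHR-OBSERVATION.body_temperature.v2",
     "patient": "p042", "value": 39.1},
    {"id": "obs2", "archetype": "openEHR-EHR-OBSERVATION.pulse.v1",
     "patient": "p042", "value": 152},
]

for obs in observations:
    ind = EX[obs["id"]]
    # one OWL class per archetype (dots/hyphens replaced for a simple local name)
    cls = EX[obs["archetype"].replace(".", "_").replace("-", "_")]
    g.add((ind, RDF.type, OWL.NamedIndividual))
    g.add((ind, RDF.type, cls))
    g.add((ind, EX.hasSubject, EX[obs["patient"]]))
    g.add((ind, EX.hasMagnitude, Literal(obs["value"], datatype=XSD.double)))

# SPARQL query for one SIRS criterion, e.g. body temperature above 38.5 degC
q = """
SELECT ?patient WHERE {
  ?obs a ex:openEHR_EHR_OBSERVATION_body_temperature_v2 ;
       ex:hasSubject ?patient ;
       ex:hasMagnitude ?t .
  FILTER(?t > 38.5)
}"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.patient)
```

Once the data live in the ontology, the same SPARQL pattern can be extended with further SIRS criteria, which is the kind of query the paper validated against its SQL baseline.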
Automatic extraction of medical concepts from medical reports and their classification with semantic standards is useful for standardization and for clinical research. This paper presents an approach to UMLS concept extraction for German clinical notes using a customized natural language processing pipeline based on Apache cTAKES. The objective is to test whether the natural language processing tool is suitable for identifying UMLS concepts in German text and mapping them to SNOMED CT. The pipeline was extended with the German UMLS database and German OpenNLP models so that it can normalize the German concepts to domain ontologies such as SNOMED CT. For testing, the ShARe/CLEF eHealth 2013 training dataset translated into German was used. The implemented algorithms were tested on a set of 199 German reports, obtaining an average F1 measure of 0.36 without German stemming or pre- and post-processing of the reports.
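The F1 measure quoted above combines the precision and recall of the extracted concepts. A minimal sketch of such an evaluation, assuming exact matching of (surface form, UMLS CUI) pairs against a gold standard, is shown below; the paper's actual matching criteria may differ.

```python
# Hedged sketch of an F1 evaluation for concept extraction, assuming exact
# matching of (text span, UMLS CUI) pairs; the toy annotations are illustrative.

def f1(gold, predicted):
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

gold = {("Pneumonie", "C0032285"), ("Fieber", "C0015967"), ("Husten", "C0010200")}
pred = {("Pneumonie", "C0032285"), ("Husten", "C0010200"), ("Schmerz", "C0030193")}
print(round(f1(gold, pred), 2))   # 0.67 for this toy example
```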
Current systems that target Patient Safety (PS), such as mandatory reporting systems and specific vigilance reporting systems, share the same information types but are not interoperable. Ten years ago, the WHO embarked on an international project to standardize quality management information systems for PS. The goal is to support interoperability between different systems within a country and to expand the international sharing of data on quality and safety management, particularly for less developed countries. Two approaches have been used: (i) a bottom-up approach starting with existing national PS reporting systems and international or national vigilance systems, and (ii) a top-down approach that uses the Patient Safety Categorial Structure (PS-CAST) and the Basic Formal Ontology (BFO) upper-level ontology, versions 1 and 2. The output is currently being tested as an integrated information system for quality and PS management in four WHO member states.
Increasingly, critical incident reports are used as a means to increase patient safety and quality of care. The full potential of these sources of experiential knowledge often remains untapped, since retrieval and analysis are difficult and time-consuming and the reporting systems often do not provide support for these tasks. The objective of this paper is to identify potential use cases for automatic methods that analyse critical incident reports. In more detail, we describe how faceted search could offer intuitive retrieval of critical incident reports and how text mining could support the analysis of relations among events. To realise an automated analysis, natural language processing needs to be applied. We therefore analyse the language of critical incident reports and derive requirements for automatic processing methods. We learned that there is huge potential for the automatic analysis of incident reports, but there are still challenges to be solved.
The vast amount of clinical data in electronic health records constitutes great potential for secondary use. However, most of this content consists of unstructured or semi-structured texts, which are difficult to process. Several challenges remain unresolved: medical language idiosyncrasies in different natural languages and the large variety of medical terminology systems. In this paper we present SEMCARE, a European initiative designed to minimize these problems by providing a multilingual platform (English, German, and Dutch) that allows users to express complex queries and obtain relevant search results from clinical texts. SEMCARE is based on a selection of adapted biomedical terminologies, together with Apache UIMA and Apache Solr as open-source, state-of-the-art natural language pipeline and indexing technologies. SEMCARE has been deployed and is currently being tested at three medical institutions in the UK, Austria, and the Netherlands, showing promising results in a cardiology use case.
The Austrian electronic health record (EHR) system ELGA went live in December 2015. It is a document-oriented EHR system based on the HL7 Clinical Document Architecture (CDA). HL7 Fast Healthcare Interoperability Resources (FHIR) is a relatively new standard that combines the advantages of HL7 messages and CDA documents. In order to offer easier access to the information stored in ELGA, we present a method based on adapted FHIR resources to map CDA documents to FHIR resources. A proof-of-concept tool using Java, the open-source FHIR framework HAPI-FHIR and publicly available FHIR servers was created to evaluate the presented mapping. In contrast to other approaches, the close resemblance of the mapping file to the FHIR specification allows existing FHIR infrastructure to be reused. In order to reduce information overload and facilitate access to CDA documents, FHIR could offer a standardized way to query CDA data at a fine-granular level in Austria.
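The paper's proof-of-concept uses Java and HAPI-FHIR; as a language-agnostic illustration of the mapping idea, the sketch below extracts patient demographics from a simplified CDA fragment and emits a FHIR Patient resource as JSON. The CDA fragment, the OID and the element paths are illustrative only and do not reproduce the paper's mapping file.

```python
# Hedged, simplified CDA-to-FHIR mapping sketch (the actual tool uses HAPI-FHIR).
import json
import xml.etree.ElementTree as ET

CDA_NS = {"cda": "urn:hl7-org:v3"}
cda_fragment = """
<ClinicalDocument xmlns="urn:hl7-org:v3">
  <recordTarget><patientRole>
    <id root="1.2.3.4.5" extension="1234567890"/>
    <patient>
      <name><given>Anna</given><family>Muster</family></name>
      <birthTime value="19800101"/>
    </patient>
  </patientRole></recordTarget>
</ClinicalDocument>"""

root = ET.fromstring(cda_fragment)
role = root.find(".//cda:recordTarget/cda:patientRole", CDA_NS)
birth = role.find(".//cda:birthTime", CDA_NS).get("value")

patient = {
    "resourceType": "Patient",
    "identifier": [{
        "system": "urn:oid:" + role.find("cda:id", CDA_NS).get("root"),
        "value": role.find("cda:id", CDA_NS).get("extension"),
    }],
    "name": [{
        "family": role.find(".//cda:family", CDA_NS).text,
        "given": [role.find(".//cda:given", CDA_NS).text],
    }],
    # CDA birthTime (YYYYMMDD) reformatted to the FHIR date format
    "birthDate": f"{birth[0:4]}-{birth[4:6]}-{birth[6:8]}",
}
print(json.dumps(patient, indent=2))
```

Serving such resources from a FHIR server would allow clients to query individual data elements instead of retrieving and parsing entire CDA documents.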
Clinical decision support systems (CDSS) are developed to facilitate physicians' decision making, particularly for complex, oncological diseases. Access to relevant patient-specific information from electronic health records (EHR) is limited by the structure and transmission formats of the respective hospital information system. We propose a system architecture for standardized access to patient-specific information for a CDSS for laryngeal cancer. Following the idea of a CDSS based on Bayesian networks, we developed an architecture concept applying clinical standards. We recommend the application of Arden Syntax for the definition and processing of the required medical knowledge and clinical information, as well as the use of HL7 FHIR to identify the relevant data elements in an EHR, in order to increase the interoperability of the CDSS.
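To illustrate the kind of inference such a Bayesian-network CDSS performs once the relevant EHR data elements are available, the sketch below computes a posterior risk by marginalising over an unobserved variable. The network structure and all probabilities are invented for illustration and are not taken from the paper's laryngeal cancer model.

```python
# Hedged sketch of Bayesian-network style reasoning with invented numbers;
# not the model described in the paper.

# Prior for an EHR-derived finding that may be missing (hypothetical)
p_advanced_t_stage = 0.30

# Conditional probability table P(recurrence | T stage, nodal status) - hypothetical
p_recurrence = {
    (True, True): 0.55, (True, False): 0.35,
    (False, True): 0.25, (False, False): 0.10,
}

def posterior_recurrence(node_positive: bool) -> float:
    """P(recurrence | nodal status), marginalising over the unknown T stage."""
    total = 0.0
    for t_advanced in (True, False):
        p_t = p_advanced_t_stage if t_advanced else 1 - p_advanced_t_stage
        total += p_t * p_recurrence[(t_advanced, node_positive)]
    return total

print(posterior_recurrence(node_positive=True))    # ~0.34
print(posterior_recurrence(node_positive=False))   # ~0.18
```

In the proposed architecture, Arden Syntax medical logic modules would supply and process the evidence variables, while FHIR queries identify the corresponding data elements in the EHR.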
Background: Providing healthcare professionals with adequate access to well-filled electronic patient records and health-related information contributes to an improvement of the treatment process. Compared to conventional Electronic Medical Records, these EHR systems commonly contain more medical artifacts due to their cross-institutional, multipurpose use cases. Physicians and health professionals are therefore concerned about information overload.
Objectives: The goal of this paper is to elaborate new concepts for the automated aggregation of a fully structured patient summary document based on information extracted from documents published in large-scale EHRs.
Methods: The first step was to conduct semi-structured group interviews with experts and customers, followed by a qualitative literature analysis. Subsequently, technical and medical standards in the field of interoperability were screened.
Results: The result of this paper is an architectural approach for integrating automatic patient summary creation into well-established workflows of large eHealth projects based on standard IHE XDS infrastructures, taking the Austrian ELGA as an example.
Internet technologies and services impose global information standards on healthcare as a whole, which are then applied in the domain of cytology laboratories. Web-based operations form a significant operating segment of any contemporary cytology laboratory, as they enable operations through technology that is usually free of the restrictions imposed by the traditional way of doing business (geographic area and narrow localisation of activities). In their operations, almost all healthcare organisations currently create and use electronic data and documents, which can originate both inside and outside the organisation. The enormous amount of information thus used and exchanged can be processed in a timely and high-quality way only by integrated information systems, given three basic safety requirements: data confidentiality, integrity and availability. In the Republic of Croatia, the integration of private and public healthcare information systems has been ongoing for several years, but private healthcare does not yet operate as an integrated system; instead, each office operates using its own separate information system, i.e. database. This paper elaborates the argument that the sample private cytology laboratory possesses an IT system that meets the current market and stakeholder needs of the healthcare sector in Croatia, given that private doctors' offices and polyclinics use IT in their operations but make only partial use of Internet capacities in communicating with their business associates and patients, implying the need to continue the research on a statistically relevant sample of EU countries.
Background: Today's high-quality healthcare delivery strongly relies on efficient electronic health records (EHR). These EHR systems, and healthcare IT systems in general, are usually developed in a static manner according to a given workflow. Hence, they are not flexible enough to enable access to EHR data and to execute individual actions within a consultation.
Objectives: This paper reports on requirements identified by experts in the domain of diabetes mellitus for designing a system that supports dynamic workflows to enable personalization within a medical activity.
Methods: Requirements were collected by means of expert interviews. These interviews completed a triangulation approach aimed at gathering requirements for workflow-based EHR interactions. The interview data were analyzed using a qualitative approach, resulting in a set of requirements enhancing EHR functionality from the user's perspective.
Results: Requirements were classified according to four different categorizations: (1) process-related requirements, (2) information needs, (3) required functions, (4) non-functional requirements.
Conclusion: Workflow-related requirements were identified which should be considered when developing and deploying EHR systems.
The living environments of senior citizens are gaining in complexity with regard to health, mobility, information, support and behaviour. The development of Ambient Assisted Living (AAL) services to reduce this complexity is becoming increasingly important. The question is: which relevant criteria support the development, measurement and evaluation of business models for hybrid AAL services, and therefore have to be considered in an appropriate Performance Measurement Set? Within the EU-funded research project DALIA (Assistant for Daily Life Activities at Home), a Service Performance Measurement Criteria (SPMC) Set has been developed and described. With the help of a literature review and expert interviews, relevant performance criteria were identified and described in the context of the Analytic Hierarchy Process (AHP; see the sketch following this abstract). In conjunction with a scan of AAL business models, a set of performance measurement criteria could be created.
Discussion: The development and application of a specific AAL SPMC Set offers the possibility of advancing the development of marketable AAL services in a targeted and conceptual way. It will be important to integrate the SPMC Set, with software support, into the service development process of future marketable AAL applications. With the application of an adjusted AAL Service Performance Measurement Cube, the conceptual development of marketable AAL services can be maintained and relevant decisions can be supported.
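The sketch below illustrates the AHP step referred to in the abstract: deriving priority weights for performance measurement criteria from a Saaty-style pairwise comparison matrix. The three criteria and the judgements are hypothetical, not the DALIA criteria set.

```python
# Hedged sketch of the AHP prioritisation step; criteria and judgements
# below are hypothetical and not taken from the DALIA SPMC Set.
import numpy as np

criteria = ["customer value", "scalability", "cost structure"]
# Saaty-style pairwise judgements: A[i, j] = importance of criterion i relative to j
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Approximate the principal eigenvector by averaging the normalized columns
weights = (A / A.sum(axis=0)).mean(axis=1)
for name, w in zip(criteria, weights):
    print(f"{name}: {w:.2f}")
```

The resulting weights can then be used to score and compare candidate AAL business models within a performance measurement set.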
Background: Electronic case report forms (eCRF) play a key role in medical studies and medical data registries (MDR) for clinical research. The creation of suitable eCRFs is a challenging and yet very individual task.
Objectives: We plan to create evidence-based templates for eCRFs which are aligned with existing studies or MDRs.
Methods: In this paper we investigated existing standards for eCRFs, defined use cases and derived the requirements needed to identify and annotate data items and pertinent information within study protocols, literature or patient cases.
Results: In order to establish evidence-based eCRF templates based on annotated text documents, a standard-based, hierarchical structure with linkage to existing data repositories needs to be modeled. Standards like ISO/IEC 11179 provide a necessary base, which needs to be extended with proper linking functions.
Conclusion: Linking evidence-based sources with eCRFs allows templates to be created which could be used to define eCRFs for new clinical studies or even to compare studies with one another. As a next step, the requirements derived in this paper will be used to establish an ontology-based structure for annotating existing text documents with eCRF data elements.
Background: The burden of cardiovascular disease (CVD) among New Zealand (NZ) indigenous people (Māori) is well recognized. A major challenge to CVD risk management is to improve adherence to long-term medications.
Objectives: To elicit patients' and providers' perspectives on how to support Māori with high CVD risk and low medication adherence to achieve better adherence.
Methods: Analysis of electronic health records (EHR) of four NZ general practices identified medication adherence status of Māori patients with high CVD risk (≥15%, 5-year). A random sample of these patients participated in focus group discussions on barriers to long-term medication adherence. Their primary care providers also participated in separate focus groups on the same topic.
Results: A range of factors influencing adherence behaviour was identified, including patients' medication knowledge, the effectiveness of patient-doctor communication, and cost.
Conclusion: Analysis of barriers to medication adherence in primary care suggests opportunities for health information technology to improve adherence, including patient education, decision support, clinician training and self-service facilities.
The current healthcare system requires more effective management, and new media and technology are expected to support its demands. Using physiotherapy as an example, the primary objective of this study was to define therapists' specific requirements for the practical use of software covering the administration, documentation and evaluation of the entire therapy process, including a database of pictures and videos of exercises which can be adapted individually by the therapists. Another objective was to show which conditions must be fulfilled for a successful implementation of advanced applications throughout the entire treatment process. A mixed-methods design was chosen: in the first part a two-stage qualitative study was carried out, followed by a quantitative survey. The results show that the use of the therapy-related part of the software depends on how adaptable the software is to the special needs of the therapists, that the whole treatment process must be mapped in the software, and that additional training during professional practice must be provided in order to deploy the software successfully in the therapeutic process.
Background: Much of the information on complementary medicine is spread across the literature and the internet. However, most literature and web resources provide information on just one specialist field. In addition, these resources do not allow users to search for suitable therapies based on patient-specific indications.
Objectives: Aggregating knowledge about complementary medicine into one database makes the search more efficient.
Methods: Data integration is a promising method for providing well-founded knowledge. Therefore, integrative methods were used to create the database ALTMEDA, which includes complementary and drug-related data.
Results: Based on this comprehensive database ALTMEDA, the new eHealth system KATIS and the corresponding app ALMEKO for mobile usage were implemented.
Conclusion: KATIS is a web-based system for complementary medicine. KATIS provides knowledge about ten different specialist fields, which enables users not only to look up a particular complementary therapy, but also to find suitable therapies for indications more efficiently. [http://www.komplementäre-medizin.de]
Background: Accurate and consistent death certification facilitates morbidity and mortality surveillance, and consequently supports evidence-informed health policies.
Objectives: The paper first explores current death certification practice in Slovenia and identifies related deficiencies and system inconsistencies. It then outlines a conceptualization of an ICT-based model of death certification, including the renovation of business processes and organizational changes.
Methods: The research is based on focus group methodology. Structured discussions were conducted with 29 experts from cross-sectional areas related to death certification.
Results: Research results imply that effective ICT-based transformation of the existing death certification model should involve a redefinition of functions and relationships between the main actors, as well as a reconfiguration of the technological, organizational, and regulatory elements in the field.
Conclusion: The paper provides insight into the complexities of death certification and may provide the groundwork for an ICT-based transformation of the death certification model in Slovenia.