
Ebook: Medical Infobahn for Europe

The German Association for Medical Informatics, Biometry and Epidemiology (GMDS) hosts MIE2000, the 16th Medical Informatics Europe Congress, and GMDS2000, its 45th annual congress. It is a contribution to the World Exhibition EXPO2000 that takes place in Hannover from June to October 2000.
Medical Infobahn for Europe was chosen as the motto for this combined conference. It reflects the growing role of communication in health care. The challenge of this century is to develop and introduce information technology to improve the quality and efficiency of health care. Telematics applications will focus on the patient and support evidence-based health informatics, provide generic medical information for health care professionals and citizens, and support disease management as well as case management. Biometry and Epidemiology continuously contribute to progress in medicine by focussing on risk assessment, prevention, improvements in therapy, drug surveillance and safety, and by providing a rational synthesis of evidence. Presentations, demonstrations, tutorials and other contributions of Medical Informatics, as well as the methodological contributions of Biometry and Epidemiology, cover a wide range of topics. Tutorials provide background knowledge for the scientific presentations, and the content of the workshop discussions will be used to produce reports and recommendations concerning the future use of information technology in health care. Additional material will be handed out to the participants during the conference (paper copies and the MIE2000/GMDS2000 CD-ROM).
Because of the combination of the two congresses, two Scientific Programme Committees were active. The chairs of both committees worked together very closely. After a reviewing process in which each paper was reviewed by three reviewers, a programme resulted consisting of oral presentations (so-called long oral presentations), combined oral and poster presentations (so-called short oral presentations) and pure poster presentations. The long and short oral presentations are published in these Proceedings. The poster presentations will become available on the CD-ROM.
Many relevant topics are covered in the Proceedings. Because of the combination with the 45th annual GMDS congress, presentations from the disciplines of Biometry and Epidemiology are also included. Furthermore, a number of papers are in German, reflecting the fact that they are contributions that will be presented at the annual GMDS congress.
The editors of the Proceedings wish to thank all authors for submitting papers; the reviewers and programme committee members for judging the papers; and Hannelore Wimmer, Marianne Scammell and Marié Vanherle for their support in the preparation and production of the Proceedings.
Arie Hasman
Bernd Blobel
Joachim Dudeck
Rolf Engelbrecht
Günther Gell
Hans-Ulrich Prokosch
MedView is a joint project with participants from oral medicine and computer science. The aim of the project is to build a large database of patient examinations and to produce computerised tools to extend, view, and analyse the contents of the database. The contents of the database are based on a formalisation of health-care processes and clinical knowledge in oral medicine, harmonised within the SOMNET network. We give an overview of the current status of the MedView project and discuss its background and future directions.
An information visualisation tool was implemented and tested as a solution to the problem of visualising clinical experience derived from large amounts of formalised clinical data. The tool was based on the idea of dynamic 3D parallel diagrams with support for direct manipulation, an idea similar to the notion of 3D parallel coordinates. The tool was tested on a knowledge base containing about 1500 examinations obtained from different clinics. Clinical practice showed that the basic idea is conceptually appealing to the clinicians involved, as the tool can be used for generating and testing hypotheses.
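The 3D parallel diagrams themselves cannot be reconstructed from the abstract, but an ordinary 2D parallel-coordinates plot conveys the underlying idea of tracing each examination as a polyline across several attribute axes. The following Python sketch uses pandas' built-in parallel_coordinates function with an invented toy data set standing in for the formalised examination records; all column names and values are hypothetical.

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

# Invented toy records standing in for formalised oral-medicine examinations:
# each row is one examination, each column an examination attribute (coded numerically).
records = pd.DataFrame({
    "age":         [34, 57, 61, 45, 29, 52],
    "smoker":      [0, 1, 1, 0, 0, 1],
    "lesion_size": [2, 7, 5, 3, 1, 6],
    "pain_score":  [1, 4, 3, 2, 0, 5],
    "diagnosis":   ["lichen", "leukoplakia", "leukoplakia",
                    "lichen", "lichen", "leukoplakia"],
})

# One polyline per examination, coloured by diagnosis, one axis per attribute.
parallel_coordinates(records, class_column="diagnosis", colormap="viridis")
plt.title("Parallel-coordinates view of examination records (toy data)")
plt.tight_layout()
plt.show()
```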
The application of the principles of evidence-based medicine is based on a rigorous analysis of clinical research tailored to the individual characteristics of a specific patient. Thus the physician must have information about the patient and about the latest results of clinical research simultaneously. We present a computer-based clinical workstation offering sources for both kinds of information: the current patient data are delivered by access to the electronic patient record, and the results of research can be searched on the Internet, the intranet, or other online databases. In an evaluation study, we are testing this configuration in the setting of a university hospital. The physicians are interviewed about their use of the individual information sources and the consequences for clinical research and evidence-based medicine. We hope to show that, in spite of some technical and methodological problems, the clinical workstation is a promising tool for decision-making at the clinical workplace.
This investigation aimed to compare survival rates in paediatric and adult Hodgkin's disease using published results. When comparing results obtained in different studies and institutions it is important to explain, estimate and allow for heterogeneity between studies. This was attempted through systematic inclusion of a large number of published results, through modelling the influence of covariates on survival using a generalised linear model, and by estimation of both the sampling errors in the extracted survival rates and the heterogeneity between studies. A significant superiority of treatment results in paediatric institutions compared with adult institutions was demonstrated, allowing for differences in patient and treatment characteristics.
In Europe, North America and elsewhere, growing interest has focussed on evidence-based healthcare systems, incorporating the deployment of practice guidelines, as a field of application for health telematics. The clinical benefit and technical feasibility of common European approaches to this task have recently been demonstrated. In Europe it is likely that, building on recent progress in electronic health record architecture (EHRA) standards, a sufficient state of maturity can be reached to justify initiation within CEN TC251 of a pre-standards process on guideline content formats during the current 5th Framework of EC RT&D activity. There is now a similar impetus to agree standards for this field in North America. Thanks to fruitful EC-USA contacts during the 4th Framework programme, there is now a chance, given well-planned coordination, to establish a global consensus optimally suited to serve the world-wide delivery and application of evidence-based medicine. This review notes three factors which may accelerate progress towards convergence:
(1) revolutionary changes in the knowledge basis of professional/patient/public healthcare partnerships, involving the key role of the Web as a health knowledge resource for citizens, and a rapidly growing market for personalised health information and advice;
(2) the emergence at national levels of digital warehouses of clinical guidelines and EBM knowledge resources, agencies which are capable of brokering common mark-up and interchange media definitions between knowledge providers, industry and healthcare organizations;
(3) the closing gap in knowledge management technology, with the advent of XML and RDF, between approaches and services based respectively on text mark-up and knowledge-base paradigms.
A current project in the UK National Health Service (the National electronic Library of Health) is cited as an example of a national initiative designed to harness these trends.
The number needed to treat has gained much attention in the past years as a useful way of reporting the results of randomised controlled trials with a binary outcome. Defined as the reciprocal of the absolute risk reduction, the number needed to treat is the estimated number of patients who need to be treated to prevent an adverse outcome in one additional patient. As with other estimated effect measures, it is important to document the uncertainty of the estimation by means of an appropriate confidence interval. Confidence intervals for the number needed to treat can be obtained by inverting and exchanging the confidence limits for the absolute risk reduction. Unfortunately, the only method used in practice for calculating a confidence interval for the absolute risk reduction seems to be the usual asymptotic method, which yields confidence intervals that are too short in many cases. In this paper it is shown that the application of the Wilson score method improves the calculation and presentation of confidence intervals for the number needed to treat.
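As an illustration of the calculation described above, the following Python sketch derives a confidence interval for the absolute risk reduction from Wilson score intervals for the two event proportions (combined by Newcombe's square-and-add method, one common way of applying the Wilson score approach to a risk difference) and then inverts and exchanges its limits to obtain a confidence interval for the number needed to treat. The function names and the 2x2 example counts are illustrative and not taken from the paper; the simple inversion shown assumes the ARR interval excludes zero.

```python
import math

def wilson_interval(x, n, z=1.96):
    """Wilson score confidence interval for a single proportion x/n."""
    centre = (x + z**2 / 2) / (n + z**2)
    half = (z / (n + z**2)) * math.sqrt(x * (n - x) / n + z**2 / 4)
    return centre - half, centre + half

def arr_ci_newcombe(x1, n1, x0, n0, z=1.96):
    """CI for the absolute risk reduction p0 - p1 (control risk minus treatment risk),
    combining the Wilson limits by Newcombe's square-and-add method."""
    p1, p0 = x1 / n1, x0 / n0
    l1, u1 = wilson_interval(x1, n1, z)
    l0, u0 = wilson_interval(x0, n0, z)
    arr = p0 - p1
    lower = arr - math.sqrt((p0 - l0) ** 2 + (u1 - p1) ** 2)
    upper = arr + math.sqrt((u0 - p0) ** 2 + (p1 - l1) ** 2)
    return arr, lower, upper

def nnt_ci(x1, n1, x0, n0, z=1.96):
    """NNT and its CI, obtained by inverting and exchanging the ARR limits
    (valid here because the illustrative ARR interval excludes zero)."""
    arr, low, up = arr_ci_newcombe(x1, n1, x0, n0, z)
    return 1 / arr, 1 / up, 1 / low

# Illustrative counts: 15/120 events under treatment, 30/118 under control.
print(nnt_ci(15, 120, 30, 118))
```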
With regard to the actual significance level, we investigate the commonly used combined test procedures of meta-analysis, which comprise the choice of the model in which the analysis is carried out and the commonly used tests for treatment effect in the fixed and random effects models, as well as some new combined test procedures, which use an alternative test statistic in the random effects model or the t-distribution as the test distribution of the commonly used test statistics. A simulation study indicates that the new combined test procedures perform better with respect to maintaining a prescribed significance level.
Different estimation methods for covariance parameters in meta-analyses can result in conflicting p-values concerning the test of treatment effect. We propose a valid method to overcome this problem at least partially by introducing a new estimator for the standard deviation of the common treatment difference.
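To make the commonly used combined test procedures concrete, the sketch below computes the fixed effects estimate and the DerSimonian-Laird random effects estimate of a common treatment difference from study-level estimates and standard errors, together with the usual z-statistics and two-sided p-values. This is only the standard baseline procedure referred to in the two abstracts above; the alternative test statistics, the t-distribution variants and the new covariance-parameter estimator investigated by the authors are not reproduced here, and the input values are invented.

```python
import numpy as np
from scipy import stats

def combined_tests(theta, se):
    """Fixed-effect and DerSimonian-Laird random-effects estimates and z-tests
    for a common treatment difference, given study estimates and standard errors."""
    theta, se = np.asarray(theta, float), np.asarray(se, float)
    w = 1.0 / se**2                              # fixed-effect (inverse-variance) weights
    theta_fe = np.sum(w * theta) / np.sum(w)
    z_fe = theta_fe / np.sqrt(1.0 / np.sum(w))

    # DerSimonian-Laird estimate of the between-study variance tau^2
    q = np.sum(w * (theta - theta_fe) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(theta) - 1)) / c)

    w_re = 1.0 / (se**2 + tau2)                  # random-effects weights
    theta_re = np.sum(w_re * theta) / np.sum(w_re)
    z_re = theta_re / np.sqrt(1.0 / np.sum(w_re))

    def p_two_sided(z):
        return 2 * stats.norm.sf(abs(z))

    return {"fixed": (theta_fe, z_fe, p_two_sided(z_fe)),
            "random": (theta_re, z_re, p_two_sided(z_re)),
            "tau2": tau2}

# Illustrative study-level treatment differences and standard errors.
print(combined_tests([0.30, 0.10, 0.25, -0.05], [0.12, 0.15, 0.10, 0.20]))
```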
Preceding the implementation of a Stroke Unit (SU), data were collected and used to build a simulation model of patient flow. This model was subsequently used to estimate the optimal capacity of the SU to be implemented. Because stroke patients require acute hospital care, the number of immediate admissions is highly variable, which complicates optimizing the capacity. In order to support decisions with regard to staffing (i.e. capacity) of the SU, different scenarios were simulated and compared to provide insight into the trade-off between regular understaffing and a low bed occupancy rate. In 1996 the Department of Neurology of the Academic Medical Center in the Netherlands implemented its SU to improve the quality of care for stroke patients. Data collected in 1997 and 1998, the years in which the SU has been operational, were evaluated and confirm the predictions made by simulating the different scenarios. We conclude that simulation models provide a powerful tool for supporting decision making with regard to resource planning at the departmental level in our hospital.
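The abstract does not specify the structure of the simulation model, so the following Python sketch only illustrates the kind of trade-off analysis described: it simulates Poisson arrivals of stroke patients and variable lengths of stay for a candidate number of SU beds and reports the bed occupancy rate together with the fraction of patients for whom no bed was immediately available. All parameter values and the model structure are hypothetical.

```python
import random

def _poisson(rng, lam):
    """Poisson variate via inversion (Knuth's method)."""
    limit, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_stroke_unit(beds, arrivals_per_day=1.2, mean_los_days=5.0,
                         days=365, seed=1):
    """Toy day-by-day simulation of a stroke unit: Poisson admissions,
    exponential length of stay, fixed bed capacity."""
    rng = random.Random(seed)
    discharge_days = []            # scheduled discharge day of each occupied bed
    occupied_bed_days = 0
    admitted = rejected = 0
    for day in range(days):
        # free beds whose patients are discharged by today
        discharge_days = [d for d in discharge_days if d > day]
        # new stroke patients needing immediate admission
        for _ in range(_poisson(rng, arrivals_per_day)):
            if len(discharge_days) < beds:
                discharge_days.append(day + 1 + int(rng.expovariate(1.0 / mean_los_days)))
                admitted += 1
            else:
                rejected += 1      # no SU bed available at arrival
        occupied_bed_days += len(discharge_days)
    occupancy = occupied_bed_days / (beds * days)
    rejection_rate = rejected / max(1, admitted + rejected)
    return occupancy, rejection_rate

# Compare candidate capacities: occupancy versus immediate-admission failures.
for beds in (2, 3, 4, 6):
    print(beds, simulate_stroke_unit(beds))
```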
Hospital information systems have to support quality improvement objectives. The design issues of a health care information system can be classified into three categories: 1) time-oriented and event-labelled storage of patient data; 2) contextual support of decision-making; 3) capabilities for modular upgrading. The elicitation of requirements has to meet users' needs in relation to both the quality (efficacy, safety) and the monitoring of all health care activities (traceability). Information analysts need methods to conceptualise clinical information systems that provide actors with individual benefits and guide behavioural changes. A methodology is proposed to elicit and structure users' requirements using a process-oriented analysis, and it is applied to the field of blood transfusion. An object-oriented data model of a process has been defined in order to identify its main components: activity, sub-process, resources, constraints, guidelines, parameters and indicators. Although some aspects of activity, such as “where”, “what else”, and “why”, are poorly represented by the data model alone, this method of requirement elicitation fits the dynamics of data input for the process to be traced. A hierarchical representation of hospital activities has to be found for this approach to be generalised within the organisation, for the processes to be interrelated, and for their characteristics to be shared.
We introduce several aspects of the analysis and modeling of the treatment process characterizing the cooperation within multi-professional treatment teams. We first determine what is meant by a treatment process and then look at five views and four levels of its description. We introduce possible methods for surveying and describing it. An extensive analysis of the current state of the treatment process and of its weaknesses is currently under way in the Department of Child and Adolescent Psychiatry of the Heidelberg University Medical Center.
Web-based drug ordering allows a growing number of hospitals without a pharmacy of their own to communicate seamlessly with their external pharmacy. Business process analysis and object-oriented modelling performed together with the users at a pilot hospital resulted in a comprehensive picture of the user and business requirements for electronic drug ordering. The user requirements were further validated with the help of a software prototype. In order to capture the needs of a large number of users, CAP10, a new method making use of pre-built models, is proposed. Solutions for coping with the technical requirements (interfacing with the business software at the pharmacy) and with the legal requirements (signing the orders) are presented.
According to experimental observations, contour integration should occur in the primary visual area (V1). We propose a computational model of contour perception in the human visual cortex using a recurrent neural network. Our approach is based on the neurophysiological findings about the columns of ocular dominance and orientation in V1, and on some other existing mathematical models. We also developed a computer application for implementing and testing the proposed model.
Koscielny et al. (1999) investigated the extent to which garlic reduces arterial fat deposits in a randomised, placebo-controlled, double-blind trial. Plaque volume was measured by ultrasound in both femoral and both carotid arteries at study entry and after 18, 36 and 48 months, and the values were summed. 140 patients with risk factors for atherosclerosis and documented deposits were randomly assigned to each of the two treatment arms. The analysis carried out by the authors is problematic; it has essentially two serious shortcomings. First, repeated observations on the same patient are treated as independent, which leads to an overstatement of the information actually available and hence to anti-conservative tests (in addition, some p-values were calculated incorrectly). Second, patients who left the study prematurely (128 of 280) are not included in the analysis. The analysis is discussed in the light of international guidelines, and alternatives are presented.
Model selection by a stepwise backward procedure for longitudinal data is illustrated with an example. The choice of the fixed effects and of the covariance structure is addressed. Such models, as well as more robust ones, are estimated with WinBUGS.
Gibbs sampling is a technique to calculate a complex posterior distribution as the steady-state measure of a Markov chain. The fundamental problem of inference from Markov chain simulation is that there will always be areas of the target distribution that have not been covered by the finite chain. Deciding when to stop the chain in order to have reached sufficient coverage of the support of the target distribution is therefore an important matter. Techniques based on one long single chain and on multiple chains are discussed in the framework of a linear mixed effects model. The diagnostics used do not provide a consistent view of convergence. Practical consequences for the estimates are shown.
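As a minimal, self-contained illustration of the multiple-chain diagnostics mentioned above (not of the linear mixed effects model analysed in the paper), the following sketch runs several Gibbs sampler chains on a simple bivariate normal target and computes the Gelman-Rubin potential scale reduction factor; values close to 1 are commonly taken as an indication that the chains have mixed, although, as the abstract notes, no single diagnostic settles the question of convergence.

```python
import numpy as np

def gibbs_bivariate_normal(n_iter, rho=0.8, start=(5.0, -5.0), seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho:
    each full conditional is normal with mean rho*other and variance 1 - rho^2."""
    rng = np.random.default_rng(seed)
    x, y = start
    draws = np.empty((n_iter, 2))
    sd = np.sqrt(1.0 - rho**2)
    for i in range(n_iter):
        x = rng.normal(rho * y, sd)
        y = rng.normal(rho * x, sd)
        draws[i] = (x, y)
    return draws

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for a scalar quantity,
    chains: array of shape (m_chains, n_iterations)."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    b = n * chain_means.var(ddof=1)             # between-chain variance
    w = chains.var(axis=1, ddof=1).mean()       # within-chain variance
    var_hat = (n - 1) / n * w + b / n
    return np.sqrt(var_hat / w)

# Several chains from overdispersed starting points; discard a burn-in period.
chains = np.stack([gibbs_bivariate_normal(2000, start=s, seed=i)[500:, 0]
                   for i, s in enumerate([(5, -5), (-5, 5), (0, 8), (8, 0)])])
print("R-hat for x:", gelman_rubin(chains))
```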
Essential for a valid analysis of treatment efficacy is a prospectively specified analysis that is robust against violations of those assumptions which cannot be ascertained a priori. Additional modelling for inference about the treatment effect makes sense, first, if it is intended to check the validity and efficiency of the preplanned analysis and the influence of missing data and protocol violations on the results. Secondly, the temporal course of the therapeutic effect can be investigated, and thirdly, a subgroup analysis may be done in order to search for variables that modify the therapeutic effect.
Unfortunately, the statistical modelling process is not easily planned prospectively. Hence, for a balanced judgement it is recommended to list all models eventually fitted and to weight them subjectively with respect to their deemed validity and efficiency. The additional modelling should result in a statement about the preplanned analysis as to the validity of the effect estimate and the efficiency of the procedure producing the corresponding p-value or confidence interval.
Using data from a study by Koscielny et al. on the efficacy of garlic for the long-term reduction of atherosclerosis, all three points are addressed by way of a Bayesian analysis of linear mixed models. Results are compared to those obtained by the restricted maximum likelihood method.
It can be illustrated that posterior-based tests and confidence intervals belonging to uninformative priors are frequently very similar to classical confidence intervals and p-values. Hence they are interpretable within the categories of classical inferential statistics.
Where the REML analysis is only asymptotically valid, it can be validated via Bayesian analysis. Further advantages are an easy and robust way of handling missing values, a large collection of tractable models, and the use of posterior densities as an intuitive add-on to confidence intervals.
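As a concrete counterpart to the REML analysis mentioned above, the following sketch fits a linear mixed effects model with a random intercept per patient using the statsmodels package; the variable names and the simulated long-format data are invented for illustration and do not reproduce the garlic study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: repeated plaque-volume measurements per patient.
rng = np.random.default_rng(0)
patients = np.repeat(np.arange(40), 4)
months = np.tile([0, 18, 36, 48], 40)
group = np.where(patients < 20, "garlic", "placebo")
plaque = (1.0 + 0.002 * months * (group == "placebo")      # slow growth under placebo
          + rng.normal(0, 0.1, size=len(patients))          # measurement noise
          + np.repeat(rng.normal(0, 0.2, 40), 4))           # patient-level random effect
data = pd.DataFrame({"patient": patients, "month": months,
                     "group": group, "plaque": plaque})

# Linear mixed model: fixed effects for time, group and their interaction,
# random intercept per patient; estimated by REML.
model = smf.mixedlm("plaque ~ month * group", data, groups=data["patient"])
result = model.fit(reml=True)
print(result.summary())
```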
One objective of the Integrationsprogramm Arbeit und Gesundheit (IPAG), the joint work-and-health integration programme of the statutory accident and health insurance funds, is to examine whether including secondary data of the social insurance institutions, beyond the data on incapacity for work, can improve the basis for identifying work-related health hazards. Data on drug prescriptions were therefore selected for a cohort of employees and linked with information on their working conditions. Initial analyses indicate that the frequency of drug prescriptions, as well as the respective shares of selected indication groups in the prescriptions, differ between workplace types and may thus reflect the influence of different exposure factors. In addition, the prescription data evidently contain morbidity information that cannot be obtained from the incapacity-for-work data.
In planning large-scale epidemiological studies with multiple types of examinations, important resource questions have to be answered; for example, estimates of the study horizon, of personnel and other resources, and of costs are needed. A simulation approach for scheduling that uses relevant parameters to simulate examination times yields simulated distributions that can be used to answer such planning questions.
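To illustrate the kind of scheduling simulation outlined in the abstract, the following toy Monte Carlo draws examination durations for a fixed sequence of examination stations per participant and derives the distribution of total visit time as well as the number of participants a single examination line can handle per working day. The station names, duration distributions and working-day length are purely hypothetical.

```python
import random
import statistics

# Hypothetical examination stations with (mean, sd) durations in minutes.
STATIONS = {"interview": (20, 5), "blood sample": (8, 2),
            "ultrasound": (15, 4), "ECG": (12, 3)}
WORKDAY_MINUTES = 8 * 60

def visit_time(rng):
    """Total examination time of one participant (truncated normal draw per station)."""
    return sum(max(1.0, rng.gauss(mean, sd)) for mean, sd in STATIONS.values())

def participants_per_day(rng):
    """Participants a single sequential examination line completes within one working day."""
    elapsed, n = 0.0, 0
    while True:
        t = visit_time(rng)
        if elapsed + t > WORKDAY_MINUTES:
            return n
        elapsed, n = elapsed + t, n + 1

rng = random.Random(42)
visits = [visit_time(rng) for _ in range(5000)]
throughput = [participants_per_day(rng) for _ in range(1000)]
print("mean visit time (min):", round(statistics.mean(visits), 1))
print("95th percentile of visit time (min):", round(statistics.quantiles(visits, n=20)[-1], 1))
print("participants per day, median (min-max):",
      statistics.median(throughput), (min(throughput), max(throughput)))
```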