When information technologies began to be applied more widely in medicine, it became clear that evidence of effectiveness would provide the most important incentive for their wider diffusion within the healthcare market.
This book is the principal end product of the ATIM (Assessment of Information Technology In Medicine) project, which was carried out between April 1993 and December 1994 as part of AIM, the Advanced Informatics in Medicine element of the ‘Telematics Systems in Areas of General Interest’ programme, an initiative of the European Union, and was aimed at reaching a consensus on methods and criteria for the evaluation and assessment of information technologies for and in healthcare.
This ATIM ‘handbook’ provides first guidelines to help users assess the effects of new and existing information technologies for healthcare, as well as interpret assessment reports. The book combines methodological background information (Part A) with practical examples from various AIM projects (Part B), and is intended for those users of information technology who are involved in the evaluation and assessment process but do not necessarily have a background in social sciences or in evaluation and assessment.
This volume is the main end product of ATIM (Assessment of Information Technology In Medicine). ATIM is an Accompanying Measure in AIM, the Advanced Informatics in Medicine part of the “Telematics Systems in Areas of General Interest” programme of DG XIII of the European Union under the 3rd Framework Programme. ATIM was carried out between April 1993 and December 1994.
Evidence for the effectiveness of information technology may provide an essential impulse for its diffusion in the health care market. At present, however, the results of studies addressing this issue hardly seem to provide this impulse. The lack of agreement on methods and criteria for evaluation and assessment is seen as a major obstacle to diffusion: studies tend to be ad hoc and site-specific. ATIM was established to develop consensus on methods and criteria for the evaluation and assessment of information technologies for and in health care.
For pragmatic reasons ATIM focused on two project lines within AIM: one dealing with Medical Multimedia Workstations and Images, the other with Knowledge Based and Decision Support Systems. Participants in projects in these project lines discussed methods and exchanged experiences, not only during workshops under the ATIM umbrella, but also during AIM concertation meetings. In addition, ATIM provided consultancy and has been building a literature database and a network of experts. ATIM produced four newsletters.
ATIM activities were coordinated jointly by two institutes:
BAZIS, the Central Development and Support Group Hospital Information System, Leiden, The Netherlands and
The Department of Medical Informatics of the University of Limburg, Maastricht, The Netherlands
The Department of Medical Informatics had the responsibility for the activities in the action line related to Knowledge Based Systems; BAZIS had the responsibility for the Imaging action line as well as for the overall project coordination. In this coordination the ATIM leaders were supported by the ATIM Coordinating Committee: François Grémy (Montpellier, France), David Hawkes (London, United Kingdom), Gianpaolo Molino (Torino, Italy), Niilo Saranummi (Tampere, Finland) and Wilhelm van Eimeren (Munich, Germany).
This ATIM “Handbook” provides first guidelines which may help users in assessing the effects of new and existing information technologies for health care, as well as in interpreting assessment reports. More specifically, it aims to support the validation processes that will take place in the Telematics Programme of the Fourth Framework Programme. The book combines methodological background information (Part A) with practical examples from various AIM projects (Part B). It is addressed to the (potential) users of information technology: the people who are essential to the evaluation and assessment process but who do not necessarily have a background in social sciences or in evaluation and assessment.
We would like to acknowledge the enthusiasm and dedication of all the people who contributed to ATIM. In particular we would like to thank Jacques Lacombe, the AIM project officer for ATIM, for his unfailing support and for being ATIM's “ambassador”. Moreover we are indebted to the AIM projects that actively contributed to ATIM: COVIRA, DILEMMA, ESTEEM, EURIPACS, HELIOS, GAMES-2, ISAR, KAVAS-2, MILORD, OEDIPE, OPADE, OPENLABS, SAMMIE, TELEGASTRO and TRILOGY.
Elisabeth van Gennip, Jan Talmon (Editors) October
This book presents the main end product of the first phase of ATIM, the Accompanying Measure on Assessment of Information Technologies In Medicine in the AIM telematics programme. It aims to provide first guidelines for IT users and industry involved in the validation of information technologies in health care. This introduction begins with background information on ATIM: why it was started and its activities between April 1993 and December 1994. Next, a guided tour through the handbook is given to help readers use this book.
This chapter looks into the reasons for disillusionment about the efficiency and quality of the results of medical informatics technology. Our thesis is that it is not a technology like other medical technologies, and that “to assess medical informatics” implies assessing not only the machinery (hardware and software), but above all what medical informatics really does for people. One can hardly speak of medical informatics without having in mind questions related to human concerns, and thus without referring to the human sciences.
We then draw some of the consequences of such a philosophical approach for the design of Automated Hospital Medical Information Systems (AHMIS), and we present an example drawn from our own experience: a study aimed at revealing and analysing the gap between designers' and users' points of view on an AHMIS.
In our opinion there are many ambiguities in the use of the terms “evaluation” and “assessment” from different perspectives. Many projects on informatics in medicine or health care are on the supply or management side, while TA projects are mainly oriented towards public needs or the users' side. On the one side, evaluation is predominantly conceived in terms of technological feasibility or cost-benefit analysis; on the other, assessment is done from the social and societal point of view. That definition of TA is relatively new: until the end of the seventies, evaluation of technologies, when it took place at all, was conducted through impact studies. The emergence of Constructive Technology Assessment (CTA), in which the influence of actors and social processes on technology itself is recognised, resulted from questions about the “scientist” model of TA and the need to unlock conflictual situations. Technologies are no longer considered as given, rigid objects: like society itself, they become subject to change.
The importance of quickly translating research into new products and improvements of existing ones is well recognized. Industry must be both efficient and effective. Consequently, demands for system development are growing. In developing systems for health care the challenge is to understand the problem space well enough to derive the right specifications for the right system. In the past it was enough to interview the actual users of the intended system to get the specifications. Nowadays, information technology (IT) is considered to be a strategic resource of the organisation, i.e. it can be used for more than just automating tasks and transferring data.
To use IT in health care effectively one needs to look beyond the first order requirements of those actually using the system. The whole context in which these systems are used needs to be understood to decide how they can be best utilised. Technology assessment is one concept that provides guidelines and tools on how to do this. This paper discusses the role of technology assessment in analysing the problem space.
Possible reasons for the relatively disappointing uptake of certain medical computer systems such as Decision Support Systems or Electronic Medical Record Systems are discussed.
It is suggested that the major factors which influence successful medical computer systems – laboratory, radiology, pharmacy and some hospital information systems – are the strong user need for the system and the application domain. It is proposed that medical computer systems should be built using an iterative life-cycle development/evaluation methodology, with the conception of the system best handled by a rigorous evaluation of user needs through some formalised method of requirements engineering.
During the development of all medical information systems, evaluation should be considered an essential and integral task.
A strategy for the evaluation of information systems comprises at least four different levels of analysis: Verification, Validation, Human Factors Assessment and Clinical Assessment. This report concentrates on verification and validation issues during the development of medical information systems, especially knowledge-based systems. A framework is proposed identifying the different activities and the different roles of developers and prospective users during system development.
To assess the validity of the knowledge base of a decision support system dynamically, one needs to apply the system to a number of cases. This contribution discusses issues related to the composition of the test database and the methods for establishing the reference against which the outcome of the decision support system is to be compared.
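The kind of comparison described here can be sketched in a few lines of code. The case database, the reference diagnoses and the `toy_dss` function below are invented for illustration only; in a real validation the reference standard would be established by, for example, an expert panel or follow-up data.

```python
# Illustrative sketch: comparing a decision support system's output with a
# reference ("gold standard") diagnosis over a test database.
# All case data and the toy_dss() function are hypothetical.

def accuracy_against_reference(cases, system):
    """Fraction of test cases on which the system agrees with the reference."""
    agree = sum(1 for c in cases if system(c["findings"]) == c["reference"])
    return agree / len(cases)

# Toy test database; "reference" is the diagnosis against which we compare.
cases = [
    {"findings": {"fever": True,  "cough": True},  "reference": "pneumonia"},
    {"findings": {"fever": False, "cough": True},  "reference": "bronchitis"},
    {"findings": {"fever": True,  "cough": False}, "reference": "influenza"},
]

def toy_dss(findings):
    """A stand-in decision support system."""
    if findings["fever"] and findings["cough"]:
        return "pneumonia"
    if findings["cough"]:
        return "bronchitis"
    return "viral infection"   # disagrees with the reference on the last case

print(round(accuracy_against_reference(cases, toy_dss), 2))  # 0.67
```

The essential design questions raised in the contribution are hidden in the two inputs: which cases go into `cases` (composition of the test database) and how each `reference` value was established.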
Once a working prototype, or even a product, exists, the effects of the information technology can be assessed “in real life”. Such an assessment can be done with an experiment. This contribution describes how to design an experiment and how to avoid pitfalls introduced by various causes of bias; in this respect internal validity and external validity are distinguished. It is concluded that the best study design depends on the focus and the boundary conditions of each individual setting. As a general rule it is recommended to consider all possible causes of bias, to eliminate or minimize bias where possible, for example by using specific study designs and statistics, or to take bias into account when interpreting results.
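One of the bias-reducing design choices alluded to above can be made concrete with a minimal sketch: random allocation of subjects to intervention and control groups, which guards against selection bias. The subject identifiers and group sizes are invented for the example.

```python
# Minimal sketch of randomised allocation, one common way to reduce
# selection bias in an experimental study design. Hypothetical subjects.
import random

def randomise_allocation(subject_ids, seed=1):
    """Shuffle subjects and split them into two groups of (near-)equal size."""
    rng = random.Random(seed)      # fixed seed makes the allocation reproducible
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

intervention, control = randomise_allocation(range(1, 21))
print(len(intervention), len(control))  # 10 10
```

Randomisation addresses internal validity (groups are comparable); external validity, i.e. whether results generalise beyond the study setting, has to be argued separately from the choice of subjects and sites.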
The introduction of new Information Technology in health care is – as in other fields – a complex process. Expected benefits have to be identified, and balanced against additional costs of such a system, in an investment analysis.
The introduction of a new information system invariably leads to changes in the organisation. This may be a simple conversion to electronic communication or a complete restructuring of the organisational process. Cost accounting can be used here to compare the conventional process with the situation after the information system is introduced.
In this chapter these two types of cost analysis are dealt with using the same modelling approach. This approach basically consists of three steps: the identification of cost components, the specification of actual costs, and the allocation of these costs.
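The three steps above can be sketched as a small computation. The component names, figures and allocation keys are invented for illustration; a real analysis would derive them from the investment and cost-accounting data of the site.

```python
# Illustrative sketch of the three-step cost model:
# (1) identify cost components, (2) specify actual costs,
# (3) allocate those costs to the organisational units that use them.
# All names and figures below are hypothetical.

cost_components = {          # steps 1 and 2: identified components with
    "hardware": 50_000.0,    # their specified (annualised) costs
    "software": 20_000.0,
    "training": 10_000.0,
}

allocation_keys = {          # step 3: each department's share of each component
    "radiology":  {"hardware": 0.6, "software": 0.5, "training": 0.4},
    "laboratory": {"hardware": 0.4, "software": 0.5, "training": 0.6},
}

def allocate(components, keys):
    """Distribute each cost component over departments by its allocation key."""
    return {
        dept: sum(components[c] * share for c, share in key.items())
        for dept, key in keys.items()
    }

print(allocate(cost_components, allocation_keys))
# {'radiology': 44000.0, 'laboratory': 36000.0}
```

Running the same allocation before and after the system is introduced yields the per-department cost difference that the investment analysis balances against expected benefits.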
Technology assessment of information technology implies a comprehensive evaluation of consequences in the technical, organizational, economic, and other domains. Which effect measures to choose depends on the type of technology and the objective of the assessment. This contribution illustrates by examples the variety of aspects to be considered.
Acquiring data on the variety of effects of information technology, in the different domains and from a multiplicity of sources, necessitates the application of a number of methods. This contribution gives a brief introduction to some common methods, evolved within various sciences, for acquiring data and judging their validity.
Technology assessments are intended to support decisions about future actions. These decisions are characterized by conditions of uncertainty. This paper introduces a model to clarify the variable uncertainty conditions. In relation to technology assessment activities, the concepts of efficacy and effectiveness have often been confused; the two concepts are presented and discussed here in relation to the systems-theory framework: structure – processes – outcome. One method to reduce uncertainty from an economic viewpoint is to perform a cost-benefit/effectiveness analysis. Such an analysis can be carried out in a ten-step protocol, and these steps are outlined. The true value of a cost-benefit/effectiveness analysis lies more in the process of analysis than in the final result. There is no guarantee that benefits will be achieved, and reports are often seen to be very technical. More emphasis should be put on reporting the results of cost-benefit/effectiveness analyses.
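The ten steps themselves are not reproduced here, but one common summary figure in such analyses, the incremental cost-effectiveness ratio, can be shown in a few lines. The figures below are invented purely to illustrate the arithmetic.

```python
# Sketch of the incremental cost-effectiveness ratio (ICER), a standard
# summary figure in cost-effectiveness analysis. All figures hypothetical.

def incremental_cost_effectiveness(cost_new, cost_old, effect_new, effect_old):
    """Extra cost per extra unit of effect of the new system over the old."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Suppose the new system costs 120,000 per year against 100,000 for the old
# one, and raises correctly handled cases per year from 4,000 to 4,500.
icer = incremental_cost_effectiveness(120_000, 100_000, 4_500, 4_000)
print(icer)  # 40.0  -> 40 monetary units per additional correctly handled case
```

As the abstract stresses, the number itself matters less than the process of making costs and effects explicit; the ratio is only meaningful once the effect measure and the uncertainty around both inputs have been argued.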
The methodological grid we develop here should be considered a proposal informed by applied research in the field of information TA, and especially in the field of medical informatics. We propose a general TA approach which borrows many features from better-known TA methods. The core of our methodological proposal – which, as explained in chapter 2, cannot be dissociated from the CTA movement – is the analysis of controversies. By developing this method we seek a better understanding of the real risks and advantages linked to a technology, but also the laying of solid foundations on which a future consensus around a new technology can be worked out. This means that the paper focuses on the social construction of technologies. The problems must be seen not in terms of social adjustment, but rather in terms of taking into account the other dimensions of the innovation process. Following the theoretical presentation of the controversies-analysis method, we present a concrete case study concerning the development of Computerised Health Cards in Europe during the late eighties.
This chapter contains information and guidelines for scientists, researchers and medical professionals who intend to install a pilot in their ward, service or department within the framework of an R&D project co-funded by the European Union. It analyses the requirements, costs and benefits of doing so.
This paper describes a well-assessed procedure for the validation of knowledge-based expert systems, as it results from the authors' recent experience. The adopted strategy, including retrospective as well as prospective case samples and different groups of observers, fulfils the most important needs of expert system validation. The study describes an upgrading protocol based on the analysis of misclassification errors; a procedure for system verification; a detailed study design for laboratory validation considering diagnostic accuracy, diagnostic efficiency and advice analysis; and a section devoted to field testing. The relevance and limitations of the proposed evaluation protocol are discussed with respect to the complexity, inconveniences and thoroughness of the adopted procedure.
The goals of the TELEGASTRO project are to develop a consensus view on specific areas of gastroenterology, to develop a multimedia package with this information, and to distribute and evaluate the package. The evaluation procedure is monitored by an independent evaluation panel and has been specified in a detailed protocol. The evaluation covers the validation of the knowledge base, piece by piece and as a whole, and the evaluation of the functionality and usability of the program. The evaluation carried out so far has clearly shown that the knowledge base of the system is of high quality and accepted by clinical experts, and that its usability and functionality are rated very highly by different user groups. The next step is to evaluate the clinical impact.
G. Otte, S. Vingtoft, G. Sieben, B. Johnsen, A. Fuglsang-Frederiksen, J. Rønager, M. Veloso, P. Barahona, A. Vila, P. Fawcett, I. Scofield, J. Ladegaard, A. Talbot, R. Liguori, W. Nix, J.D. Guieu, M.S. Carvalho
The prospective multi-centre evaluation study of the knowledge-based system (KBS) KANDID revealed pronounced variations among seven European EMG laboratories, particularly in local epidemiology, examination techniques, preferred examination protocols and diagnostic criteria. As a result of this study, a clinical network of eight electromyography (EMG) centres has harmonised the terminology used and the structure of the interpretation process of EMG examinations. Based on these specifications, the ESTEEM project has developed the EMG platform, which provides the clinical users of ESTEEM with a set of applications for local EMG data acquisition, storage and interpretation, and for telecommunication between the clinical ESTEEM centres. With such a framework established, a medical audit and consensus process across Europe is now continuously ongoing on a daily basis. Within this emerging infrastructure it will in the near future be possible to apply evaluation tools for the assessment of different EMG KBSs.
Jytte Brender, Jan Talmon, Pirkko Nykänen, Peter McNair, Michel Demeester, Régis Beuscart
More and more emphasis is put in the workplans of the EU on the validation and assessment of information technologies. Several projects have been established to show the validity of the prototypes developed in various AIM projects. ISAR is one of those projects, aiming at the integration of six AIM prototypes in the University Hospital environment in Lille, France. In this contribution we describe the evaluation methodology for integration projects that has been developed and is applied in ISAR. We describe the methodology for the evaluation of KBS and the quality assessment framework – both developed in the KAVAS and KAVAS-2 projects of AIM – on which the evaluation methodology for integration processes was based. We also identify some practical aspects of the assessment of the integration of individual prototypes as well as of the overall integration process.
Rudi Verbeeck, Johan Michiels, Bart Nuttin, Michael Knauth, Dirk Vandermeulen, Paul Suetens, Guy Marchal, Jan Gybels
This paper describes a protocol for the technical and clinical evaluation of a workstation for the planning of stereotactic neurosurgical interventions that is being developed in the framework of a joint European research project (COVIRA). Although several such workstations have been proposed before, the final and most important step, that of clinical validation, has often been lacking: the developers failed to rigorously prove that their product was useful.
The method presented here basically assesses the clinical relevance of the user requirements that are at the root of the development of the new technology. The evaluation consists of two stages. During functional specification, iterative prototyping is used to establish the clinical requirements and to assure the quality of the final product. A case-study design is used in the second stage to assess clinical acceptability. A before-after study over a 6-month period gives a first indication of cost effectiveness and of the improvement in health care quality.
The workstation will be evaluated in an academic hospital. The neurosurgeons have actively participated in the development process.
The objective of the EurIpacs project is “to create and implement a second generation PACS, proving that it can be applied widely in Europe”. The project involves 24 partners from all over Europe and covers a wide area of research, development and implementation of hardware and software in hospitals to show the functionality and feasibility of PACS in a clinical environment. One of the topics in EurIpacs is TEASS, which analyses the costs and benefits of prototype PACS systems in three different clinical settings using a pre-post measurement approach. The aim of TEASS is to produce guidelines for the cost-effective implementation of PACS. Moreover, TEASS has produced a PC software package to support cost analysis. This paper describes the methodological approach developed in TEASS to assess the costs and benefits of PACS. Example results for one of the three clinical sites (the Aachen University Hospital) are shown.
The basic idea from which we started was to test the possibility of improving the quality of health care by the use of a MILORD system. The framework and the set of specific indicators and methods used for clinical evaluation in the MILORD project are described, in terms of both outcome and process indicators. The involvement and the experience of the main testing site are reported. Finally, a short discussion on the methodology of evaluation is provided, highlighting the importance of treating the activity of assessment with the same criteria as a scientific experimental activity.