

Selected evaluation studies (n = 108) of automated information systems in the diffusion phase of their life cycle are classified according to the following features: type of automated information system, type of study design, type of effect measure, type of data collection method and type of economic analysis. Results show that certain types of automated information systems (i.e. auxiliary systems) have rarely been evaluated. Furthermore, certain study designs (e.g. time series design), data collection methods (e.g. modelling and simulation) and effect measures (e.g. job satisfaction) are hardly used. The study demonstrates a relation between the type of automated information system and the effect measured. The evaluation of diagnostic systems is mostly aimed at process measures (e.g. the performance of the user), whereas for the evaluation of treatment systems outcome measures (e.g. mortality, QALY) are clearly preferred. The evaluation of nursing and supporting systems mainly focuses on effects on structure measures (e.g. time consumption of personnel). There seems to be a trend towards the use of weaker study designs, mainly for the evaluation of supporting systems and nursing systems. The use of a strong study design does not, of course, guarantee a high-quality evaluation study. As the analysis shows, most of the studies addressed less than 60% of the attributes of a quality checklist, indicating that there is little reason to have great confidence in the results, even of the stronger designs. We were surprised to find only 6 studies out of 108 that carried out an economic analysis alongside a controlled clinical trial. Moreover, the quality of the economic analyses performed does not inspire great confidence.