Abstract
The results of this study provide greater insight into the factors that explain differences in the judgments of IGZ inspectors. The study shows that IGZ inspectors systematically differ in the regulatory judgments they assign to similar health care institutions, and that they systematically tend to assign judgments that are too positive compared with the IGZ corporate standards. The study also examined whether the reliability and validity of regulatory judgments varied between two types of instruments: a lightly structured instrument (LSI) used for the regulation of hospital care was compared with a highly structured instrument (HSI) used for the regulation of nursing home care. The results indicated problems with the reliability and validity of the judgments assigned with the HSI; for the LSI, reliability and validity could not be calculated at all. On balance, the results showed that using an HSI is preferable to using an LSI.
To analyze the interventions professionals carry out to improve reliability, a systematic meta-analytic review of the research literature was performed. Three types of interventions could be distinguished: improving the diagnostic instrument, training the professional, and a combination of both. Although all types of interventions are effective on average, improving the diagnostic instrument appears to be the most effective, particularly in the case of highly technical instruments. Because instrumental variables constitute a major source of error, improving the instrument is an important approach. However, this review also offers solid arguments, complementing the literature and current practice, for focusing on training the user of the instrument.
An experimental study was performed to determine what kind of intervention would be effective for improving the reliability and validity of the regulatory judgments of IGZ inspectors. A case study was set up to examine the effect of participating in a consensus meeting and the effect of improving the regulatory instrument.
The results showed that when an HSI was used, participating in a consensus meeting improved both the reliability and the validity of the regulatory judgments. Adjusting this instrument influenced, but did not improve, the reliability and validity of the judgments. This means that changing the instrument without training the inspectors in the use of the adjusted instrument does not improve the reliability and validity of the judgments. These outcomes emphasize the importance of the human factor in explaining variance between inspectors, and highlight the significance of training inspectors in the use of regulatory instruments. The effect of increasing the number of inspectors per case was also calculated; doing so increased the reliability and validity of the regulatory judgments.
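The abstract does not state how the effect of increasing the number of inspectors per case was calculated. As a hedged illustration only, and not necessarily the method used in the thesis, a standard way to express why pooling more raters raises reliability is the Spearman-Brown prophecy formula:

```latex
% Illustrative sketch (assumption): Spearman-Brown prophecy formula.
% \rho_1 = reliability of a single inspector's judgment,
% \rho_k = reliability of the pooled judgment of k inspectors.
\[
  \rho_k = \frac{k\,\rho_1}{1 + (k-1)\,\rho_1}
\]
% Worked example: with \rho_1 = 0.60 and k = 3 inspectors,
% \rho_3 = (3 \times 0.60) / (1 + 2 \times 0.60) = 1.80 / 2.20 \approx 0.82.
```

Under this kind of model, reliability rises monotonically with the number of inspectors per case, which is consistent with the finding reported above.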
Original language | English |
---|---|
Qualification | Doctor of Philosophy |
Awarding Institution | |
Supervisors/Advisors | |
Award date | 9 Sept 2014 |
Publisher | |
Print ISBNs | 978-94-6259-145-5 |
Publication status | Published - 9 Sept 2014 |