In the past, most diagnostic accuracy studies followed a univariable, single-test approach aimed at quantifying sensitivity, specificity or likelihood ratios. However, single-test studies and measures do not reflect a test's added value. What is informative is not the singular association between a particular test result or predictor and the diagnostic outcome, but the test's value beyond the diagnostic information already available. Multivariable modelling is necessary to estimate the value of a particular test conditional on other test results. Diagnostic prediction rules are not a panacea, however: they have certain drawbacks, such as overoptimistic accuracy when applied to new patients, although methods have recently been described to overcome some of these.

Typically, diagnostic research selects a cohort of patients with an indication for the diagnostic procedure of interest, defined by the suspicion that they have the disease of interest, and analyses the data cross-sectionally. When appropriate analyses are applied, results from nested case-control studies are virtually identical to results based on a full cohort analysis. We showed that the nested case-control design offers investigators a valid and efficient alternative to a full cohort approach in diagnostic research. This may be particularly important when the results of the test under study are costly or difficult to collect.
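To make the idea of added value concrete, the following minimal sketch (not from the thesis; all variable names and the synthetic data are illustrative assumptions) compares a multivariable logistic model with and without a new test. The test's univariable accuracy overstates its contribution because it partly duplicates information already in the other predictors.

```python
# Illustrative sketch: a test's added value = improvement of a multivariable
# model when the test is included, not the test's univariable accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Pre-existing diagnostic information (e.g. history, physical examination).
age = rng.normal(50, 10, n)
symptom = rng.binomial(1, 0.4, n)

# The new test reflects the same underlying signal as the other predictors,
# so its apparent (univariable) accuracy exaggerates its added value.
latent = 0.03 * (age - 50) + 0.8 * symptom + rng.normal(0, 1, n)
disease = rng.binomial(1, 1 / (1 + np.exp(-latent)))
new_test = latent + rng.normal(0, 1.0, n)

X_base = np.column_stack([age, symptom])
X_full = np.column_stack([age, symptom, new_test])

auc_base = roc_auc_score(
    disease, LogisticRegression(max_iter=1000).fit(X_base, disease).predict_proba(X_base)[:, 1])
auc_full = roc_auc_score(
    disease, LogisticRegression(max_iter=1000).fit(X_full, disease).predict_proba(X_full)[:, 1])
print(f"apparent AUC without test: {auc_base:.3f}, with test: {auc_full:.3f}")
```

The difference between the two (apparent) AUCs, rather than the test's standalone sensitivity or specificity, is what the multivariable approach quantifies.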
Randomised controlled trials are considered to deliver the highest level of evidence for answering research questions, and this paradigm has also been applied to diagnostic research. We described why a randomised study design is not always necessary to evaluate a test's value for changing patient outcome. A test's effect on patient outcome can be inferred, and indeed considered as quantified using decision analysis, (1) if the test is meant to include or exclude a disease for which an established reference standard is available, (2) if a cross-sectional accuracy study has shown the test's ability to adequately detect the presence or absence of that disease against the reference, and (3) if proper randomised therapeutic studies have provided evidence on the efficacy of the optimal management of that disease. In such instances diagnostic research does not require an additional randomised comparison between two (or more) test-treatment strategies (one with and one without the test under study) to establish the test's effect on patient outcome. Accordingly, diagnostic research, including the quantification of the effects of diagnostic testing on patient outcome, may be executed more efficiently.
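The decision-analytic linkage of accuracy evidence to patient outcome can be sketched with a simple expected-utility calculation. The numbers below are purely illustrative assumptions, not data from the thesis: accuracy comes from a cross-sectional study, treatment efficacy from randomised therapeutic trials, and the two are combined without a randomised test-treatment trial.

```python
# Illustrative decision analysis: compare 'treat all' with 'test, then
# treat positives', using assumed accuracy and treatment-efficacy figures.
prevalence = 0.20                        # prior probability of disease
sensitivity, specificity = 0.90, 0.85    # from a cross-sectional accuracy study
p_cure_treated = 0.80                    # from randomised therapeutic trials
p_cure_untreated = 0.30
harm_overtreatment = 0.05                # utility loss from treating the healthy

def expected_utility(use_test: bool) -> float:
    if not use_test:  # treat everyone without testing
        return (prevalence * p_cure_treated
                + (1 - prevalence) * (1 - harm_overtreatment))
    # test-based strategy: treat only test positives
    tp = prevalence * sensitivity
    fn = prevalence * (1 - sensitivity)
    fp = (1 - prevalence) * (1 - specificity)
    tn = (1 - prevalence) * specificity
    return (tp * p_cure_treated + fn * p_cure_untreated
            + fp * (1 - harm_overtreatment) + tn * 1.0)

print(f"treat all:        {expected_utility(False):.3f}")
print(f"test, then treat: {expected_utility(True):.3f}")
```

Under these assumed inputs the testing strategy yields the higher expected utility (0.944 versus 0.920), showing how the test's effect on outcome can be quantified from existing evidence alone.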
Diagnostic research aims to quantify a test's added contribution, given the other diagnostic information available to the physician, in determining the presence or absence of a particular disease. Diagnostic prediction rules are commonly developed with dichotomous logistic regression analysis to predict the presence or absence of a disease. We showed that genetic programming and polytomous modelling are promising alternatives to conventional dichotomous logistic regression for developing diagnostic prediction rules. The main advantage of genetic programming is its ability to create more flexible models with better discrimination. This is especially important in large data sets in which complex interactions between predictors and outcomes may be present.
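The contrast can be sketched as follows. This assumes the third-party gplearn library; the thesis did not use gplearn, and the synthetic data and all parameter settings are illustrative. The outcome depends on a predictor interaction that a main-effects logistic model cannot represent, which is exactly the kind of structure an evolved rule can discover.

```python
# Illustrative comparison of an evolved (genetic programming) diagnostic
# rule with a main-effects logistic regression, assuming gplearn.
import numpy as np
from gplearn.genetic import SymbolicClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1500, 3))
# Disease probability driven by an interaction of the first two predictors.
p = 1 / (1 + np.exp(-(2.0 * X[:, 0] * X[:, 1] - X[:, 2])))
y = rng.binomial(1, p)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gp = SymbolicClassifier(population_size=500, generations=20,
                        parsimony_coefficient=0.01,
                        random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression().fit(X_tr, y_tr)

print("GP AUC:", roc_auc_score(y_te, gp.predict_proba(X_te)[:, 1]))
print("LR AUC:", roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]))
print("evolved rule:", gp._program)
```

Because the GP searches over model structure as well as coefficients, it can recover the interaction automatically, at the cost of a larger search and a greater need for validation against overfitting.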
We also showed that the development of a diagnostic prediction rule is not the end of the research line, even when the rule is subsequently adjusted for optimism using internal validation techniques such as the bootstrap. External validation of such rules in new patients is always required before a rule is introduced into daily practice; internal validation alone may not be sufficient to indicate the model's performance in future patients. Rather than viewing a validation data set as a separate study for estimating an existing rule's performance, validation data may be combined with the data of previous derivation studies to generate more robust prediction models using recently suggested methods.
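A minimal sketch of this validation-and-updating step follows, on assumed synthetic data (not the thesis cohorts): the rule is derived in one cohort, its discrimination is checked in new patients, and rather than refitting the whole rule, its linear predictor is recalibrated on the new data (a standard updating method, here via logistic recalibration).

```python
# Illustrative external validation plus simple model updating.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def make_cohort(n, shift=0.0):
    X = rng.normal(size=(n, 4))
    logit = X @ np.array([0.8, -0.5, 0.3, 0.0]) + shift  # shifted baseline risk
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    return X, y

X_dev, y_dev = make_cohort(1000)             # derivation cohort
X_val, y_val = make_cohort(600, shift=-0.7)  # new patients, lower baseline risk

rule = LogisticRegression().fit(X_dev, y_dev)
lp_val = rule.decision_function(X_val)       # original rule's linear predictor

print("external AUC:", roc_auc_score(y_val, lp_val))

# Updating step: re-estimate intercept and calibration slope on new patients,
# keeping the original rule's relative weights intact.
recal = LogisticRegression().fit(lp_val.reshape(-1, 1), y_val)
print("calibration slope:", recal.coef_[0, 0], "intercept:", recal.intercept_[0])
```

A calibration slope below 1 would signal the overoptimism that internal validation can miss, and the recalibrated rule reuses, rather than discards, the information in the derivation data.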
* Contents
* Chapter 1: Introduction
* Chapter 2: Test research versus diagnostic research
* Chapter 3: Distraction from randomisation in diagnostic research
* Chapter 4: Reappraisal of the nested case-control design in diagnostic research: updating the STARD guideline
* Chapter 5: Validating and updating a prediction rule for neurological sequelae after childhood bacterial meningitis
* Chapter 6: Genetic programming or multivariable logistic regression in diagnostic research
* Chapter 7: Revisiting polytomous regression for diagnostic studies
* Chapter 8: Concluding remarks
* Summary
* Samenvatting (summary in Dutch)
* Dankwoord (acknowledgements)
* Curriculum Vitae
* Volledig proefschrift (full thesis, 520 kB)