Boosting

Last revised by Andrew Murphy on 2 Aug 2021

Boosting is an ensemble technique that builds a complex classifier from relatively simple decision rules, typically for binary classification tasks. New models (or 'weak' learners) are trained sequentially, each focusing on the examples that were classified incorrectly by the previous models. These weak learners are then combined into a single strong learner by taking a weighted vote of their decisions.

This method is strongest when there is minimal correlation between the component weak learners – that is, when the errors of the weak learners occur in different circumstances. Sequential training encourages this: each new learner is trained with an increased penalty for misclassifying the cases that previous learners got wrong.
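The re-weighting and weighted-vote idea described above can be sketched as a minimal AdaBoost-style loop over decision stumps. This is an illustrative toy implementation on made-up one-dimensional data, not a production classifier; the dataset, threshold search, and round count are all assumptions chosen so that no single stump can solve the problem but a few boosted stumps can.

```python
import math

def train_stump(xs, ys, w):
    """Pick the 1-D threshold stump with the lowest weighted error.
    A stump predicts `polarity` for x < threshold and `-polarity` otherwise."""
    srt = sorted(xs)
    thresholds = [srt[0] - 1] + [(a + b) / 2 for a, b in zip(srt, srt[1:])] + [srt[-1] + 1]
    best = None
    for thr in thresholds:
        for polarity in (+1, -1):
            preds = [polarity if x < thr else -polarity for x in xs]
            err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
            if best is None or err < best[0]:
                best = (err, thr, polarity)
    return best

def adaboost(xs, ys, rounds):
    n = len(xs)
    w = [1 / n] * n                    # start with uniform example weights
    ensemble = []                      # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        err, thr, pol = train_stump(xs, ys, w)
        err = max(err, 1e-12)          # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)   # vote weight for this learner
        ensemble.append((alpha, thr, pol))
        # re-weight: misclassified examples gain weight, correct ones lose it
        w = [wi * math.exp(-alpha * y * (pol if x < thr else -pol))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Strong learner: sign of the alpha-weighted vote of the weak learners."""
    score = sum(alpha * (pol if x < thr else -pol) for alpha, thr, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy data: positives at both ends, negatives in the middle --
# no single stump separates them, but three boosted stumps do
xs = list(range(10))
ys = [1, 1, 1, -1, -1, -1, -1, 1, 1, 1]
model = adaboost(xs, ys, rounds=3)
print([predict(model, x) for x in xs])  # reproduces ys: zero training error
```

Note how each round's weight update concentrates the next learner on the previously misclassified cases, and how the final vote is weighted by each learner's accuracy (alpha) rather than being a simple majority.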

Boosting in radiology

Suppose there are three algorithms designed to detect consolidation on chest x-ray: A, B and C. Let algorithm A be accurate except when the radiograph is over-exposed, algorithm B be accurate except when the patient is rotated, and algorithm C misclassify all atelectasis as consolidation. A simple model which uses a majority vote of these component models would be more accurate than any of algorithms A, B and C in isolation. For example, if the film is over-exposed and algorithm A misclassifies the chest x-ray, its incorrect vote will be outvoted by algorithms B and C, which will both vote for the correct diagnosis.
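The chest x-ray scenario above can be sketched as a small simulation. The three detectors, their failure modes, and the case fields are hypothetical stand-ins for the narrative, used only to show how a majority vote recovers the correct diagnosis when a single component fails:

```python
def algo_a(case):
    # accurate except on over-exposed films (assumed failure mode)
    truth = case["consolidation"]
    return (not truth) if case["over_exposed"] else truth

def algo_b(case):
    # accurate except when the patient is rotated (assumed failure mode)
    truth = case["consolidation"]
    return (not truth) if case["rotated"] else truth

def algo_c(case):
    # calls "consolidation" for true consolidation AND for atelectasis
    return case["consolidation"] or case["atelectasis"]

def majority_vote(case):
    votes = [algo_a(case), algo_b(case), algo_c(case)]
    return sum(votes) >= 2  # at least two of three vote "consolidation"

# over-exposed film with true consolidation: A votes wrongly,
# but B and C outvote it and the ensemble is correct
case = {"consolidation": True, "over_exposed": True,
        "rotated": False, "atelectasis": False}
print(majority_vote(case))  # True
```

Because each algorithm's errors occur in different circumstances, at most one vote is wrong on any given case, so the majority is always correct here; a full boosting model would additionally weight each vote by that learner's accuracy.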
