TY - UNPB
T1 - Progressive Growing of Patch Size
T2 - Resource-Efficient Curriculum Learning for Dense Prediction Tasks
AU - Fischer, Stefan M.
AU - Felsner, Lina
AU - Osuala, Richard
AU - Kiechle, Johannes
AU - Lang, Daniel M.
AU - Peeken, Jan C.
AU - Schnabel, Julia A.
N1 - Accepted at MICCAI 2024. Changes in the MICCAI 2024 camera-ready version (missing in this arXiv submission): replaced the t-test with the Wilcoxon signed-rank test, as DSC samples are not normally distributed; as a result, PGPS/PGPS+ show only significant improvements and no significant decreases in performance.
PY - 2024/7/10
Y1 - 2024/7/10
AB - In this work, we introduce Progressive Growing of Patch Size, a resource-efficient implicit curriculum learning approach for dense prediction tasks. Our curriculum approach is defined by growing the patch size during model training, which gradually increases the task's difficulty. We integrated our curriculum into the nnU-Net framework and evaluated the methodology on all 10 tasks of the Medical Segmentation Decathlon. With our approach, we are able to substantially reduce runtime, computational costs, and CO2 emissions of network training compared to classical constant patch size training. In our experiments, the curriculum approach resulted in improved convergence. We are able to outperform standard nnU-Net training, which is trained with constant patch size, in terms of Dice Score on 7 out of 10 MSD tasks while only spending roughly 50% of the original training runtime. To the best of our knowledge, our Progressive Growing of Patch Size is the first successful employment of a sample-length curriculum in the form of patch size in the field of computer vision. Our code is publicly available at https://github.com/compai-lab/2024-miccai-fischer.
KW - cs.CV
M3 - Preprint
BT - Progressive Growing of Patch Size
ER -