Dr.-Ing. Simon Reiß

  • Adenauerring 10
    Building 50.28
    76131 Karlsruhe, Germany

About me

Hello there, my name is Simon. I work as a research assistant at the Computer Vision for Human Computer Interaction Lab.

I am excited about democratizing machine learning and making its benefits accessible to endeavours of all sizes.
In my research, I aim to break down the obstacles in deploying vision technology by designing cost-efficient methods that still deliver excellent performance.
I develop computer vision algorithms that leverage small and inexpensive data to solve semantic segmentation tasks.

If you are a passionate student interested in bringing streamlined semantic segmentation systems to all developers, no matter how niche the field of application, shoot me an e-mail and come work with me (see below for open thesis topics).

Topics that interest me most include, but are not limited to, semi-supervised learning, weakly-supervised learning, self-supervised learning, medical image segmentation, and unsupervised image segmentation.

Master’s Thesis on Assessing and Improving in-Context Learners on Out-of-Domain Datasets

  • Subject: Computer Vision, in-Context Learning
  • Type: Master's Thesis
  • Tutor: Simon Reiß

  • Vision models that can solve a task for a new incoming image, given only a "visual description" of that task (a prompt) and without being explicitly re-trained on it, are so-called in-Context Learners (a small illustrative sketch of this prompting setup follows below the topic description).

    In this master’s thesis, I would like to explore together with you where the limits of such models lie and how well their capabilities transfer to data from imaging domains they were not explicitly trained on. The evaluation might include microscopy imagery, satellite images, medical images and more, as well as a broad variety of tasks in these domains. Based on the observed transfer capabilities, adaptation strategies for training in-Context Learners will be explored to address the identified shortcomings.

    Topic description: PDF
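
    To make the idea of prompting an in-Context Learner concrete, below is a minimal, purely illustrative sketch: one prompt image together with its mask serves as the "visual description" of the task, and the model predicts a mask for a new query image without any re-training. The ToyInContextSegmenter class, its layers, shapes and variable names are assumptions for illustration only and do not correspond to the actual models examined in the thesis.

# Hypothetical sketch of querying a visual in-Context Learner.
# Real systems differ in architecture and prompt format; this only
# illustrates the input/output interface described in the topic above.
import torch
import torch.nn as nn

class ToyInContextSegmenter(nn.Module):
    """Stand-in model: conditions query features on a prompt (image + mask)
    and predicts a query mask without task-specific re-training."""

    def __init__(self, channels: int = 16):
        super().__init__()
        self.encode_image = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.encode_mask = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.decode = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, prompt_img, prompt_mask, query_img):
        # The prompt acts as the "visual description" of the task,
        # the query is the new incoming image.
        prompt_feat = self.encode_image(prompt_img) + self.encode_mask(prompt_mask)
        query_feat = self.encode_image(query_img)
        return torch.sigmoid(self.decode(query_feat + prompt_feat))

# One prompt example (image + mask) defines the task; the query image may
# come from an unseen imaging domain (microscopy, satellite, medical, ...).
prompt_img = torch.rand(1, 3, 128, 128)
prompt_mask = torch.rand(1, 1, 128, 128).round()
query_img = torch.rand(1, 3, 128, 128)

model = ToyInContextSegmenter()
with torch.no_grad():
    query_mask = model(prompt_img, prompt_mask, query_img)
print(query_mask.shape)  # torch.Size([1, 1, 128, 128])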

Publications & Collaborations


360BEV: Panoramic Semantic Mapping for Indoor Bird’s-Eye View
Teng, Z.; Zhang, J.; Yang, K.; Peng, K.; Shi, H.; Reiß, S.; Cao, K.; Stiefelhagen, R.
2024. IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, 4th-8th January 2024
Detailed Annotations of Chest X-Rays via CT Projection for Report Understanding
Seibold, C.; Reiß, S.; Sarfraz, S.; Fink, M. A.; Mayer, V.; Sellner, J.; Kim, M. S.; Maier-Hein, K. H.; Kleesiek, J.; Stiefelhagen, R.
2022. 33rd British Machine Vision Conference Proceedings, BMVC 2022, British Machine Vision Association, BMVA
Delivering Arbitrary-Modal Semantic Segmentation
Zhang, J.; Liu, R.; Shi, H.; Yang, K.; Reiß, S.; Peng, K.; Fu, H.; Wang, K.; Stiefelhagen, R.
2023. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, Canada, 18-22 June 2023, 1136 – 1147, Institute of Electrical and Electronics Engineers (IEEE). doi:10.1109/CVPR52729.2023.00116
Decoupled Semantic Prototypes enable learning from diverse annotation types for semi-weakly segmentation in expert-driven domains
Reiß, S.; Seibold, C.; Freytag, A.; Rodner, E.; Stiefelhagen, R.
2023. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15495–15506
Capturing omni-range context for omnidirectional segmentation
Yang, K.; Zhang, J.; Reiß, S.; Hu, X.; Stiefelhagen, R.
2021. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1376–1386, Institute of Electrical and Electronics Engineers (IEEE). doi:10.1109/CVPR46437.2021.00143
Graph-Constrained Contrastive Regularization for Semi-weakly Volumetric Segmentation
Reiß, S.; Seibold, C.; Freytag, A.; Rodner, E.; Stiefelhagen, R.
2022. Computer Vision – ECCV 2022 – 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXI. Ed.: S. Avidan, 401–419, Springer Nature Switzerland. doi:10.1007/978-3-031-19803-8_24
Breaking with Fixed Set Pathology Recognition Through Report-Guided Contrastive Training
Seibold, C.; Reiß, S.; Sarfraz, M. S.; Stiefelhagen, R.; Kleesiek, J.
2022. Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 – 25th International Conference, Singapore, September 18–22, 2022, Proceedings, Part V. Ed.: L. Wang, 690–700, Springer International Publishing. doi:10.1007/978-3-031-16443-9_66
Breaking with Fixed Set Pathology Recognition through Report-Guided Contrastive Training
Seibold, C.; Reiß, S.; Sarfraz, M. S.; Stiefelhagen, R.; Kleesiek, J.
2022. doi:10.5445/IR/1000146800
Reference-guided Pseudo-Label Generation for Medical Semantic Segmentation
Seibold, C.; Reiß, S.; Kleesiek, J.; Stiefelhagen, R.
2022. Thirty-sixth AAAI conference on artificial intelligence. Online, 22.02.2022 - 01.03.2022, 2171–2179, Association for the Advancement of Artificial Intelligence (AAAI)
Deep Classification-driven Domain Adaptation for Cross-Modal Driver Behavior Recognition
Reiß, S.; Roitberg, A.; Haurilet, M.; Stiefelhagen, R.
2020. 31st IEEE Intelligent Vehicles Symposium, IV 2020, Virtual, Las Vegas, United States, 19 October 2020 through 13 November 2020, 1042–1047, Institute of Electrical and Electronics Engineers (IEEE). doi:10.1109/IV47402.2020.9304782
Activity-aware attributes for zero-shot driver behavior recognition
Reiß, S.; Roitberg, A.; Haurilet, M.; Stiefelhagen, R.
2020. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2020; Virtual, Online; United States; 14 June 2020 through 19 June 2020, 3950–3955, Institute of Electrical and Electronics Engineers (IEEE). doi:10.1109/CVPRW50498.2020.00459
Drive&Act: A Multi-modal Dataset for Fine-grained Driver Behavior Recognition in Autonomous Vehicles
Martin, M.; Roitberg, A.; Haurilet, M.; Horne, M.; Reiß, S.; Voit, M.; Stiefelhagen, R.
2019. IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), 27 Oct.-2 Nov. 2019, 2801–2810, Institute of Electrical and Electronics Engineers (IEEE). doi:10.1109/ICCV.2019.00289

Media