Kunyu Peng, M.Sc.

  • Vincenz-Priessnitz-Str. 3
    76131 Karlsruhe

Introduction

A journey toward more generalizable deep-learning-based video understanding and action recognition.

I received my B.Sc. degree in Automation from Beijing Institute of Technology (BIT) and my M.Sc. degree in Electrical Engineering and Information Technology from Karlsruhe Institute of Technology (KIT) in 2021. I am currently a Ph.D. candidate at the Computer Vision for Human-Computer Interaction Lab at KIT. My research focuses on generalizable human activity recognition and video understanding. Human activity recognition (HAR) plays a crucial role in enhancing daily life by enabling technologies such as healthcare monitoring, smart home automation, and assistive robotics. It helps track and analyze physical activities, allowing for personalized interventions, improved safety, and greater productivity. As the foundation of many advanced AI-driven systems, HAR is vital for creating seamless human-machine interactions that improve overall well-being and quality of life.

Therefore, developing generalized human activity recognition algorithms would greatly contribute to society. Students interested in activity recognition and video understanding are welcome to apply for Master’s and Bachelor’s thesis opportunities.

Google Scholar: https://scholar.google.com/citations?user=pA9c0YsAAAAJ&hl=zh-CN


Recent publications:

  1. Peng, K., Wen, D., Yang, K., Luo, A., Chen, Y., Fu, J., Sarfraz, M.S., Roitberg, A., & Stiefelhagen, R. (2024). Advancing Open-Set Domain Generalization Using Evidential Bi-Level Hardest Domain Scheduler. The Thirty-Eighth Annual Conference on Neural Information Processing Systems, 2024. (NeurIPS 2024)
  2. Peng, K., Schneider, D., Roitberg, A., Yang, K., Zhang, J., Deng, C., Zhang, K., Sarfraz, M.S., & Stiefelhagen, R. (2024). Towards Activated Muscle Group Estimation in the Wild. ACM Multimedia 2024. (MM 2024)
  3. Peng, K., Fu, J., Yang, K., Wen, D., Chen, Y., Liu, R., Zheng, J., Zhang, J., Sarfraz, M.S., Stiefelhagen, R., & Roitberg, A. (2024). Referring Atomic Video Action Recognition. The 18th European Conference on Computer Vision. (ECCV 2024)
  4. Xu, Y., Peng, K., Wen, D., Liu, R., Zheng, J., Chen, Y., Zhang, J., Roitberg, A., Yang, K., & Stiefelhagen, R. (2024). Skeleton-Based Human Action Recognition with Noisy Labels. IEEE/RSJ International Conference on Intelligent Robots and Systems. (IROS2024) (Master Student Work)
  5. Peng, K., Yin, C., Zheng, J., Liu, R., Schneider, D., Zhang, J., Yang, K., Sarfraz, M.S., Stiefelhagen, R., & Roitberg, A. (2024). Navigating Open Set Scenarios for Skeleton-Based Action Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4487-4496.
  6. Peng, K., Roitberg, A., Yang, K., Zhang, J., & Stiefelhagen, R. (2023). Delving Deep Into One-Shot Skeleton-Based Action Recognition With Diverse Occlusions. IEEE Transactions on Multimedia, vol. 25, pp. 1489-1504. doi: 10.1109/TMM.2023.3235300.
  7. Xiao, H*., Peng, K*., Huang, X., Roitberg, A., Li, H., Wang, Z., & Stiefelhagen, R. (2023). Toward Privacy-Supporting Fall Detection via Deep Unsupervised RGB2Depth Adaptation. IEEE Sensors Journal, 23, 29143-29155.
  8. Peng, K., Roitberg, A., Schneider, D., Koulakis, M., Yang, K., & Stiefelhagen, R. (2021). Affect-DML: Context-Aware One-Shot Recognition of Human Affect using Deep Metric Learning. 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), 1-8.
  9. Wei, Y., Peng, K., Roitberg, A., Zhang, J., Zheng, J., Liu, R., Chen, Y., Yang, K., & Stiefelhagen, R. (2024). Elevating Skeleton-Based Action Recognition with Efficient Multi-Modality Self-Supervision. ICASSP 2024. (Master Student Work)
  10. Peng, K., Roitberg, A., Yang, K., Zhang, J., & Stiefelhagen, R. (2022). TransDARC: Transformer-based Driver Activity Recognition with Latent Space Feature Calibration. 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 278-285.
  11. Peng, K., Roitberg, A., Yang, K., Zhang, J., & Stiefelhagen, R. (2022). Should I take a walk? Estimating Energy Expenditure from Video Data. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2074-2084.
  12. Wei, P., Peng, K., Roitberg, A., Yang, K., Zhang, J., & Stiefelhagen, R. (2022). Multi-modal Depression Estimation based on Sub-attentional Fusion. ECCV Workshops. (Bachelor Student Work)
  13. Tanama, C., Peng, K., Marinov, Z., Stiefelhagen, R., & Roitberg, A. (2023). Quantized Distillation: Optimizing Driver Activity Recognition Models for Resource-Constrained Environments. 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 5479-5486. (Master Student Work)
  14. Roitberg, A., Peng, K., Marinov, Z., Seibold, C., Schneider, D., & Stiefelhagen, R. (2022). A Comparative Analysis of Decision-Level Fusion for Multimodal Driver Behaviour Understanding. 2022 IEEE Intelligent Vehicles Symposium (IV), 1438-1444.
  15. Roitberg, A., Peng, K., Schneider, D., Yang, K., Koulakis, M., Martínez, M., & Stiefelhagen, R. (2022). Is My Driver Observation Model Overconfident? Input-Guided Calibration Networks for Reliable and Interpretable Confidence Estimates. IEEE Transactions on Intelligent Transportation Systems, 23, 25271-25286.