Selected Publications


(* equal contribution, † corresponding author.)

Behind Every Domain There is a Shift: Adapting Distortion-aware Vision Transformers for Panoramic Semantic Segmentation
J. Zhang, K. Yang, H. Shi, S. Reiß, K. Peng, C. Ma, H. Fu, P. Torr, K. Wang, R. Stiefelhagen.
IEEE T-PAMI 2024 Paper Code

CoBEV: Elevating Roadside 3D Object Detection with Depth and Height Complementarity
H. Shi*, C. Peng*, J. Zhang*, K. Yang, Y. Wu, H. Ni, Y. Lin, R. Stiefelhagen, K. Wang.
IEEE T-IP 2024 Paper Code

Open Panoramic Segmentation
J. Zheng, R. Liu, Y. Chen, K. Peng, C. Wu, K. Yang, J. Zhang†, R. Stiefelhagen.
ECCV 2024 Project page Paper Code

Occlusion-Aware Seamless Segmentation
Y. Cao*, J. Zhang*, H. Shi, K. Peng, Y. Zhang, H. Zhang, R. Stiefelhagen, K. Yang.
ECCV 2024 Paper Code

Referring Atomic Video Action Recognition
K. Peng, J. Fu, K. Yang, D. Wen, Y. Chen, R. Liu, J. Zheng, J. Zhang, S. Sarfraz, R. Stiefelhagen, A. Roitberg.
ECCV 2024 Paper Code

RoDLA: Benchmarking the Robustness of Document Layout Analysis Models
Y. Chen, J. Zhang†, K. Peng, J. Zheng, R. Liu, P. Torr, R. Stiefelhagen.
CVPR 2024 Project page Paper Code

MateRobot: Material Recognition in Wearable Robotics for People with Visual Impairments
J. Zheng*, J. Zhang*, K. Yang, K. Peng, R. Stiefelhagen.
ICRA 2024 (Best Paper Finalist on HRI) Project page Paper Code

Delivering Arbitrary-Modal Semantic Segmentation
J. Zhang*, R. Liu*, S. Hao, K. Yang, S. Reiß, K. Peng, H. Fu, K. Wang, R. Stiefelhagen.
CVPR 2023 Project page Paper Code Dataset

Bending Reality: Distortion-aware Transformers for Adapting to Panoramic Semantic Segmentation
J. Zhang, K. Yang, C. Ma, S. Reiß, K. Peng, R. Stiefelhagen.
CVPR 2022 Paper Code

CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation With Transformers
J. Zhang*, H. Liu*, K. Yang*, X. Hu, R. Liu, R. Stiefelhagen.
IEEE Trans. on Intelligent Transportation Systems (T-ITS) 2023 Paper Code

Trans4Trans: Efficient Transformer for Transparent Object and Semantic Scene Segmentation in Real-World Navigation Assistance
J. Zhang, K. Yang, A. Constantinescu, K. Peng, K. Müller, R. Stiefelhagen.
IEEE Trans. on Intelligent Transportation Systems (T-ITS) 2022 Paper Code

Exploring Event-Driven Dynamic Context for Accident Scene Segmentation
J. Zhang, K. Yang, R. Stiefelhagen.
IEEE Trans. on Intelligent Transportation Systems (T-ITS) 2021 Paper Code Dataset

Transfer beyond the Field of View: Dense Panoramic Semantic Segmentation via Unsupervised Domain Adaptation
J. Zhang, C. Ma, K. Yang, A. Roitberg, K. Peng, R. Stiefelhagen.
IEEE Trans. on Intelligent Transportation Systems (T-ITS) 2021 Paper Code Dataset

360BEV: Panoramic Semantic Mapping for Indoor Bird's-Eye View
Z. Teng*, J. Zhang*†, K. Yang, K. Peng, H. Shi, S. Reiß, K. Cao, R. Stiefelhagen.
WACV 2024 Project page Paper Code Dataset

Trans4Map: Revisiting Holistic Bird's-Eye-View Mapping from Egocentric Images to Allocentric Semantics with Vision Transformers
C. Chen, J. Zhang†, K. Yang, K. Peng, R. Stiefelhagen.
WACV 2023 Paper Code

MatchFormer: Interleaving Attention in Transformers for Feature Matching
Q. Wang*, J. Zhang*, K. Yang, K. Peng, R. Stiefelhagen.
ACCV 2023 Paper Code

Capturing Omni-Range Context for Omnidirectional Segmentation
K. Yang, J. Zhang, S. Reiß, X. Hu, R. Stiefelhagen.
CVPR 2021 Paper Code

ISSAFE: Improving semantic segmentation in accidents by fusing event-based data
J. Zhang, K. Yang, R. Stiefelhagen.
IROS 2021 Paper Code Dataset

Flying Guide Dog: Walkable Path Discovery for the Visually Impaired Utilizing Drones and Transformer-based Semantic Segmentation
H. Tan, C. Chen, X. Luo, J. Zhang, C. Seibold, K. Yang, R. Stiefelhagen.
IEEE ROBIO 2021 Paper Code Video

HIDA: Towards Holistic Indoor Understanding for the Visually Impaired via Semantic Instance Segmentation with a Wearable Solid-State LiDAR Sensor
H. Liu, R. Liu, K. Yang, J. Zhang, K. Peng, R. Stiefelhagen.
ICCV Workshop on Assistive Computer Vision and Robotics (ACVR) 2021 Paper

Trans4Trans: Efficient Transformer for Transparent Object Segmentation to Help Visually Impaired People Navigate in the Real World
J. Zhang, K. Yang, A. Constantinescu, K. Peng, K. Müller, R. Stiefelhagen.
ICCV Workshop on Assistive Computer Vision and Robotics (ACVR) 2021 Paper Code

DensePASS: Dense Panoramic Semantic Segmentation via Unsupervised Domain Adaptation with Attention-Augmented Context Exchange
C. Ma, J. Zhang, K. Yang, A. Roitberg, R. Stiefelhagen.
IEEE ITSC 2021 Paper Code Dataset

Pose2Drone: A Skeleton-Pose-based Framework for Human-Drone Interaction
Z. Marinov, S. Vasileva, Q. Wang, C. Seibold, J. Zhang, R. Stiefelhagen.
IEEE EUSIPCO 2021 Paper Code

Panoptic Lintention Network: Towards Efficient Navigational Perception for the Visually Impaired
W. Mao*, J. Zhang*, K. Yang, R. Stiefelhagen.
IEEE RCAR 2021 Paper Code

Perception Framework through Real-Time Semantic Segmentation and Scene Recognition on a Wearable System for the Visually Impaired
Y. Zhang, H. Chen, K. Yang, J. Zhang, R. Stiefelhagen.
IEEE RCAR 2021 Paper

Teaching


Lectures

  • Teaching Assistant, Deep Learning for Computer Vision I: Basics, SS 2024
  • Teaching Assistant, Deep Learning for Computer Vision II: Advanced Topics, WS 21/22, WS 22/23, WS 23/24
  • Teaching Assistant, Practical Course: Computer Vision for HCI, WS 20/21, SS 2021, SS 2022
    • Flying Guide Dog: Walkable Path Discovery for the Visually Impaired Utilizing Drones and Transformer-based Semantic Segmentation, Paper Code Video
    • Pose2Drone: A Skeleton-Pose-based Framework for Human-Drone Interaction, Paper Code

  • Teaching Assistant, Seminar: Computer Vision for HCI, WS 20/21, WS 21/22, WS 23/24
  • Teaching Assistant, Seminar: Multimodal Large Language Models, SS 2024

Supervised Theses

  • Junwei Zheng. PhD student at KIT. Vision-Language Navigation. Co-supervised with Prof. Rainer Stiefelhagen.
  • Ruiping Liu. PhD student at KIT. Seamless Efficiency for Visual Perception. Co-supervised with Prof. Rainer Stiefelhagen.
  • Yufan Chen. PhD student at KIT. Document Analysis. Co-supervised with Prof. Rainer Stiefelhagen.
  • Fei Teng. PhD student at HNU. Scene Understanding. Co-supervised with Prof. Kailun Yang.
  • Jiale Wei. Master student. OneBEV: Using One Panoramic Image for Bird’s-Eye-View Semantic Mapping. 2024. Paper Code
  • Xin Jiang. Master student. Unified Vision-Language Models for Assistive Technology. 2024. Paper Code
  • Jonas Schmitt. Master student. Global Hessian-Based Importance Pruning of Neural Networks in Combination with Knowledge Distillation. 2024. Paper Code
  • Daniel Bucher. Master student (co-supervised). Improving Robustness of 3D Semantic Segmentation with Transformer-based Fusion and Knowledge Distillation. 2023. Paper
  • Leon Kanstinger. Bachelor student. Improving Accessibility of User Interface in Mobility Assistance Systems. 2023.
  • Fei Teng. Master student (co-supervised). OAFuser: Towards Omni-Aperture Fusion for Light Field Semantic Segmentation. 2023. Paper Code
  • Zhifeng Teng. Master student. 360BEV: Panoramic Semantic Mapping for Indoor Bird's-Eye View. 2023. Paper Code Page
  • Chang Chen. Master student. Transformer-based Mapping from Egocentric Images to Top-view Semantics for Scene Understanding. 2022. Paper Code Video
  • Qing Wang. Master student. MatchFormer: Interleaving Attention in Transformers for Feature Matching. 2022. Paper Code
  • Chaoxiang Ma. Master student (co-supervised). Unsupervised Domain Adaptation for Panoramic Semantic Segmentation. 2021. Paper Code

Awards


  • KIT KHYS Research Travel Grant, 2024.
  • ICM Future Mobility Grants, 2024.
  • ICRA 2024 HRI Best Paper Finalist, 2024.
  • IFI Program Fellowship of the German Academic Exchange Service (DAAD), 2023.
  • The Best Practical Course, Teaching Award, KIT, Computer Science Faculty, 2021.
Services


  • Associate Editor: IEEE RA-L, IEEE IV 2024
  • Journal Reviewer: T-PAMI, T-RO, T-IP, IJCV, TNNLS, T-ITS, RA-L, T-IV, TCSVT, IJHCI, TGRS
  • Conference Reviewer: CVPR, ICCV, ECCV, ICLR, NeurIPS, AAAI, ACM MM, ACCV, WACV, BMVC, ICRA, IROS
  • Workshop Co-organizer: IEEE IV 2022 Workshop on Beyond Supervised Learning

Contact