Daniel Koester

Dipl.-Inform. Daniel Koester

Research Assistant, PhD Student
Assistive Technologies for the Blind
Room: 011
Phone: +49 721 608-46266
Fax: +49 721 608-45939
daniel.koester@kit.edu


An inner guidance shoreline (a building's facade) detected by a computer vision algorithm. The gap between two facades is connected by a virtual shoreline to aid in traversing the gap.
Visual Shoreline Detection for Blind and Partially Sighted People
Daniel Koester, Tobias Allgeyer and Rainer Stiefelhagen
International Conference on Computers Helping People with Special Needs (ICCHP)
Linz, Austria, July 2018
bib pdf slides
Existing navigation and guidance systems do not properly address special guidance aids, such as the widely used white cane. Therefore, we propose a novel shoreline location system that detects and tracks possible shorelines from a user's perspective in an urban scenario. Our approach uses three-dimensional scene information acquired from a stereo camera and can potentially inform a user of available shorelines as well as obstacles blocking an otherwise clear shoreline path, and thus help in shorelining. We evaluate two different algorithmic approaches on two different datasets, showing promising results. We aim to improve a user's scene understanding by providing relevant scene information and to help in the creation of a mental map of nearby guidance tasks. This can be especially helpful for reaching the next available shoreline in yet unknown locations, e.g., at an intersection or a driveway. Also, knowledge of available shorelines can be integrated into routing and guidance systems and vice versa.
An exemplary city map, where the most accessible routes are highlighted in a red heatmap style.
Mind the Gap: Virtual Shorelines for Blind and Partially Sighted People
Daniel Koester, Maximilian Awiszus and Rainer Stiefelhagen
International Conference on Computer Vision Workshop (ICCV) on Assistive Computer Vision and Robotics (ACVR)
Venice, Italy, October 2017
bib pdf poster slides
Blind and partially sighted people have encountered numerous devices to improve their mobility and orientation, yet most still rely on traditional techniques, such as the white cane or a guide dog. In this paper, we consider improving the actual orientation process through the creation of routes that are better suited towards specific needs. More precisely, this work focuses on routing for blind and partially sighted people on a shoreline-like level of detail, modeled after real-world white cane usage. Our system is able to create such fine-grained routes through the extraction of routing features from openly available geolocation data, e.g., building facades and road crossings. More importantly, the generated routes provide a measurable safety benefit, as they reduce the number of unmarked pedestrian crossings and favor more accessible alternatives. Our evaluation shows that such fine-grained routing can improve users' safety and improve their understanding of the environment lying ahead, especially the upcoming route and its impediments.
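The fine-grained routing described above can be thought of as a shortest-path search whose edge costs penalize unmarked pedestrian crossings. The following minimal sketch is purely illustrative and not the paper's implementation; the graph encoding, the penalty value, and the function name are assumptions.

```python
import heapq

# Assumed extra cost for traversing an unmarked pedestrian crossing
# (illustrative value, not from the paper).
CROSSING_PENALTY = 50.0

def safest_route(graph, start, goal):
    """Dijkstra over (length + crossing penalty) edge costs.

    graph: {node: [(neighbor, length_m, is_unmarked_crossing), ...]}
    Returns (path, total_cost).
    """
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            break
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, length, unmarked in graph[node]:
            new_cost = cost + length + (CROSSING_PENALTY if unmarked else 0.0)
            if new_cost < dist.get(nbr, float("inf")):
                dist[nbr] = new_cost
                prev[nbr] = node
                heapq.heappush(queue, (new_cost, nbr))
    # Reconstruct the path by walking the predecessor chain backwards.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

With such a cost model, a geometrically shorter route through an unmarked crossing loses to a slightly longer route over marked, more accessible crossings.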
A stixel representation (colored vertical bars) depicting depth information for a typical urban area roadscene.
Using Technology Developed for Autonomous Cars to Help Navigate Blind People
Manuel Martinez, Alina Roitberg, Daniel Koester, Boris Schauerte and Rainer Stiefelhagen
International Conference on Computer Vision Workshop (ICCV) on Assistive Computer Vision and Robotics (ACVR)
Venice, Italy, October 2017
bib pdf
Autonomous driving is currently a very active research area with virtually all automotive manufacturers competing to bring the first autonomous car to the market. This race leads to billions of dollars being invested in the development of novel sensors, processing platforms, and algorithms. In this paper, we explore the synergies between the challenges in self-driving technology and the development of navigation aids for blind people. We aim to leverage the recently emerged methods for self-driving cars and use them to develop assistive technology for the visually impaired. In particular, we focus on the task of perceiving the environment in real-time from cameras. First, we review current developments in embedded platforms for real-time computation as well as current algorithms for image processing, obstacle segmentation and classification. Then, as a proof-of-concept, we build an obstacle avoidance system for blind people that is based on a hardware platform used in the automotive industry. To perceive the environment, we adapt an implementation of the stixels algorithm, designed for self-driving cars. We discuss the challenges and modifications required for such an application domain transfer. Finally, to show its usability in practice, we conduct and evaluate a user study with six blindfolded people.
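As a rough illustration of the stixel idea (a strong simplification of the actual algorithm; the flat-world ground model, tolerance, and function name are assumptions), one can scan each column of a disparity map from the bottom up and mark where the measurements depart from the expected ground-plane disparity:

```python
import numpy as np

def column_obstacle_stixels(disparity, ground_disp, tol=1.0):
    """Simplified stixel-style free-space scan (illustrative sketch).

    disparity:   (H, W) disparity map from a stereo camera
    ground_disp: (H,) expected ground-plane disparity per image row
    Returns, per column, the first row (scanning upward from the image
    bottom) where the disparity deviates from the ground model, i.e.
    where an obstacle starts; H means the column is free.
    """
    h, w = disparity.shape
    first_obstacle = np.full(w, h, dtype=int)
    for col in range(w):
        for row in range(h - 1, -1, -1):  # bottom of image upward
            if abs(disparity[row, col] - ground_disp[row]) > tol:
                first_obstacle[col] = row
                break
    return first_obstacle
```

The real stixel algorithm is considerably more involved (it estimates the ground profile and segments stixel heights via dynamic programming), but the column-wise free-space intuition is the same.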
Aerial imagery of a roundabout with multiple zebra crossings as well as computer vision detections.
Zebra Crossing Detection from Aerial Imagery Across Countries
Daniel Koester, Björn Lunt and Rainer Stiefelhagen
International Conference on Computers Helping People with Special Needs (ICCHP)
Linz, Austria, July 2016
bib pdf slides
We propose a data-driven approach to detect zebra crossings in aerial imagery. The system automatically learns an appearance model from available geospatial data for an examined region. HOG as well as LBPH features, in combination with an SVM, yield state-of-the-art detection results on different datasets. We also use this classifier across datasets obtained from different countries, to facilitate detections without requiring any additional geospatial data for that specific region. The approach is capable of searching for further, yet uncharted, zebra crossings in the data. Information gained from this work can be used to generate new zebra crossing databases or improve existing ones, which are especially useful in navigational assistance systems for visually impaired people. We show the usefulness of the proposed approach and plan to use this research as part of a larger guidance system.
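As a rough illustration of the appearance-based pipeline (not the paper's implementation; the cell size, bin count, and descriptor layout are assumptions), a minimal HOG-style descriptor can be computed as follows. In practice, such descriptors would be combined with LBPH features and fed to an SVM classifier sliding over aerial image tiles.

```python
import numpy as np

def hog_like_descriptor(patch, cell=8, bins=9):
    """Minimal HOG-style feature: per-cell histograms of gradient
    orientations, weighted by gradient magnitude (illustrative sketch).
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation
    h, w = patch.shape
    feats = []
    # Non-overlapping cells (real HOG adds overlapping block normalization).
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            m = mag[y:y + cell, x:x + cell].ravel()
            a = ang[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)
```

The regular stripe pattern of a zebra crossing produces a distinctive orientation histogram (strong energy in one orientation bin), which is exactly the kind of regularity a linear SVM can separate well.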
A colorized image of an urban sidewalk, where the colors depict (correct/wrong) accessible area detection.
Way to Go! Detecting Open Areas Ahead of a Walking Person
Boris Schauerte, Daniel Koester, Manuel Martinez and Rainer Stiefelhagen
European Conference on Computer Vision Workshop (ECCV) on Assistive Computer Vision and Robotics (ACVR)
Zurich, Switzerland, September 2014
bib pdf
We determine the region in front of a walking person that is not blocked by obstacles. This is an important task when trying to assist visually impaired people or navigate autonomous robots in urban environments. We use conditional random fields to learn how to interpret texture and depth information in terms of accessibility. We demonstrate the effectiveness of the proposed approach on a novel dataset, which consists of urban outdoor and indoor scenes that were recorded with a handheld stereo camera.
A test course made of multiple randomly arranged chairs with yellow markers attached to their backside.
Cognitive Evaluation of Haptic and Audio Feedback in Short Range Navigation Tasks
Manuel Martinez, Angela Constantinescu, Boris Schauerte, Daniel Koester and Rainer Stiefelhagen
International Conference on Computers Helping People with Special Needs (ICCHP)
Paris, France, July 2014
bib pdf slides
Assistive navigation systems for the blind commonly use speech to convey directions to their users. However, this is problematic for short range navigation systems that need to provide fine-grained yet diligent guidance in order to avoid obstacles. For this task, we have compared haptic and audio feedback systems under the NASA-TLX protocol to analyze the additional cognitive load that they place on users. Both systems are able to guide the users through a test obstacle course. However, for white cane users, auditory feedback results in a 22 times higher cognitive load than haptic feedback. This discrepancy in cognitive load was not found for blindfolded users, thus we argue against evaluating navigation systems solely with blindfolded users.
A typical narrow sidewalk situation with a parked car and a bicycle leaning on the building's facade, colorized by accessible section detection.
Accessible Section Detection for Visual Guidance
Daniel Koester, Boris Schauerte and Rainer Stiefelhagen
IEEE Workshop on Multimodal and Alternative Perception for Visually Impaired People (MAP4VIP) In Conjunction with International Conference on Multimedia and Expo (ICME)
San Jose, USA, June 2013
bib pdf slides bvs bvs-modules
We address the problem of determining the accessible section in front of a walking person. In our definition, the accessible section is the spatial region that is not blocked by obstacles. For this purpose, we use gradients to calculate surface normals on the depth map and subsequently determine the accessible section using these surface normals. We demonstrate the effectiveness of the proposed approach on a novel, challenging dataset. The dataset consists of urban outdoor and indoor scenes that were recorded with a handheld stereo camera.
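A minimal sketch of the gradient idea, assuming a forward-facing stereo camera and a flat-world model in which disparity on the ground plane grows roughly linearly towards the bottom of the image (the expected gradient and tolerance values are illustrative assumptions, not from the paper):

```python
import numpy as np

def accessible_mask(disparity, expected_row_gradient, tol=0.2):
    """Label pixels as accessible where the vertical disparity gradient
    matches the (roughly constant) gradient expected of a ground plane.

    disparity:             (H, W) disparity map from stereo reconstruction
    expected_row_gradient: assumed disparity increase per image row on
                           the ground plane (flat-world model)
    Returns a boolean (H, W) mask; obstacles (fronto-parallel surfaces
    with near-constant disparity) fail the test and are excluded.
    """
    gy, gx = np.gradient(disparity.astype(float))
    return np.abs(gy - expected_row_gradient) < tol
```

Because this requires only a gradient computation per pixel, it is far cheaper than segmentation- or RANSAC-based ground-plane estimation, which matches the real-time goal stated above.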
An overview of the BVS framework's components: client > BVS (Control, Loader, Logsystem)-(Config, Logger, Connector) < Module.
A Guidance and Obstacle Evasion Software Framework for Visually Impaired People
Daniel Koester
Diploma Thesis
Information about the environment is desired in several applications, for example autonomous robots and support systems for visually impaired persons. As in most scenarios where a human relies on a support system, reliability is of utmost importance. This creates a high demand for performance and robustness in real-world settings. Many systems created for this purpose cannot cope with constraints such as platforms with a large amount of uncontrolled ego-motion and the need for real-time processing of information, and are thus not feasible for this specific situation.

The topic of this thesis is a novel framework to create vision based support systems for visually impaired persons. It consists of a modular, easily extendable and highly agile software system. Furthermore, a ground detection system is created to aid in mobile navigation scenarios. The system calculates the accessible section by relying on the assumption that the orientation of a given plane segment can be calculated using a stereo camera reconstruction process.

Many frameworks have been created to simplify the development process of large and complex systems and to foster collaboration among researchers. Usually, such frameworks are created for a certain purpose, for example a robotic application. In such a scenario, many elements are needed to manage the components of the robotic platform, such as motor controls. This creates dependencies on the availability of specific building blocks and induces considerable overhead if such components are not needed. Thus, the created framework imposes no restrictions on its use case by moving such functionality into modular components.

In computer vision, many features and algorithms exist to detect the ground plane. Some of these are quite costly to calculate, for example segmentation-based algorithms. Others use a RANSAC-based approach that shows problems in situations where the ground plane accounts for only a small part of the examined input data. To alleviate these problems, a simple yet robust feature is proposed, consisting of a gradient detection in the stereo reconstruction data. The gradient of a region in the disparity map correlates directly with the orientation of a surface in the real world. Since the gradient calculation is not complex, a fast and reliable computation of the accessible section becomes possible.

To evaluate the proposed ground detection system, a dataset was created. This dataset consists of 20 videos recorded with a handheld camera rig and contains a high degree of camera ego-motion to simulate a system worn by a pedestrian. The accessible section detection based on the gradient calculation shows promising results.


An exemplary image for the flowerbox dataset: a narrow passage between a concrete flowerbox and a hedge, connected to a sidewalk with parked cars next to it.
Daniel Koester
A dataset for accessible section detection and obstacle avoidance, recorded with a handheld stereo camera rig subject to strong ego-motion, 20 videos of varying length covering common urban scenes.
flowerbox.zip (2GB)
Alley: Pedestrian walkway with flowerboxes on both sides as well as some parked bicycles.
Alley (Leveled): Like Alley, but the camera is approximately leveled to the horizon.
Bicycle: Bicycle driving up front for a few seconds, a small box obstacle as well as a lamp post and a slope in the end.
Car: Navigation around a tree, afterwards along a narrow path between a low hedge and a car.
Corridor: An indoor scene of a long corridor with doors to adjacent offices.
Fence: A sidewalk with a low fencing on the left side and parked cars to the right.
Flowerbox: Navigation between low flowerboxes with tall coppice and some parked cars.
Hedge: A sidewalk with a tall hedge along one side, some poles at start and end.
Ladder: A ladder-like sculpture with wide horizontal beams, used as an edge case.
Narrow: A narrow sidewalk between parked cars and bicycles as well as lamp posts.
Pan: Horizontal pan of a parking area with flowerboxes and parked car.
Passage: Passageway containing some flowerboxes, a few parked bicycles and a door.
Railing: A railing blocking the path and a post along the sidewalk.
Ramp: A large ramp towards the street with three poles on the side of the road.
Ridge: A dead end on a narrow walkway between two steep slopes with a wall at the end.
Sidewalk: A typical sidewalk scene, some parked bicycles and cars, a person walking towards the camera.
Sidewalk 2: The continuation of the Sidewalk video, similar situation, bicycles parked on both sides.
Sidewalk (Leveled): Similar situation as in Sidewalk, with the camera leveled at horizon.
Sign: A tall sign at the side of the pavement, used as an edge case.
Street: A walk on the street between parked cars and a passing cyclist.

Hiwi Jobs, Bachelor/Master Theses

  • Navigation Systems for the Visually Impaired [pdf]


Projects and Awards

  • 2016/07-2019/06 TERRAIN: “Independent Mobility of Blind and Visually Impaired People in Urban Environments through Audio-Tactile Navigation” (KIT News DE|EN)
  • 2016/07 Best Lab Course Award for the “Computer-Vision for Human-Computer Interaction” lab course (Praktikum) in the 2015 summer semester (KIT News)
  • 2014/02-2017/01 “AVVIS: Artificial Vision for Assisting Visually Impaired in Social Interaction”
  • 2013/08 Google Research Award: “A Mobility and Navigational Aid for Visually Impaired Persons” (KIT News)

Supervised Theses

  • 2018 “Development of an Assistance System for Safely Crossing Street Intersections for People with Visual Impairments” (MA)
  • 2018 “Depth-Based Shoreline Detection in Urban Environments for People with Visual Impairments” (MA)
  • 2017 “Virtual Shoreline Generation for People with Visual Impairments in Urban Environments” (BA)
  • 2016 “Zebra-Crossing Detection for the Visually Impaired” (DA)
  • 2015 “Detector Evaluation for Pedestrian Crosswalk Guidance Systems” (MA)
  • 2014 “Barcode Detection Using the Modified Census Transform” (SA)
  • 2014 “Extension of a Computer Vision Framework for Android Development” (SA)