THESIS

Computer Vision for Advanced Driver Assistance Systems
W.P. Sanberg
Eindhoven University of Technology
29 October 2020

T-IV 2020

ASTEROIDS: A Stixel Tracking Extrapolation-based Relevant Obstacle Impact Detection System
W.P. Sanberg, G. Dubbelman, P.H.N. de With
IEEE Transactions on Intelligent Vehicles
pp. 34-46; 4 May 2020

Abstract
This paper presents a vision-based collision-warning system for ADAS in intelligent vehicles, with a focus on urban scenarios. In most current systems, collision warnings are based on radar or on monocular vision using pattern recognition. Since detecting collisions is a core functionality of intelligent vehicles, redundancy is essential, which is why we explore the use of stereo vision. First, our approach is generic and class-agnostic, since it can detect general obstacles that are on a collision course with the ego-vehicle without relying on semantic information. The framework estimates disparity and flow from a stereo video stream and calculates stixels. The second contribution is the new asteroids concept, which follows as a consecutive step: particles are sampled based on a probabilistic uncertainty analysis of the measurement process to model potential collisions. Third, this is all enclosed in a Bayesian histogram filter around a newly introduced time-to-collision versus angle-of-impact state space. The evaluation shows that the system correctly avoids any false warnings on the real-world KITTI dataset, detects all collisions in a newly simulated dataset when the obstacle is higher than 0.4 m, and performs excellently on our new qualitative real-world data with near-collisions, both in daytime and nighttime conditions.
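
To make the filtering step concrete, here is a minimal Python sketch of a Bayesian histogram filter over a time-to-collision versus angle-of-impact grid. All bin ranges, thresholds, and the particle distribution are illustrative assumptions, not the values or implementation from the paper, and the prediction step of the filter is omitted.

    import numpy as np

    # Discretized state space: time-to-collision (s) x angle-of-impact (deg).
    TTC_BINS = np.linspace(0.0, 5.0, 26)       # assumed range and resolution
    ANGLE_BINS = np.linspace(-90.0, 90.0, 37)

    def particle_histogram(ttc, angle):
        # Bin the sampled asteroid particles into the (TTC, angle) grid.
        hist, _, _ = np.histogram2d(ttc, angle, bins=(TTC_BINS, ANGLE_BINS))
        total = hist.sum()
        return hist / total if total > 0 else hist

    def bayes_update(prior, likelihood, eps=1e-9):
        # Element-wise Bayesian histogram-filter update with renormalization.
        posterior = prior * (likelihood + eps)
        return posterior / posterior.sum()

    # One filter step on simulated measurements (prediction step omitted):
    rng = np.random.default_rng(0)
    ttc = rng.normal(2.0, 0.3, 500)            # particles cluster near TTC = 2 s
    angle = rng.normal(5.0, 4.0, 500)          # impact slightly right of center

    belief = np.ones((len(TTC_BINS) - 1, len(ANGLE_BINS) - 1))
    belief /= belief.sum()                     # uniform prior
    belief = bayes_update(belief, particle_histogram(ttc, angle))

    # Warn when enough probability mass sits below an (assumed) TTC threshold.
    if belief[TTC_BINS[:-1] < 1.5].sum() > 0.5:
        print("collision warning")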

EI-AVM 2019

From Stixels To Asteroids: Towards a Collision Warning System using Stereo Vision
W.P. Sanberg, G. Dubbelman, P.H.N. de With
IS&T EI - Autonomous Vehicles and Machines 2019
pp. 34-1; 14 January 2019; San Francisco, USA

[granted BEST PAPER AWARD]

Abstract
This paper explores the use of stixels in a probabilistic stereo vision-based collision-warning system that can be part of an ADAS for intelligent vehicles. In most current systems, collision warnings are based on radar or on monocular vision using pattern recognition (with ultrasound for park assist). Since detecting collisions is such a core functionality of intelligent vehicles, redundancy is key. Therefore, we explore the use of stereo vision for reliable collision prediction. Our algorithm consists of a Bayesian histogram filter that provides the probability of collision for multiple interception regions and angles towards the vehicle, which can additionally be fused with other sources of information in larger systems. Our algorithm builds upon the disparity Stixel World that has been developed for efficient automotive vision applications. Combined with image flow and uncertainty modeling, our system samples and propagates asteroids: dynamic particles that can be utilized for collision prediction. In its best setting, our stand-alone system detects all 31 simulated collisions at the cost of 2 false warnings, while the same setting generates 12 false warnings on the real-world data.
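
The asteroid sampling itself can be illustrated in a few lines of Python. The Gaussian uncertainty model, the vehicle width, and the definition of the angle-of-impact below are simplifying assumptions for illustration; the paper derives the uncertainties from the actual stixel and flow measurement process.

    import numpy as np

    rng = np.random.default_rng(1)

    def sample_asteroids(pos, vel, pos_sigma, vel_sigma, n=2000):
        # Spawn particles around one measured stixel state (assumed Gaussian).
        p = rng.normal(pos, pos_sigma, (n, 2))  # (x, z) ground-plane position, m
        v = rng.normal(vel, vel_sigma, (n, 2))  # (vx, vz) ego-relative velocity, m/s
        return p, v

    # Hypothetical obstacle: 10 m ahead, 0.5 m right of center, closing at 5 m/s.
    p, v = sample_asteroids(pos=(0.5, 10.0), vel=(0.0, -5.0),
                            pos_sigma=(0.2, 0.5), vel_sigma=(0.3, 0.5))

    # Constant-velocity extrapolation to the ego plane z = 0.
    approaching = v[:, 1] < 0
    ttc = -p[approaching, 1] / v[approaching, 1]        # time-to-collision, s
    x_impact = p[approaching, 0] + v[approaching, 0] * ttc
    angle = np.degrees(np.arctan2(v[approaching, 0], -v[approaching, 1]))

    hits = np.abs(x_impact) < 0.9                       # assumed 1.8 m vehicle width
    print(f"P(collision) ~ {hits.sum() / len(p):.2f}, "
          f"median TTC {np.median(ttc[hits]):.2f} s")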

VISAPP 2018

Fast 3D Scene Alignment with Stereo Images using a Stixel-based 3D Model
D.W.J.M. van de Wouw, W.P. Sanberg, G. Dubbelman, P.H.N. de With
Int. Conf. on Computer Vision Theory and Applications (VISAPP)
pp. 250-259; 27-29 January 2018; Funchal, Portugal

Abstract
Scene alignment for images recorded from different viewpoints is a challenging task, especially in the presence of strong parallax effects. This work proposes a diorama-box model for a 2.5D hierarchical alignment approach, which is specifically designed for image registration from a moving vehicle using a stereo camera. For this purpose, the Stixel World algorithm is used to partition the scene into super-pixels, which are transformed to 3D. The model is further refined by assigning a slanted orientation to each stixel and by interpolating between stixels, to prevent gaps in the 3D model. The resulting alignment shows promising results: under normal viewing conditions, more than 96% of all annotated points are registered with an alignment error of at most 5 pixels at a resolution of 1920 × 1440 pixels, executing at near-real-time performance (4 fps) for the intended application.
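
As an illustration of the stixel-to-3D step, the sketch below lifts a single stixel to a fronto-parallel 3D quad with a pinhole stereo model. The intrinsics, baseline, and stixel tuple layout are assumed values for illustration, not the calibration used in the paper.

    import numpy as np

    # Assumed pinhole-stereo parameters (not the paper's calibration).
    FX = FY = 1200.0            # focal length, px
    CX, CY = 960.0, 720.0       # principal point for a 1920 x 1440 image, px
    BASELINE = 0.25             # stereo baseline, m

    def stixel_to_quad(u_left, u_right, v_top, v_bottom, disparity):
        # One stixel = an image-column segment with a single disparity;
        # its depth follows from Z = f * B / d.
        z = FX * BASELINE / disparity

        def backproject(u, v):
            return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

        return [backproject(u_left, v_top), backproject(u_right, v_top),
                backproject(u_right, v_bottom), backproject(u_left, v_bottom)]

    # A slanted variant would assign different disparities to the top and bottom
    # edges; interpolating between neighboring stixels closes gaps in the model.
    for corner in stixel_to_quad(900, 910, 500, 900, disparity=30.0):
        print(np.round(corner, 2))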

EI-AVM 2017

Free-Space Detection with Self-Supervised and Online Trained Fully Convolutional Networks
W.P. Sanberg, G. Dubbelman, P.H.N. de With
IS&T EI - Autonomous Vehicles and Machines 2017 (or find it on arXiv)
pp. 54-61; 31 January 2017; San Francisco, USA

Abstract
Recently, vision-based Advanced Driver Assist Systems have gained broad interest. In this work, we investigate free-space detection, for which we propose to employ a Fully Convolutional Network (FCN). We show that this FCN can be trained in a self-supervised manner and achieve results similar to training on manually annotated data, thereby reducing the need for large manually annotated training sets. To this end, our self-supervised training relies on a stereo-vision disparity system to automatically generate (weak) training labels for the color-based FCN. Additionally, our self-supervised training facilitates online training of the FCN instead of offline training. Consequently, given that the applied FCN is relatively small, the free-space analysis becomes highly adaptive to any traffic scene that the vehicle encounters. We have validated our algorithm using publicly available data and a new challenging benchmark dataset that is released with this paper. Experiments show that the online training boosts performance by 5% compared to offline training, both in terms of the maximum F-measure (Fmax) and the average precision (AP).
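
The self-supervised training loop can be sketched as follows. The tiny network, the linear ground-plane label rule, and all parameters are stand-ins chosen for illustration; the paper's FCN and stereo pipeline are more elaborate.

    import torch
    import torch.nn as nn

    def weak_labels_from_disparity(disparity, v_coords, slope=0.05, tol=2.0):
        # Illustrative label rule: pixels whose disparity fits a linear
        # ground-plane model d(v) = slope * v are free space (1), pixels
        # clearly above it are obstacles (0), the rest is ignored (-1).
        expected = slope * v_coords
        labels = torch.full_like(disparity, -1.0)
        labels[(disparity - expected).abs() < tol] = 1.0
        labels[disparity > expected + tol] = 0.0
        return labels

    # A deliberately tiny FCN as a stand-in for the paper's network.
    fcn = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 1, 3, padding=1))
    opt = torch.optim.Adam(fcn.parameters(), lr=1e-3)

    # One online step on a synthetic frame: disparity supervises color.
    frame = torch.rand(1, 3, 64, 64)
    v = torch.arange(64.0).view(1, 1, 64, 1).expand(1, 1, 64, 64)
    disparity = 0.05 * v + 0.5 * torch.randn(1, 1, 64, 64)
    labels = weak_labels_from_disparity(disparity, v)

    logits = fcn(frame)
    mask = labels >= 0                                  # skip unlabeled pixels
    loss = nn.functional.binary_cross_entropy_with_logits(logits[mask], labels[mask])
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"online-update loss: {loss.item():.3f}")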

PPNIV 2015

Free-Space Detection using Online Disparity-supervised Color Modeling 
W.P. Sanberg, G. Dubbelman, P.H.N. de With
7th IROS Workshop on Planning, Perception and Navigation for Intelligent Vehicles (IROS-PPNIV)
pp. 105-110; 28 September 2015; Hamburg, Germany

Abstract
This work contributes to vision processing for intelligent vehicle applications with an emphasis on Advanced Driver Assistance Systems (ADAS). A key issue for ADAS is the robust and efficient detection of free drivable space in front of the vehicle. To this end, we propose a stixel-based probabilistic color-segmentation algorithm to distinguish the ground surface from obstacles in traffic scenes. Our system learns color appearance models for the free-space and obstacle classes in an online and self-supervised fashion. For this, it applies a disparity-based segmentation, which can run in the background of the critical system path and at a lower frame rate than the color-based algorithm. This strategy removes the need for a real-time disparity estimate, so that the current road scene can be analyzed without the extra latency of disparity estimation. This translates into a reduced response time from data acquisition to data analysis, which is a critical property for high-speed ADAS. Our evaluation of different color modeling strategies on publicly available data shows that the color-based analysis can achieve similar (77.6% vs. 77.3% correct) or even better results (4.3% less missed obstacle area) in difficult imaging conditions, compared to a state-of-the-art disparity-only method.
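
A rough sketch of the color-modeling idea: normalized hue-saturation histograms act as class appearance models, trained on pixels labeled by the (older) disparity-based segmentation, after which the newest frame is classified with color only. The histogram resolution, the smoothing, the prior, and the synthetic pixel data are all illustrative assumptions.

    import numpy as np

    BINS = 32  # assumed hue-saturation histogram resolution

    def fit_color_model(hs_pixels):
        # Normalized 2D hue-saturation histogram with Laplace smoothing.
        hist, _, _ = np.histogram2d(hs_pixels[:, 0], hs_pixels[:, 1],
                                    bins=BINS, range=[[0, 1], [0, 1]])
        return (hist + 1.0) / (hist.sum() + BINS * BINS)

    def classify(hs_image, model_free, model_obst, prior_free=0.5):
        # Per-pixel posterior probability of free space from the two models.
        idx = np.clip((hs_image * BINS).astype(int), 0, BINS - 1)
        p_free = model_free[idx[..., 0], idx[..., 1]] * prior_free
        p_obst = model_obst[idx[..., 0], idx[..., 1]] * (1.0 - prior_free)
        return p_free / (p_free + p_obst)

    # The disparity segmentation of an *older* frame supplies training pixels;
    # the newest frame is then analyzed with color only (synthetic stand-ins).
    rng = np.random.default_rng(2)
    model_free = fit_color_model(rng.beta(2, 5, (5000, 2)))
    model_obst = fit_color_model(rng.beta(5, 2, (5000, 2)))

    current_frame_hs = rng.uniform(0, 1, (120, 160, 2))
    posterior = classify(current_frame_hs, model_free, model_obst)
    print("free-space fraction:", (posterior > 0.5).mean())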

ITSC 2015

Color-based Free-Space Segmentation using Online Disparity-supervised Learning 
W.P. Sanberg, G. Dubbelman, P.H.N. de With
IEEE Int. Conference on Intelligent Transportation Systems (ITSC)
pp. 906-912; 15-18 September 2015; Las Palmas, Canary Islands, Spain

Abstract
This work contributes to vision processing for Advanced Driver Assist Systems (ADAS) and intelligent vehicle applications. We propose a color-only stixel segmentation framework that segments traffic scenes into free, drivable space and obstacles, with a reduced latency that improves the real-time processing capabilities. Our system learns color appearance models for the free-space and obstacle classes in an online and self-supervised fashion. To this end, it applies a disparity-based segmentation, which can run in the background of the critical system path, either with a time delay of several frames or at a frame rate that is only a third of that of the color-based algorithm. In parallel, the most recent video frame is analyzed solely with these learned color appearance models, without an actual disparity estimate and the corresponding latency. This translates into a reduced response time from data acquisition to data analysis, which is a critical property for high-speed ADAS. Our evaluation on two publicly available datasets, one of which we introduce as part of this work, shows that the color-only analysis can achieve similar or even better results in difficult imaging conditions, compared to the disparity-only method. Our system improves the quality of the free-space analysis while simultaneously lowering both the latency and the computational load.
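
The scheduling idea behind the reduced latency can be summarized in a hypothetical frame loop; all function names below are placeholder stubs for illustration, not the paper's actual processing steps.

    from collections import deque

    # Placeholder stubs standing in for the real processing steps:
    def video_stream(n=9):
        for t in range(n):
            yield f"frame-{t}"

    def disparity_segmentation(frame):
        return f"labels({frame})"

    def update_color_models(frame, labels, previous):
        return f"models@{frame}"

    def color_stixel_segmentation(frame, models):
        return f"free-space({frame} | {models})"

    DISPARITY_STRIDE = 3                        # disparity at 1/3 the frame rate
    history = deque(maxlen=DISPARITY_STRIDE)    # disparity may lag a few frames
    models = None

    for t, frame in enumerate(video_stream()):
        history.append(frame)
        if t % DISPARITY_STRIDE == 0:
            # Slow path, outside the critical latency path: self-supervision
            # from disparity on an older frame refreshes the color models.
            labels = disparity_segmentation(history[0])
            models = update_color_models(history[0], labels, models)
        # Fast path: the newest frame is segmented with color only.
        print(color_stixel_segmentation(frame, models))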

ITSC 2014

Extending the Stixel World with Online Self-Supervised Color Modeling for Road-Versus-Obstacle Segmentation 
W.P. Sanberg, G. Dubbelman, P.H.N. de With
IEEE Int. Conference on Intelligent Transportation Systems (ITSC)
pp. 1400-1407; 8-11 October 2014; Qingdao, China

Abstract
This work concentrates on vision processing for ADAS and intelligent vehicle applications. We propose a color extension to the disparity-based Stixel World method, so that the road can be robustly distinguished from obstacles even in the presence of erroneous disparity measurements. Our extension learns color appearance models for the road and obstacle classes in an online and self-supervised fashion. The algorithm is tightly integrated within the core of the optimization process of the original Stixel World, allowing for strong fusion of the disparity and color signals. We perform an extensive evaluation, covering different self-supervised learning strategies and different color models. Our newly recorded, publicly available dataset is intentionally focused on challenging traffic scenes with many low-texture regions, which cause numerous disparity artifacts. In this evaluation, we increase the F-score of the drivable distance from 0.86 to 0.97, compared to a tuned version of the state-of-the-art baseline method. This clearly shows that our color extension increases the robustness of the Stixel World, by reducing the number of falsely detected obstacles while not deteriorating the detection of true obstacles.
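
The fusion inside the stixel optimization can be sketched as a combined per-pixel data term. The cost shapes and the mixing weight below are assumptions for illustration, not the paper's exact formulation.

    import numpy as np

    def disparity_cost(d_measured, d_expected, sigma=1.0):
        # Quadratic disparity data term for one pixel and one class hypothesis.
        return ((d_measured - d_expected) / sigma) ** 2

    def color_cost(p_class_given_color, eps=1e-6):
        # Negative log-likelihood under the online-learned color model.
        return -np.log(p_class_given_color + eps)

    def fused_cost(d_measured, d_expected, p_color, w_color=0.3):
        # Per-pixel data term inside the stixel optimization: disparity and
        # the self-supervised color model are fused before the dynamic
        # programming sums segment costs.
        return ((1.0 - w_color) * disparity_cost(d_measured, d_expected)
                + w_color * color_cost(p_color))

    # Road hypothesis: disparity fits well, but the color model disagrees.
    print(fused_cost(d_measured=10.2, d_expected=10.0, p_color=0.1))
    # Obstacle hypothesis: disparity is noisy, but the color strongly agrees.
    print(fused_cost(d_measured=10.2, d_expected=14.0, p_color=0.9))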

SITB 2014

Online Self-supervised Learning for Road Detection
M.L.L. Rompen, W.P. Sanberg, G. Dubbelman, P.H.N. de With
WIC/IEEE SP Symposium on Information Theory and Signal Processing in the Benelux
pp. 148-155; 12-13 May 2014; Eindhoven, the Netherlands

Abstract
We present a computer vision system for intelligent vehicles that distinguishes obstacles from roads by means of online, self-supervised learning. It uses geometric information, derived from stereo-based obstacle detection, to obtain weak training labels for an SVM classifier. Subsequently, the SVM improves the road detection result by classifying image regions on the basis of appearance information. In this work, we experimentally evaluate different image features to model road and obstacle appearances. It is shown that combining geometric information with Hue-Saturation appearance information improves road detection.
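
A toy version of this pipeline, using scikit-learn and synthetic data in place of the stereo-derived labels (all feature values and parameters are illustrative assumptions):

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)

    # Synthetic stand-ins for weakly labeled regions: rows are image regions
    # described by Hue-Saturation features, labels come from stereo geometry.
    road_hs = rng.normal([0.10, 0.20], 0.05, (300, 2))   # gray-ish asphalt
    obstacle_hs = rng.normal([0.55, 0.60], 0.10, (300, 2))
    X = np.vstack([road_hs, obstacle_hs])
    y = np.array([0] * 300 + [1] * 300)                  # 0 = road, 1 = obstacle

    # Online step: (re)train the SVM on the weak geometric labels, then refine
    # the road/obstacle decision from appearance alone.
    clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
    print(clf.predict([[0.12, 0.22], [0.50, 0.65]]))     # -> [0 1]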

ACIVS 2013

Flexible multi-modal graph-based segmentation
W.P. Sanberg, L.Q. Do, P.H.N. de With
Advanced Concepts for Intelligent Vision Systems (ACIVS)
Lecture Notes in Computer Science, Vol. 8192
pp. 492-503; 28-31 October 2013; Poznan, Poland

Abstract
This paper aims at improving the well-known local-variation segmentation method by adding extra signal modalities and specific processing steps. As a key contribution, we extend the uni-modal segmentation method to perform multi-modal analysis, such that any number of available signal modalities can be incorporated in a very flexible way. We have found that using a combined weight of luminance and depth values improves the segmentation score by 6.8% on a large and challenging multi-modal dataset. Secondly, we have developed an improved uni-modal texture-segmentation algorithm. This improvement relies on a careful choice of the color space and on additional pre- and post-processing steps, by which we have increased the segmentation score on a challenging texture dataset by 2.1%. This gain is largely preserved when using a different dataset with worse lighting conditions and different scene types.
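
The multi-modal extension essentially replaces the scalar edge weight of graph-based segmentation with a weighted combination over modalities; a minimal sketch, with an assumed weighting:

    import numpy as np

    def edge_weight(pixel_a, pixel_b, weights=(0.7, 0.3)):
        # Multi-modal edge weight: each modality (here luminance and depth)
        # contributes its own absolute difference, combined with per-modality
        # weights; any number of modalities can be appended the same way.
        diffs = np.abs(np.asarray(pixel_a) - np.asarray(pixel_b))
        return float(np.dot(weights, diffs))

    # pixel = (luminance, depth): a depth discontinuity separates the pixels
    # even when their luminance is nearly identical.
    print(edge_weight((0.80, 2.0), (0.82, 2.1)))   # same surface: small weight
    print(edge_weight((0.80, 2.0), (0.82, 6.0)))   # depth edge: large weight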