Cesar Cadena, Luca Carlone, Henry Carrillo, Yasir Latif, Davide Scaramuzza, Jose Neira, Ian Reid, John J Leonard, Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age, IEEE Transactions on Robotics, Vol. 32 (6), 2016. (Journal Article)
Simultaneous localization and mapping (SLAM) consists in the concurrent construction of a model of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM and consider future directions. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?

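For reference, the "de-facto standard formulation" mentioned above casts SLAM as maximum a posteriori (MAP) estimation over a factor graph. A generic statement of it (our summary, not a quotation from the paper), assuming Gaussian measurement noise, is

    \mathcal{X}^\star = \arg\max_{\mathcal{X}} \, p(\mathcal{X} \mid \mathcal{Z})
                      = \arg\min_{\mathcal{X}} \, \sum_{k} \lVert h_k(\mathcal{X}_k) - \mathbf{z}_k \rVert^2_{\boldsymbol{\Sigma}_k}

where \mathcal{X} collects the robot poses and map variables, each measurement \mathbf{z}_k with covariance \boldsymbol{\Sigma}_k depends on a subset \mathcal{X}_k of the variables through a model h_k, and \lVert \cdot \rVert_{\boldsymbol{\Sigma}} denotes the Mahalanobis norm.
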
Christian Forster, Luca Carlone, Frank Dellaert, Davide Scaramuzza, On-Manifold Preintegration for Real-Time Visual-Inertial Odometry, IEEE Transactions on Robotics, 2016. (Journal Article)
Current approaches for visual-inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inertial measurements come at high rate, hence leading to fast growth of the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a-posteriori bias correction in analytic form. The second contribution is to show that the preintegrated IMU model can be seamlessly integrated into a visual-inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3D points, further accelerating the computation. We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modelling effort leads to accurate state estimation in real-time, outperforming state-of-the-art approaches.

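To make the core idea concrete: between two keyframes, many high-rate gyroscope samples can be composed into a single relative-rotation constraint using the SO(3) exponential map. Below is a minimal Python sketch of this step (function names are ours; it is illustrative only, since the paper also preintegrates velocity and position and derives the noise propagation and bias-correction Jacobians in analytic form):

    import numpy as np

    def skew(w):
        """3x3 skew-symmetric matrix of a 3-vector."""
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    def so3_exp(w):
        """Exponential map from a rotation vector to a rotation matrix (Rodrigues)."""
        theta = np.linalg.norm(w)
        S = skew(w)
        if theta < 1e-8:
            return np.eye(3) + S                        # first-order approximation
        return (np.eye(3) + np.sin(theta) / theta * S
                + (1.0 - np.cos(theta)) / theta**2 * S @ S)

    def preintegrate_rotation(gyro_samples, bias, dt):
        """Compose bias-corrected gyro samples into one relative rotation."""
        delta_R = np.eye(3)
        for w in gyro_samples:
            delta_R = delta_R @ so3_exp((np.asarray(w) - bias) * dt)
        return delta_R

The point of preintegration is that delta_R is computed once per keyframe pair and reused across optimization iterations, with a first-order correction when the bias estimate changes, instead of re-integrating the raw measurements each time.
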
Roman Käslin, Peter Fankhauser, Elena Stumm, Zachary Taylor, Elias Müggler, Jeffrey Delmerico, Davide Scaramuzza, Roland Siegwart, Marco Hutter, Collaborative localization of aerial and ground robots through elevation maps, In: International Symposium on Safety, Security, and Rescue Robotics (SSRR), IEEE, Lausanne, 2016-10-23. (Conference or Workshop Paper published in Proceedings)
Collaboration between aerial and ground robots can benefit from exploiting the complementary capabilities of each system, thereby improving situational awareness and environment interaction. For this purpose, we present a localization method that allows the ground robot to determine and track its position within a map acquired by a flying robot. To maintain invariance with respect to differing sensor choices and viewpoints, the method utilizes elevation maps built independently by each robot’s onboard sensors. The elevation maps are then used for global localization: specifically, we find the relative position and orientation of the ground robot using the aerial map as a reference. Our work compares four different similarity measures for computing the congruence of elevation maps (akin to dense, image-based template matching) and evaluates their merit. Furthermore, a particle filter is implemented for each similarity measure to track multiple location hypotheses and to use the robot motion to converge to a unique solution. This allows the ground robot to make use of the extended coverage of the map from the flying robot. The presented method is demonstrated through the collaboration of a quadrotor equipped with a downward-facing monocular camera and a walking robot equipped with a rotating laser range scanner.

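To illustrate the kind of similarity computation involved, here is a translation-only normalized cross-correlation search in Python (names and simplifications are ours: the paper compares four similarity measures, also searches over orientation, handles unobserved cells, and maintains multiple hypotheses with the particle filter):

    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two equally sized elevation patches."""
        av, bv = a - a.mean(), b - b.mean()
        denom = np.sqrt((av ** 2).sum() * (bv ** 2).sum())
        return (av * bv).sum() / denom if denom > 0 else 0.0

    def best_match(aerial_map, ground_patch, stride=1):
        """Exhaustively slide the ground patch over the aerial elevation map."""
        H, W = ground_patch.shape
        best_score, best_xy = -1.0, None
        for y in range(0, aerial_map.shape[0] - H + 1, stride):
            for x in range(0, aerial_map.shape[1] - W + 1, stride):
                s = ncc(aerial_map[y:y + H, x:x + W], ground_patch)
                if s > best_score:
                    best_score, best_xy = s, (x, y)
        return best_xy, best_score
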
Beat Kueng, Elias Müggler, Guillermo Gallego, Davide Scaramuzza, Low-latency visual odometry using event-based feature tracks, In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Institute of Electrical and Electronics Engineers (IEEE), Daejeon, Korea, 2016-10-09. (Conference or Workshop Paper published in Proceedings)
New vision sensors, such as the Dynamic and Active-pixel Vision sensor (DAVIS), incorporate a conventional camera and an event-based sensor in the same pixel array. These sensors have great potential for robotics because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called “events”) and synchronous grayscale frames. In this paper, we present a low-latency visual odometry algorithm for the DAVIS sensor using event-based feature tracks. Features are first detected in the grayscale frames and then tracked asynchronously using the stream of events. The features are then fed to an event-based visual odometry algorithm that tightly interleaves robust pose optimization and probabilistic mapping. We show that our method successfully tracks the 6-DOF motion of the sensor in natural scenes. This is the first work on event-based visual odometry with the DAVIS sensor using feature tracks.

Jeffrey Delmerico, Alessandro Giusti, Elias Müggler, Luca Gambardella, Davide Scaramuzza, "On-the-spot Training" for Terrain Classification in Autonomous Air-Ground Collaborative Teams, In: International Symposium on Experimental Robotics, Springer, New York, 2016-10-03. (Conference or Workshop Paper published in Proceedings)
We consider the problem of performing rapid training of a terrain classifier in the context of a collaborative robotic search and rescue system. Our system uses a vision-based flying robot to guide a ground robot through unknown terrain to a goal location by building a map of terrain class and elevation. However, due to the unknown environments present in search and rescue scenarios, our system requires a terrain classifier that can be trained and deployed quickly, based on data collected on the spot. We investigate the relationship of training set size and complexity on training time and accuracy, for both feature-based and convolutional neural network classifiers in this scenario. Our goal is to minimize the deployment time of the classifier in our terrain mapping system within acceptable classification accuracy tolerances. We are therefore not concerned with training a classifier that generalizes well, only one that works well in the particular environment at hand. We demonstrate that we can launch our aerial robot, gather data, train a classifier, and begin building a terrain map after only 60 seconds of flight.

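The "train fast, on the spot" requirement can be made concrete with a wall-clock-budgeted training loop. The sketch below (names are ours; scikit-learn is an assumed dependency) uses a simple linear model as a stand-in for the feature-based and CNN classifiers the paper actually compares:

    import time
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    def train_under_budget(X, y, classes, budget_s=30.0, batch=256):
        """Incrementally fit a classifier until the time budget expires."""
        clf = SGDClassifier(loss="log_loss")   # use loss="log" on scikit-learn < 1.1
        rng = np.random.default_rng(0)
        start = time.time()
        while time.time() - start < budget_s:
            idx = rng.integers(0, len(X), size=batch)
            clf.partial_fit(X[idx], y[idx], classes=classes)
        return clf
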
Henri Rebecq, Guillermo Gallego, Davide Scaramuzza, EMVS: Event-based Multi-View Stereo, In: British Machine Vision Conference (BMVC), BMVA Press, York, UK, 2016-09-19. (Conference or Workshop Paper published in Proceedings)
Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so that a paradigm shift is needed. We introduce the problem of Event-based Multi-View Stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with known trajectory. Our EMVS solution elegantly exploits two inherent properties of an event camera: (i) its ability to respond to scene edges—which naturally provide semi-dense geometric information without any pre-processing operation—and (ii) the fact that it provides continuous measurements as the sensor moves. Despite its simplicity (it can be implemented in a few lines of code), our algorithm is able to produce accurate, semi-dense depth maps. We successfully validate our method on both synthetic and real data. Our method is computationally very efficient and runs in real-time on a CPU.

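The remark that the method "can be implemented in a few lines of code" refers to its space-sweep structure: every event back-projects to a ray, rays are accumulated in a depth-sliced volume (a disparity space image, DSI), and local maxima of the ray density yield a semi-dense depth map. A stripped-down sketch (names are ours; reference view fixed at the identity pose, ideal pinhole intrinsics K, one interpolated pose per event):

    import numpy as np

    def accumulate_dsi(events, poses, K, depths, shape):
        """Vote each back-projected event ray into a reference-view DSI."""
        K_inv = np.linalg.inv(K)
        H, W = shape
        dsi = np.zeros((len(depths), H, W))
        for (u, v), (R, t) in zip(events, poses):    # (R, t): event camera -> reference
            o = t                                    # ray origin in the reference frame
            r = R @ (K_inv @ np.array([u, v, 1.0]))  # ray direction
            for k, d in enumerate(depths):           # intersect ray with plane Z = d
                if abs(r[2]) < 1e-9:
                    continue
                lam = (d - o[2]) / r[2]
                if lam <= 0:
                    continue
                p = K @ (o + lam * r)
                x, y = int(round(p[0] / p[2])), int(round(p[1] / p[2]))
                if 0 <= x < W and 0 <= y < H:
                    dsi[k, y, x] += 1.0
        return dsi    # local maxima along the depth axis give semi-dense depth
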
Mathis Kappeler, Place Recognition and Loop Closure for 3D Mapping, University of Zurich, Faculty of Business, Economics and Informatics, 2016. (Master's Thesis)
Due to drift, visual odometry systems do not provide a globally consistent map. SVO, the visual odometry system developed by the Robotics and Perception Group (RPG), is designed to use few computational resources and provides a locally consistent map. A so-called SLAM (simultaneous localization and mapping) system provides a globally consistent map, which is needed for tasks that require metric precision over extended distances. The goal of this thesis is to lay the foundation needed to turn SVO into an online SLAM system while preserving SVO's advantages. We managed to turn SVO into an offline SLAM system with online potential by implementing place recognition and loop closure that produce bundle-adjustment constraints. We used the bag-of-words method to perform the place recognition. Furthermore, we evaluated numerous parameters and methods for a future online implementation.

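For context, bag-of-words place recognition reduces each image to a histogram over a visual vocabulary and compares histograms. Production systems (e.g., the DBoW family commonly paired with SVO-style pipelines) use hierarchical vocabularies and binary descriptors; a flat sketch with our own names conveys the idea:

    import numpy as np

    def bow_vector(descriptors, vocabulary):
        """Histogram of local descriptors quantized against a visual vocabulary
        (rows of `vocabulary` are cluster centers, e.g. from k-means)."""
        v = np.zeros(len(vocabulary))
        for d in descriptors:
            v[np.argmin(np.linalg.norm(vocabulary - d, axis=1))] += 1.0
        return v / max(v.sum(), 1.0)

    def similarity(v1, v2):
        """Cosine similarity between two normalized bag-of-words vectors."""
        n = np.linalg.norm(v1) * np.linalg.norm(v2)
        return float(v1 @ v2 / n) if n > 0 else 0.0

Frames whose similarity to the current frame exceeds a threshold become loop-closure candidates and are geometrically verified before bundle-adjustment constraints are added.
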
Kristin Schläpfer, Optisizer: Improving Accuracy of Body Size Measurements, University of Zurich, Faculty of Business, Economics and Informatics, 2016. (Bachelor's Thesis)
This bachelor thesis follows up on the bachelor thesis of Michael Spring, Optisizer: Using Apriltag to Measure Body Size, Robotics and Perception Group, University of Zürich, November 2015. The goal of this thesis is to test the Optisizer application that Michael Spring implemented and to optimise its performance and functionality, so that Optisizer can be used as a measurement tool for children in a real-world pre-hospital and clinical setting with the highest possible efficiency and accuracy and without safety concerns.

Christian Brändli, Jonas Strubel, Susanne Keller, Davide Scaramuzza, Tobi Delbruck, ELiSeD - an event-based line segment detector, In: International Conference on Event-Based Control, Communication and Signal Processing (EBCCSP), Krakow, Poland, 2016-06-13. (Conference or Workshop Paper published in Proceedings)
Event-based temporal contrast vision sensors such as the Dynamic Vision Sensor (DVS) have advantages such as high dynamic range, low latency, and low power consumption. Instead of frames, these sensors produce a stream of events that encode discrete amounts of temporal contrast. Surfaces and objects with sufficient spatial contrast trigger events if they are moving relative to the sensor, which thus performs inherent edge detection. These sensors are well-suited for motion capture, but so far suitable event-based, low-level features that allow assigning events to spatial structures have been lacking. A general solution of the so-called event correspondence problem, i.e. inferring which events are caused by the motion of the same spatial feature, would allow applying these sensors in a multitude of tasks such as visual odometry or structure from motion. The proposed Event-based Line Segment Detector (ELiSeD) is a step towards solving this problem by parameterizing the event stream as a set of line segments. The event stream which is used to update these low-level features is continuous in time and has a high temporal resolution; this allows capturing even fast motions without the requirement to solve the conventional frame-to-frame motion correspondence problem. The ELiSeD feature detector and tracker runs in real-time on a laptop computer at image speeds of up to 1300 pix/s and can continuously track rotations of up to 720 deg/s. The algorithm is open-sourced in the jAER project.

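The line-segment parameterization at the heart of ELiSeD can be illustrated by a total-least-squares fit to a cluster of event coordinates (a simplification with our own names; the actual detector groups events by local gradient orientation and updates segments incrementally as events arrive):

    import numpy as np

    def fit_segment(event_xy):
        """Total-least-squares line segment through a cluster of events."""
        pts = np.asarray(event_xy, dtype=float)
        c = pts.mean(axis=0)
        _, _, Vt = np.linalg.svd(pts - c, full_matrices=False)
        direction = Vt[0]                     # principal direction of the cluster
        s = (pts - c) @ direction             # scalar coordinates along the line
        return c + s.min() * direction, c + s.max() * direction   # endpoints
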
David Tedaldi, Guillermo Gallego, Elias Müggler, Davide Scaramuzza, Feature detection and tracking with the dynamic and active-pixel vision sensor (DAVIS), In: International Conference on Event-Based Control, Communication and Signal Processing (EBCCSP), Krakow, Poland, 2016-06-13. (Conference or Workshop Paper published in Proceedings)
Because standard cameras sample the scene at constant time intervals, they do not provide any information in the blind time between subsequent frames. However, for many high-speed robotic and vision applications, it is crucial to provide high-frequency measurement updates also during this blind time. This can be achieved using a novel vision sensor, called DAVIS, which combines a standard camera and an asynchronous event-based sensor in the same pixel array. The DAVIS encodes the visual content between two subsequent frames by an asynchronous stream of events that convey pixel-level brightness changes at microsecond resolution. We present the first algorithm to detect and track visual features using both the frames and the event data provided by the DAVIS. Features are first detected in the grayscale frames and then tracked asynchronously in the blind time between frames using the stream of events. To best take into account the hybrid characteristics of the DAVIS, features are built based on large, spatial contrast variations (i.e., visual edges), which are the source of most of the events generated by the sensor. An event-based algorithm is further presented to track the features using an iterative, geometric registration approach. The performance of the proposed method is evaluated on real data acquired by the DAVIS.

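The "iterative, geometric registration" used for tracking can be sketched as a translation-only ICP between recent events and the feature's edge template, both given as (N, 2) point arrays (names are ours; the paper's registration and model-point handling are more elaborate):

    import numpy as np

    def track_translation(template, events, iters=10):
        """Translation-only ICP registering event points to an edge template."""
        template = np.asarray(template, float)       # (M, 2) model points
        events = np.asarray(events, float)           # (N, 2) recent events
        t = np.zeros(2)
        for _ in range(iters):
            shifted = events + t
            diff = shifted[:, None, :] - template[None, :, :]
            nearest = template[np.argmin((diff ** 2).sum(axis=2), axis=1)]
            t += (nearest - shifted).mean(axis=0)    # move events toward matches
        return t
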
Reza Sabzevari, Davide Scaramuzza, Multi-body motion estimation from monocular vehicle-mounted cameras, IEEE Transactions on Robotics, Vol. 32 (3), 2016. (Journal Article)
This paper addresses the problem of simultaneous estimation of the vehicle ego-motion and of the motions of multiple moving objects in the scene, through a monocular vehicle-mounted camera. Localization of multiple moving objects and estimation of their motions is crucial for autonomous vehicles. Conventional localization and mapping techniques (e.g., Visual Odometry and SLAM) can only estimate the ego-motion of the vehicle. The capability of robot localization pipelines to deal with multiple motions has not been widely investigated in the literature. We present a theoretical framework for robust estimation of multiple relative motions in addition to the camera ego-motion. First, the framework for general unconstrained motion is introduced; then, it is adapted to exploit the vehicle kinematic constraints to increase efficiency. The method is based on projective factorization of the multiple-trajectory matrix. First, the ego-motion is segmented and, then, several hypotheses are generated for the motions of the moving objects. All the hypotheses are evaluated and the one with the smallest reprojection error is selected. The proposed framework does not need any a priori knowledge of the number of motions and is robust to noisy image measurements. The method with the constrained motion model is evaluated on a popular street-level image dataset collected in urban environments (the KITTI dataset), including several ego-motion and object-motion scenarios. A benchmark dataset (Hopkins 155) is used to evaluate the method with the general motion model. The results are compared with those of state-of-the-art methods considering a similar problem, referred to as Multi-Body Structure from Motion in the computer vision community.

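The hypothesis-selection step (evaluate all candidate motions, keep the one with the smallest reprojection error) is easy to state in code. A sketch with hypothetical inputs: X are the Nx3 triangulated points, x the Nx2 observed pixels, K the camera intrinsics:

    import numpy as np

    def mean_reproj_error(R, t, X, x, K):
        """Mean reprojection error of 3D points X (Nx3) against pixels x (Nx2)."""
        P = (K @ (R @ X.T + t[:, None])).T            # project with motion (R, t)
        uv = P[:, :2] / P[:, 2:3]
        return float(np.linalg.norm(uv - x, axis=1).mean())

    def select_motion(hypotheses, X, x, K):
        """Return the motion hypothesis (R, t) with the smallest error."""
        errors = [mean_reproj_error(R, t, X, x, K) for R, t in hypotheses]
        return hypotheses[int(np.argmin(errors))]
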
Stefan Isler, Reza Sabzevari, Jeffrey Delmerico, Davide Scaramuzza, An Information Gain Formulation for Active Volumetric 3D Reconstruction, In: IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers, Stockholm, Sweden, 2016-05-16. (Conference or Workshop Paper published in Proceedings)
We consider the problem of next-best view selection for volumetric reconstruction of an object by a mobile robot equipped with a camera. Based on a probabilistic volumetric map that is built in real time, the robot can quantify the expected information gain from a set of discrete candidate views. We propose and evaluate several formulations to quantify this information gain for the volumetric reconstruction task, including visibility likelihood and the likelihood of seeing new parts of the object. These metrics are combined with the cost of robot movement in utility functions. The next best view is selected by optimizing these functions, aiming to maximize the likelihood of discovering new parts of the object. We evaluate the functions with simulated and real world experiments within a modular software system that is adaptable to other robotic platforms and reconstruction problems. We release our implementation open source.

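A minimal version of such a utility can be written as follows (our sketch, with hypothetical visible_voxels and move_cost callbacks): the expected information gain of a view is the summed occupancy entropy of the voxels it is expected to observe, traded off against the cost of moving there:

    import numpy as np

    def entropy(p):
        """Per-voxel binary entropy of occupancy probabilities."""
        p = np.clip(p, 1e-6, 1.0 - 1e-6)
        return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

    def next_best_view(views, occ_prob, visible_voxels, move_cost, w=0.1):
        """Pick the view maximizing expected gain minus weighted movement cost."""
        def utility(v):
            return entropy(occ_prob[visible_voxels(v)]).sum() - w * move_cost(v)
        return max(views, key=utility)
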
Zichao Zhang, Henri Rebecq, Christian Forster, Davide Scaramuzza, Benefit of large field-of-view cameras for visual odometry, In: IEEE International Conference on Robotics and Automation (ICRA), Institute of Electrical and Electronics Engineers, Stockholm, Sweden, 2016-05-16. (Conference or Workshop Paper published in Proceedings)
The transition of visual-odometry technology from research demonstrators to commercial applications naturally raises the question: “what is the optimal camera for vision-based motion estimation?” This question is crucial as the choice of camera has a tremendous impact on the robustness and accuracy of the employed visual odometry algorithm. While many properties of a camera (e.g. resolution, frame-rate, global-shutter/rolling-shutter) could be considered, in this work we focus on evaluating the impact of the camera field-of-view (FoV) and optics (i.e., fisheye or catadioptric) on the quality of the motion estimate. Since the motion-estimation performance depends highly on the geometry of the scene and the motion of the camera, we analyze two common operational environments in mobile robotics: an urban environment and an indoor scene. To confirm the theoretical observations, we implement a state-of-the-art VO pipeline that works with large FoV fisheye and catadioptric cameras. We evaluate the proposed VO pipeline in both synthetic and real experiments. The experiments point out that it is advantageous to use a large FoV camera (e.g., fisheye or catadioptric) for indoor scenes and a smaller FoV for urban canyon environments.

Trink Monica, Real Time Eye Tracking Data in the IDE, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2016. (Bachelor's Thesis)
Integrated Development Environments (IDEs) assist developers during their tasks by helping them to work in a more precise way, to improve software quality, and to comply with standards. In addition to tools that analyse source code rather than biometric data, approaches have recently been explored where information such as eye coordinates or heart rate is used to analyse developers' emotions and to shift supporting tools to a more human-centred view.
The goal of this thesis is to implement a new Eclipse plug-in based on the iTrace plug-in, which is one of the first plug-ins that collects eye tracking data in the IDE and saves it to files. Our plug-in offers new possibilities by providing useful data within the IDE to other tools via an interface and by transforming eye coordinates into source code elements at runtime. Furthermore, a study was conducted to investigate the plug-in's accuracy. Due to the large amount of data, we obtained low precision values, which should be improved in the future. Nonetheless, the recall value on context granularity remained the same at ~92%, and the F1 score, which was used to evaluate the plug-in's accuracy, showed only a small difference of −9% compared to the calibrated mode. The results indicate that, depending on the demanded context granularity, a re-calibration is not mandatory.

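For reference, the F1 score used to evaluate the plug-in's accuracy is the harmonic mean of precision and recall, computed from true/false positive and false negative counts (assuming non-empty counts):

    def f1_score(tp, fp, fn):
        """F1 = harmonic mean of precision and recall."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2.0 * precision * recall / (precision + recall)
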
Christian Forster, Visual Inertial Odometry and Active Dense Reconstruction for Mobile Robots, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2016. (Dissertation)
Using cameras for localization and mapping with mobile robots is appealing as these sensors are small, inexpensive, and ubiquitous. However, since every camera image provides hundreds of thousands of measurements, it poses a great challenge to infer structure and motion from this wealth of data in real-time on computationally constrained robotic systems. Furthermore, robustness becomes an important factor when applying computer vision algorithms to mobile robots that are moving in uncontrolled environments. In this case, nuisances such as occlusions, illumination changes, or low-textured surfaces make it difficult to track visual cues, which is fundamental to enable camera-based localization and mapping.
The first contribution of this thesis is an efficient, robust, and accurate visual odometry algorithm that computes the motion of a single camera solely from its stream of images. To this end, the use of direct methods that operate on pixel-level intensities is investigated. The advantage of direct methods is that pixel correspondence between images is given directly by the geometry of the problem and can be refined by using the local intensity gradients. However, joint refinement of structure and motion by pixel-wise minimization of intensity differences becomes intractable as the map grows. Therefore, a novel semi-direct approach is proposed that establishes feature correspondence using direct methods and subsequently relies on proven feature-based methods for refinement. We further show how inertial measurements can be seamlessly integrated into the optimization of structure and motion. To this end, the second contribution of this thesis is a preintegration theory that allows summarizing many inertial measurements between two frames into single relative motion constraints. We formally discuss the generative measurement model as well as the nature of the rotation noise and derive the expression for the maximum a posteriori state estimator. Experimental results confirm that our modeling efforts lead to accurate state estimation in real-time, outperforming state-of-the-art approaches.
Tracking salient features in the image results in sparse point clouds; however, for robotic tasks such as path planning, manipulation, or obstacle avoidance, a denser surface representation is needed. Previous work on dense reconstruction from images aims at providing high-fidelity reconstructions. However, for robotic applications, the accuracy of the reconstruction should be governed by the interaction task. Furthermore, it is crucial to have a measure of uncertainty in the reconstruction, which aids motion planning and fusion with complementary sensors. This motivates the third contribution of this thesis, which is an efficient algorithm for probabilistic dense depth estimation from a single camera. To this end, we combine a multi-view, per-pixel recursive Bayesian depth estimation scheme with a fast smoothing method that takes into account the estimated depth uncertainty.
While most computer vision approaches fuse depth maps in a cost volume, care has to be taken in terms of scalability and memory consumption for robotic applications. Building upon the proposed dense depth estimation, the next contribution of this thesis is therefore a robot-centric elevation mapping system that suits a flying robot with a down-looking camera and can run on board Micro Aerial Vehicles (MAVs) for fully autonomous landing-spot detection and landing.
We further demonstrate the usefulness of dense depth maps for localization of an MAV with respect to a ground robot. To this end, we address the problem of registering the maps computed by two robots from distant vantage points, using different sensing modalities: a dense 3D reconstruction from the MAV is aligned with the map computed from the depth sensor on the ground robot.
The most exciting opportunity of computer vision for mobile robotics is that robots can exert control over the data acquisition process. This motivated the investigation of the following problem: given the image of a scene, what is the trajectory that an MAV-mounted camera should follow to perform optimal dense depth estimation? The last contribution of this thesis addresses this question and introduces a method to compute the measurement uncertainty and, thus, the expected information gain, on the basis of the scene structure and appearance. As a result, the MAV chooses motion trajectories that avoid perceptual ambiguities induced by the texture in the scene.

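The per-pixel recursive Bayesian depth estimation of the third contribution reduces, in its simplest outlier-free form, to fusing Gaussian inverse-depth measurements (the thesis' filter additionally models an inlier ratio with a Beta distribution, following Vogiatzis and Hernández; the sketch and names below are ours):

    def fuse_depth(mu, sigma2, z, tau2):
        """Fuse measurement z (variance tau2) into the per-pixel Gaussian
        inverse-depth estimate (mu, sigma2): product of two Gaussians."""
        sigma2_new = sigma2 * tau2 / (sigma2 + tau2)
        mu_new = sigma2_new * (mu / sigma2 + z / tau2)
        return mu_new, sigma2_new

The maintained variance sigma2 is precisely the per-pixel uncertainty that the smoothing step and downstream motion planning consume.
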
Gabriele Costante, Christian Forster, Jeffrey Delmerico, Paolo Valigi, Davide Scaramuzza, Perception-aware Path Planning, In: ArXiv.org, No. 1605.04151, 2016. (Working Paper)
In this paper, we give a double twist to the problem of planning under uncertainty. State-of-the-art planners seek to minimize the localization uncertainty by only considering the geometric structure of the scene. In this paper, we argue that motion planning for vision-controlled robots should be perception aware in that the robot should also favor texture-rich areas to minimize the localization uncertainty during a goal-reaching task. Thus, we describe how to optimally incorporate the photometric information (i.e., texture) of the scene, in addition to the geometric one, to compute the uncertainty of vision-based localization during path planning. To avoid the caveats of feature-based localization systems (i.e., dependence on feature type and user-defined thresholds), we use dense, direct methods. This allows us to compute the localization uncertainty directly from the intensity values of every pixel in the image. We also describe how to compute trajectories online, considering also scenarios with no prior knowledge about the map. The proposed framework is general and can easily be adapted to different robotic platforms and scenarios. The effectiveness of our approach is demonstrated with extensive experiments in both simulated and real-world environments using a vision-controlled micro aerial vehicle.

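The driving intuition is that, in direct methods, a pixel constrains the pose only where the image gradient is non-zero, so localization information comes from texture. A deliberately crude per-patch score (ours, for illustration only; the paper propagates a full pose covariance through the dense localization pipeline):

    import numpy as np

    def texture_score(patch):
        """Sum of squared intensity gradients; near zero on textureless areas."""
        gy, gx = np.gradient(patch.astype(float))
        return float((gx ** 2 + gy ** 2).sum())
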
Matthias Fässler, Flavio Fontana, Christian Forster, Elias Müggler, Matia Pizzoli, Davide Scaramuzza, Autonomous, Vision-based Flight and Live Dense 3D Mapping with a Quadrotor Micro Aerial Vehicle, Journal of Field Robotics, Vol. 33 (4), 2016. (Journal Article)
Fabian Weiersmueller, Automated Order Retrieval Of Ultra Thin Brain Sections, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2015. (Bachelor's Thesis)
Numerous scientific fields in neuroscience rely on high-resolution maps of brain regions. The traditional way to create such brain maps consists of producing hundreds of consecutive, ultra-thin sections of the given region of interest.
The Hahnloser Group at the Institute of Neuroinformatics is currently developing a new method to collect these sections. This new method has many advantages over traditional methods; however, it comes at a cost: we lose the information about the sections' position inside the stack.
This Bachelor Thesis presents the pipeline we developed to retrieve the sections' true order. This retrieval is not as straightforward as it seems, since one has to deal with many associated challenges, such as missing image alignment and file sizes of up to several gigabytes per section image. The pipeline we developed is able to deal with all those challenges, using various methods of image processing such as SIFT features, image transformations, and normalized cross-correlation.
Additionally, this thesis explains the observations, experiments and decisions that led to this pipeline and the methods and algorithms used in it. Finally, it documents our successful attempt at retrieving the order of our own dataset.

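Once pairwise similarities between aligned section images are available (e.g., from normalized cross-correlation), order retrieval can be approached by chaining the most similar sections. A greedy sketch (hypothetical, and much simpler than the thesis pipeline, which must also cope with alignment and image size):

    import numpy as np

    def greedy_order(S, start=0):
        """Order sections by repeatedly appending the most similar unused one.
        S is a symmetric pairwise-similarity matrix (higher = more similar)."""
        n = S.shape[0]
        order, used = [start], {start}
        while len(order) < n:
            sims = S[order[-1]].astype(float).copy()
            sims[list(used)] = -np.inf          # exclude already-placed sections
            nxt = int(np.argmax(sims))
            order.append(nxt)
            used.add(nxt)
        return order
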
Michael Spring, Optisizer: Using Apriltag to Measure Body Size, University of Zurich, Faculty of Economics, Business Administration and Information Technology, 2015. (Bachelor's Thesis)
In order to determine the right dosage of medication and size of medical equipment, the body height of infant patients is a frequently used indicator. We assume that the reliability and efficiency of measuring the patient's height could be improved by software running on smartphones and tablets, which are already integrated in the daily medical routine. We developed a first prototype: the "Optisizer" is an augmented reality app developed for the Android platform. It lets the user take a picture of a scene containing a 2D barcode fiducial tag and measure arbitrary distances lying in the plane in which the tag is located by positioning a virtual meter on the picture. To make the Optisizer more reliable, we use a numerical approximation to estimate the error and display error bounds to the user. Our app allows measuring a patient's body height reliably within seconds.

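The underlying measurement principle is that a homography maps image pixels to metric coordinates in the tag's plane, after which distances can be read off directly. A sketch using OpenCV (assumed dependency; function and variable names are ours, with the tag corners given in a fixed order matching the plane template):

    import numpy as np
    import cv2

    def distance_in_tag_plane(p1_px, p2_px, tag_corners_px, tag_size_m):
        """Metric distance between two image points that lie in the tag's plane."""
        s = tag_size_m / 2.0
        plane = np.array([[-s, -s], [s, -s], [s, s], [-s, s]], np.float32)
        H, _ = cv2.findHomography(np.asarray(tag_corners_px, np.float32), plane)
        pts = np.asarray([p1_px, p2_px], np.float32).reshape(-1, 1, 2)
        q = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
        return float(np.linalg.norm(q[0] - q[1]))
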
Elias Müggler, Nathan Baumli, Flavio Fontana, Davide Scaramuzza, Towards evasive maneuvers with quadrotors using dynamic vision sensors, In: 2015 European Conference on Mobile Robots (ECMR), Institute of Electrical and Electronics Engineers (IEEE), Lincoln, United Kingdom, 2015-10-02. (Conference or Workshop Paper published in Proceedings)
We present a method to predict collisions with objects thrown at a quadrotor using a pair of dynamic vision sensors (DVS). Due to the microsecond temporal resolution of these sensors and the sparsity of their output, the object's trajectory can be estimated with minimal latency. Unlike standard cameras that send frames at a fixed frame rate, a DVS only transmits pixel-level brightness changes (“events”) at the time they occur. Our method tracks spherical objects on the image plane using probabilistic trackers that are updated with each incoming event. The object's trajectory is estimated using an Extended Kalman Filter with a mixed state space that allows incorporation of both the object's dynamics and the measurement noise in the image plane. Using error-propagation techniques, we predict a collision if the 3σ-ellipsoid along the predicted trajectory intersects with a safety sphere around the quadrotor. We experimentally demonstrate that our method allows initiating evasive maneuvers early enough to avoid collisions.

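The predict-and-check logic can be sketched as follows (a simplification with assumed noise values and 10 ms time steps; in the paper the object state is estimated by the EKF before this forward prediction, and the ellipsoid test runs along the whole predicted trajectory):

    import numpy as np

    def predict_collision(x0, P0, quad_pos, r_safe, dt=0.01, steps=100):
        """Propagate a ballistic state [position, velocity] and its covariance,
        and flag a collision if the 3-sigma position ellipsoid, conservatively
        inflated by the safety radius, contains the quadrotor position."""
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)                      # constant-velocity model
        Q = 1e-4 * np.eye(6)                            # assumed process noise
        g = np.array([0.0, 0.0, -9.81])
        x, P = np.asarray(x0, float).copy(), np.asarray(P0, float).copy()
        for _ in range(steps):
            x = F @ x
            x[3:] += g * dt                             # gravity acts on velocity
            P = F @ P @ F.T + Q
            d = x[:3] - quad_pos
            P_pos = P[:3, :3]
            m = np.sqrt(d @ np.linalg.solve(P_pos, d))  # Mahalanobis distance
            sigma_min = np.sqrt(np.linalg.eigvalsh(P_pos)[0])
            if m < 3.0 + r_safe / sigma_min:            # conservative overlap test
                return True
        return False
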