The workshop schedule follows; each talk is listed with its title and abstract.
Session 1: Vision for Autonomous Navigation and Place Recognition
9:20 - 9:30: Introduction
9:30 - 9:50: A. Bachrach and N. Roy, MIT
Title: Moving Beyond Lasers: Visual Navigation for Micro Air Vehicles
Abstract: Recent advances in state estimation, planning, mapping, and control algorithms, combined with the ever-increasing computational power available onboard micro air vehicles, have enabled a number of researchers to demonstrate advanced autonomous capabilities in GPS-denied environments. Much of this work has focused on the use of laser range-finder sensors; however, the use of a 2D sensor in a 3D environment requires a number of assumptions that limit the ability of the MAV to operate in unconstrained environments.
In this talk, we describe a system for visual odometry and mapping using a stereo or RGB-D camera. By leveraging results from recent state-of-the-art algorithms and hardware, our system enables 3D flight and planning in cluttered environments using only onboard sensor data. However, the planning capabilities are limited to the relatively short sensing horizon provided by range sensors. We will discuss recent work that enables long-range goal-directed exploration and planning in unknown environments. Our system uses visual information from a monocular camera to predict potential trajectories for the vehicle that extend beyond the metric range allowed by range sensors.
9:50 - 10:10: S. Grzonka, B. Steder, and W. Burgard, University of Freiburg
Title: 3D Place Recognition and Object Detection using a Small-sized Quadrotor
Abstract: We present a system for 3D place recognition and object detection using a small-sized quadrotor. The robot is equipped with a horizontally scanning 2D range-scanner and occasionally acquires 3D scans by hovering on the spot and changing its altitude. Our approach is able to accurately and robustly recognize previously seen places in the environment. Additionally, our system can match the current observations to models stored in a database, which allows the robot to perform object detection. We evaluate our approach in real-world experiments to demonstrate the robustness and reliability of our algorithms.
10:10 - 10:30: Discussion
10:30 - 10:45: Coffee Break
Session 2: Autonomous Mapping, Exploration, and Surveillance
10:45 - 11:05: S. Scherer and S. Singh, CMU
Title: Perception for a River Mapping Micro Aerial Vehicle
Abstract: Rivers in areas with heavy vegetation are hard to map from the air. Here we consider the task of mapping their course and the vegetation along the shores with the specific intent of determining river width and canopy height. A complication in such riverine environments is that GPS may not be available depending on the thickness of the surrounding canopy. We present key components of a multimodal perception system to be used for the active exploration and mapping of a river from a small rotorcraft flying a few meters above the water. We describe three key components that use computer vision and laser scanning to follow the river without the use of a prior map, estimate motion of the rotorcraft, ensure collision-free operation, and create a three-dimensional representation of the riverine environment. While the ability to fly simplifies the navigation problem, it also introduces an additional set of constraints in terms of size, weight, and power. Hence, our solutions are cognizant of the need to perform multi-kilometer missions with a small payload. We present experimental results from each of the three perception subsystems from representative environments.
11:05 - 11:25: N. Michael, University of Pennsylvania
Title: 3D Indoor Exploration with a Computationally Constrained Micro-Aerial Vehicle
Abstract: We present a methodology for exploration in 3D indoor environments with a computation- and payload-constrained micro-aerial vehicle (MAV). We propose a stochastic differential equation-based exploration strategy, discuss the details of the approach, and provide experimental results demonstrating the successful application of these methods on an aerial vehicle able to explore multi-floor buildings.
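The stochastic-differential-equation strategy named above can be illustrated, in one dimension, by a Langevin-type update: drift down the gradient of a coverage potential plus diffusion noise that keeps the vehicle exploring. The potential, step size, and noise scale below are our own illustrative assumptions, not details from the talk.

```python
import math
import random

def sde_explore_step(x, grad_u, dt=0.05, sigma=0.8):
    """One Euler-Maruyama step of an exploration SDE,
    dx = -grad U(x) dt + sigma dW: deterministic drift toward low
    'coverage potential' U, plus Brownian diffusion for exploration."""
    noise = random.gauss(0.0, math.sqrt(dt))  # dW ~ N(0, dt)
    return x - grad_u(x) * dt + sigma * noise

# Example: quadratic potential U(x) = x^2 / 2, so grad U(x) = x.
# The vehicle drifts toward the origin but keeps jittering around it.
random.seed(0)
x = 5.0
for _ in range(500):
    x = sde_explore_step(x, lambda v: v)
```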
11:25 - 11:45: S. Sukkarieh, ACFR
Title: 3D Mapping and Surveillance in Unstructured Outdoor Environments
Abstract: This talk will present some of our latest findings in the development of perception algorithms for mapping and classifying 3D unstructured outdoor environments from aerial robots. The talk will present work on bundle adjustment using INS/GPS/vision and on Gaussian processes as a means to accurately define the terrain and to produce classification outputs from learnt spatial models. The talk will also present some of our ongoing work in linking decision making to these perception models for the purposes of intelligent surveillance using single (air) and multi-vehicle (air-air and air-ground) systems.
11:45 - 12:05: J. Durrie and E. Frew, University of Colorado
Title: Coordinated Persistent Surveillance with Guaranteed Target Bounds
Abstract: This presentation considers the problem of coordinated persistent area surveillance without a priori knowledge of the position or number of targets in the world. Only a maximum speed bound for the targets is assumed known a priori. Possible target motion is formulated as a front propagation problem resulting in guaranteed, deterministic bounds on the targets. Sensor coordination is analyzed, and strategies are derived that guarantee targets cannot enter certain regions from without or escape from within. Finally, a wavefront tracking control using mothership vehicles to coordinate the sensors is presented with simulation results.
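The deterministic bound can be pictured in its simplest form: a target last seen at a known point, with a known maximum speed, must lie in a disk whose radius grows linearly in time. This is a minimal sketch of the front-propagation idea with function names of our own choosing, not the presenters' formulation.

```python
import math

def reachable_radius(v_max, t_elapsed, r0=0.0):
    """Radius of the guaranteed bound on a target's position: an initial
    uncertainty disk of radius r0 expands at the target's maximum speed."""
    return r0 + v_max * t_elapsed

def could_contain_target(point, last_seen, v_max, t_elapsed):
    """Deterministic test: a location can hold the target only if it lies
    inside the propagated front. No probability is involved; the bound is
    guaranteed as long as the speed bound holds."""
    dx = point[0] - last_seen[0]
    dy = point[1] - last_seen[1]
    return math.hypot(dx, dy) <= reachable_radius(v_max, t_elapsed)
```

A sensor sweep that clears a region shrinks the set the same way a receding front would; the coordination strategies in the talk reason about when sweeps can outrun this expansion.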
12:05 - 13:30: Lunch
Session 3: Deployment Strategies for Persistent Surveillance and Operator Interaction
13:30 - 13:50: E. Stump, Army Research Laboratory
Title: Solving UAV Persistent Surveillance Planning as a Vehicle Routing Problem
Abstract: We consider persistent surveillance as the problem of finding periodic sequences of visits to discrete sites, and cast it as the classical Vehicle Routing Problem with Time Windows, which is well studied in the operations research community. Using recent advances in combinatorial optimization for solving such logistics problems, we develop a framework for finding optimal allocations of UAVs for persistent surveillance while respecting battery and visit-period constraints. Taking a receding-horizon approach, we introduce a methodology for incorporating the continuous and ongoing nature of the scenario into this typically aperiodic problem. We apply these methods to the task of surveying a building with multiple quadrotor UAVs and present the results of a small-scale hardware demonstration and a long-term simulation developed to closely mimic the hardware testbed as a precursor to a larger-scale deployment.
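To make the visit-period and battery constraints concrete, here is a toy single-UAV scheduler: an earliest-deadline-first loop that visits the site whose period expires soonest and recharges when the battery cannot cover a round trip. The site and battery parameters are invented, and the actual work uses a VRPTW solver rather than this greedy rule.

```python
def schedule_visits(sites, horizon, battery_capacity, recharge_time=2.0):
    """Toy receding-style scheduler. sites: {name: (travel_time, period)}.
    Battery is spent at one unit per unit of flight time, and each sortie
    must reserve enough charge for the return leg."""
    t, battery = 0.0, battery_capacity
    last = {name: 0.0 for name in sites}   # last visit time per site
    plan = []
    while t < horizon:
        # pick the site whose deadline (last visit + period) comes first
        name = min(sites, key=lambda n: last[n] + sites[n][1])
        travel, _ = sites[name]
        if 2 * travel > battery:           # cannot go and come back
            plan.append(("recharge", t))
            t += recharge_time
            battery = battery_capacity
            continue
        t += travel
        battery -= 2 * travel              # out leg plus reserved return
        last[name] = t
        plan.append((name, t))
    return plan

# Two sites: A is 1 time unit away and must be seen every 3 units,
# B is 2 units away with a 5-unit period.
plan = schedule_visits({"A": (1.0, 3.0), "B": (2.0, 5.0)},
                       horizon=6.0, battery_capacity=10.0)
```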
13:50 - 14:10: S. Smith, MIT
Title: Persistent Tasks for Robots in Changing Environments
Abstract: This talk will present recent results in controlling groups of robots to monitor changing environments, and to investigate locations that are specified by users in real-time. In persistent monitoring, we consider robots with limited-range sensor footprints. We plan monitoring trajectories by decoupling the planning of a path from speed control along the path. The speed controllers are guaranteed to keep the uncertainty in the environment bounded. To investigate locations specified by users in real-time, we present dynamic vehicle routing algorithms based on receding-horizon optimization. These algorithms seek to minimize the delay between the time a location is specified and the time that location is investigated. We demonstrate these results in a recent hardware implementation.
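The decoupling of path from speed control can be seen in a toy 1-D model: uncertainty grows everywhere at a constant rate and is reduced only in the cell under the sensor footprint, so dwell time per cell (the inverse of speed) determines whether the peak stays bounded. The rates below are assumed for illustration, and this sketch carries none of the formal guarantees described in the abstract.

```python
def simulate_lap(n_cells, growth, clear_rate, speeds, laps=20):
    """Drive one robot repeatedly around a loop of cells. Uncertainty in
    every cell grows at `growth` per unit time; the occupied cell is
    cleared at `clear_rate`. Dwell time in cell i is 1/speeds[i].
    Returns the peak uncertainty after the final lap."""
    u = [0.0] * n_cells
    for _ in range(laps):
        for i in range(n_cells):
            dt = 1.0 / speeds[i]
            for j in range(n_cells):       # everything grows while we dwell
                u[j] += growth * dt
            u[i] = max(0.0, u[i] - clear_rate * dt)  # footprint clears cell i
    return max(u)

# With a strong sensor the peak stays small; with a weak one it diverges.
bounded = simulate_lap(4, 0.1, 2.0, [1.0] * 4, laps=50)
unbounded = simulate_lap(4, 0.1, 0.2, [1.0] * 4, laps=50)
```

Roughly, a cell stays bounded when the clearing applied per lap exceeds the growth accumulated over one lap period, which is what the speed controllers in the talk enforce along an arbitrary path.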
14:10 - 14:30: A. Franchi, Max Planck Institute for Biological Cybernetics
Title: Decentralized Bilateral Aerial Teleoperation of Multiple UAVs - Part I: a Top-Down Perspective
Abstract: This talk will present some recent theoretical and experimental results in the relatively new topic of Bilateral Aerial Teleoperation of Multiple UAVs. In this non-conventional teleoperation field, a human operator partially controls the behavior of a semi-autonomous swarm of UAVs by means of one or more haptic interfaces, and receives back a force cue that reflects both the swarm's tracking performance and relevant properties of the surrounding environment (e.g., the presence of obstacles or other threats). Such systems are designed to enhance the telepresence of the operator and the quality of the human-robot interaction, especially when applied to practical scenarios such as search and rescue, surveillance, exploration, and mapping. In particular, the focus of the talk will be on the design of a stable bilateral interconnection between the user and the swarm of UAVs, considered as a deformable object with a given shape (top-down approach) to be achieved with suitable formation control algorithms using either distance-only or bearing-only sensors.
14:30 - 14:50: P. Robuffo Giordano, Max Planck Institute for Biological Cybernetics
Title: Decentralized Bilateral Aerial Teleoperation of Multiple UAVs - Part II: a Bottom-up Perspective
Abstract: In this talk, we will review some recent advancements in the field of aerial teleoperation, i.e., how to bilaterally couple a single human operator with a remote fleet of semi-autonomous UAVs which 1) must keep some spatial formation and avoid inter-vehicle and obstacle collisions, and 2) must collectively follow the human's commands. The emphasis will be placed on the modeling and control tools needed for establishing such a non-conventional bilateral channel: in particular, we will study how to render the multi-UAV "slave side" a passive system w.r.t. the environment, and how to still enforce global connectivity maintenance despite limited sensing and loss of visibility due to occlusions.
14:50 - 15:10: Discussion
15:10 - 15:30: Coffee Break
Session 4: Information-based Cooperative Control Toward Multi-Robot Surveillance
15:30 - 15:50: H. Huang, M. Vitus, and C. Tomlin, U.C. Berkeley
Title: Control and Planning for Complex Scenarios
Abstract: The growing numbers of robots deployed for challenging tasks such as search and rescue, reconnaissance, surveillance, and disaster response pose many difficult problems for control and planning. One such problem is planning safe trajectories through complex environments. To ensure the safety of these planned trajectories, the system cannot be assumed to be deterministic. Rather, the inherent uncertainty of the system must be accounted for explicitly in order to maximize the probability of success of the resulting plan. System uncertainty arises from three main sources: (i) motion uncertainty, (ii) sensing noise, and (iii) environment uncertainty. In the first part of the talk, we will present a stochastic motion planning algorithm that accounts for the disturbances (i.e., motion and sensing uncertainty) that aerial vehicles may encounter when trying to navigate a 3D environment. The algorithm operates in real time; we plan to perform flight experiments on our quadrotor testbed over the next month and expect to present them at the workshop. In the second part of the talk, we will present a multi-player game in which a group of pursuers attempts to capture an evader. These methods can be applied to aerial surveillance, capture, or even air-to-air combat.
15:50 - 16:10: B. Julian, MIT, and M. Angermann, DLR
Title: Towards a Unifying Information Theoretic Framework for Multi-Robot Exploration and Surveillance
Abstract: In this talk we show our recent work on a mathematical framework for pursuing exploration and surveillance tasks using multiple collaborating robots. Our objective is to ground this framework in the first principles of information theory, and in doing so establish a unifying model that considers the inter-dependencies of system resources pertaining to robot mobility, sensing, and communication. The framework inherently identifies metrics that characterize system performance and provides qualitative understanding of quantitative results. We show that exploration and surveillance can be considered close relatives that differ primarily in boundary conditions and utility functions; as a result, approaches developed for one task can adaptively (or, even better, simultaneously) achieve goals for the other.
We first focus on distributed exploration, with robots forming control actions to steer the system in the direction of increasing utility while using only local information. With sufficient system resources, this exploration approach converges to a steady-state configuration which inherently facilitates surveillance. We demonstrate how higher, more centralized levels of cognition, which may be of artificial or human origin, can provide global guidance to improve overall system performance. These additional control inputs adjust the previously mentioned boundary conditions and utility functions and provide the user with the capability to specify emerging monitoring requirements in real-time. We also address the effects of communication constraints relevant to distributed systems, which may stem from limited transmission power or spectral bandwidth.
The results of recently performed indoor and outdoor experiments with several micro aerial vehicles are shown to partially validate the real-world utility of a system inspired by the presented framework. Additionally, properties such as scalability and robustness are discussed in the results of large-scale simulations.
16:10 - 16:30: M. Schwager, MIT/University of Pennsylvania
Title: Multi-Robot Mapping and Exploration of Environments with Hazards
Abstract: We consider the problem of deploying a network of robotic sensors to map and explore an environment, where the environment presents an adversarial threat to the robots. This may mean that there are adversarial agents in the environment trying to make the sensors fail, or that some regions of the environment itself are dangerous for the sensors, for example due to fire, adverse weather, or caustic chemicals. The robots must move both to avoid the hazards and to provide useful sensor information, although these two objectives may be in conflict with one another. We formulate a probabilistic model, under which a Bayesian filter is derived for estimating both the threats and the environment map on-line. We then propose an algorithm in which the robots follow the mutual information gradient to combine the tasks of mapping, exploration, and hazard avoidance. The algorithm uses an analytic computation of the mutual information gradient. Simulations demonstrate the performance of the algorithm.
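For intuition about the mutual-information objective, here is its single-cell binary form: the expected entropy reduction in one map cell from one noisy observation, with a numerical gradient under an assumed distance-decaying sensor model. The talk derives an analytic gradient over the joint threat-and-map estimate; everything in this sketch, including the sensor model, is our own illustration.

```python
import math

def entropy(p):
    """Binary entropy in nats."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def mutual_information(prior, hit_prob):
    """I(M; Z) for one cell M ~ Bernoulli(prior) observed by a symmetric
    sensor: it reports 'occupied' with probability hit_prob when the cell
    is occupied, and (1 - hit_prob) when it is free."""
    pz = prior * hit_prob + (1 - prior) * (1 - hit_prob)   # P(Z = 1)
    post1 = prior * hit_prob / pz                          # P(M=1 | Z=1)
    post0 = prior * (1 - hit_prob) / (1 - pz)              # P(M=1 | Z=0)
    return entropy(prior) - pz * entropy(post1) - (1 - pz) * entropy(post0)

def mi_gradient(x, cell_x, prior, eps=1e-6):
    """Numerical MI gradient w.r.t. robot position x, assuming sensor
    accuracy decays with distance to the cell (an assumed model)."""
    def mi_at(pos):
        hit = 0.5 + 0.45 * math.exp(-abs(pos - cell_x))
        return mutual_information(prior, hit)
    return (mi_at(x + eps) - mi_at(x - eps)) / (2 * eps)
```

A gradient-following robot at x = 2.0 with a cell at the origin sees a negative gradient and so moves toward the cell, which is the behavior the talk's analytic gradient produces jointly for mapping, exploration, and hazard estimation.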
16:30 - 17:00: Panel Discussion