In October 2020 I joined the Microsoft Mixed Reality and AI Lab in Zurich. Before that I worked at the Autonomous Systems Lab (ASL) at ETH Zurich, where I was Deputy Director. I did my PhD at the Australian Centre for Field Robotics (ACFR) at the University of Sydney.
At ETH, the aim of my research was to enable robots to perform physical interactions with their environment in a safe and autonomous manner. Towards that goal, my research focused on (i) building suitable environment representations for interaction, and (ii) developing tighter connections between perception, planning, and control. This page is short, but hopefully you will find all you need in the links provided below.
(I'm not updating this at the moment).
We are the project coordinator for Flourish. The goal of the Flourish project is to bridge the gap between the current and desired capabilities of agricultural robots by developing an adaptable robotic solution for precision farming. By combining the aerial survey capabilities of a small autonomous multi-copter Unmanned Aerial Vehicle (UAV) with a multi-purpose agricultural Unmanned Ground Vehicle (UGV), the system will be able to survey a field from the air, perform targeted intervention on the ground, and provide detailed information for decision support, all with minimal user intervention. The system can be adapted to a wide range of farm management activities and different crops by choosing different sensors, status indicators and ground treatment packages. The gathered information can be used alongside existing precision agriculture machinery, for example, by providing position maps for fertiliser application.
UP-Drive aims to address key transport-related challenges by providing contributions that will enable the gradual automation of, and collaboration among, vehicles, and as a result facilitate a safer, more inclusive and more affordable transportation system.
In order to adequately address this complexity, UP-Drive will focus on advancing several key technologies.
As a result of this strategy, UP-Drive expects significant technological progress that will benefit all levels of automation: from driver assistance in the short term to full automation in the longer term, across a broad range of applications.
With ageing infrastructure in both developing and developed countries, and with the gradual expansion of distributed installations, the costs of inspection and repair tasks have been growing steadily. Addressing this reality requires a major paradigm shift towards highly automated, efficient, and reliable solutions that will not only reduce costs, but will also minimize risks to personnel and asset safety. AEROWORKS envisions a novel aerial robotic team capable of autonomously conducting infrastructure inspection and maintenance tasks, while additionally providing intuitive and user-friendly interfaces to human operators.
The AEROWORKS robotic team will consist of multiple heterogeneous “collaborative Aerial Robotic Workers”, a new class of Unmanned Aerial Vehicles equipped with dexterous manipulators, novel physical interaction and co-manipulation control strategies, perception systems, and planning intelligence. This new generation of worker-robots will be capable of autonomously executing infrastructure inspection and maintenance works.
The AEROWORKS multi-robot team will operate in a decentralized fashion, and will be characterized by unprecedented levels of reconfigurability, mission dependability, mapping fidelity, and manipulation dexterity, integrated in robust and reliable systems that are rapidly deployable and ready-to-use as an integral part of infrastructure service operations.
As the project aims for direct exploitation in the infrastructure services market, its results will be demonstrated and evaluated in realistic and real infrastructure environments, with a clear focus on increased Technology Readiness Levels. The accomplishment of the envisaged scenarios will boost the European infrastructure sector, contribute to the goal of retaining Europe’s competitiveness, and particularly impact our service and industrial robotics sector, drastically changing the landscape of how robots are utilised.
The Mohamed Bin Zayed International Robotics Challenge (MBZIRC) is a robotics competition involving three challenges. We participate in all three of them. For more information, please see the official website.
EuRoc: Challenge 3
This challenge targets the open problems that keep existing MAV solutions (especially multicopters) from being deployed in real-life scenarios. MAVs are naturally unstable platforms exhibiting great agility; they thus require a trained pilot to operate them, and are restricted to line-of-sight range. The scenario is the demonstration of high-level tele-operation of a single MAV for an inspection task. The goal is to enable an inspection expert untrained in piloting MAVs (e.g. a trained boiler inspector) to tele-operate a MAV as an aid to his/her mission, while being able to focus on the inspection task at hand. This has not been possible before, as the complexity of MAV control has been too high, requiring expert piloting skills. The participants will need to set up a framework to perform localization and state estimation of the MAV without any external positioning systems (e.g. GPS or motion capture systems), as well as autonomous local obstacle avoidance, local path planning, following of structures under strict time constraints, and homing of the MAV.
Previous projects at ACFR
One of the most difficult problems in building multi-modal and/or multi-temporal representations of the environment is accurate data registration. In multi-modal mapping, to integrate the information provided by two sensors, their relative position and orientation must be known. For precision applications, this cannot be found by simply measuring the sensors' positions, because of the uncertainty introduced in the measurement process and in the actual sensor location inside the casing. The registration is far from trivial due to the very different modalities via which the two sensors may operate. This project investigates new similarity metrics to automate the registration of multi-modal sensed data. It focuses on the registration of single-scan data, for example taken with sensors mounted on tripods, as is usually done for surveys in mining. Although the algorithms developed so far are generic, we have been applying them to the registration of hyperspectral images and Lidar.
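One family of similarity metrics that works across modalities is mutual information: instead of rewarding direct intensity agreement, it rewards statistical dependence between the two images, so it can score an alignment between, say, a hyperspectral band and a Lidar intensity rendering. A minimal sketch in NumPy (an illustrative example, not the project's actual metric):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two aligned images.

    A multi-modal similarity metric: it measures statistical dependence
    between the two intensity distributions, so the images do not need
    to look alike, only to co-vary. Higher is a better alignment.
    """
    hist_2d, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()              # joint distribution
    px = pxy.sum(axis=1)                       # marginal of a
    py = pxy.sum(axis=0)                       # marginal of b
    nonzero = pxy > 0                          # avoid log(0)
    return np.sum(pxy[nonzero] *
                  np.log(pxy[nonzero] / (px[:, None] * py[None, :])[nonzero]))
```

In a registration loop, candidate sensor poses are scored by rendering one modality into the other's frame and keeping the pose that maximizes this score.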
This project is part of Zachary Taylor's PhD. For more information please check his webpage at http://www.zjtaylor.com
Automated calibration of multi-sensor platforms
Mobile platforms typically carry many sensors of different modalities. To produce composite images of the environment, the sensors need to be calibrated. On most mobile robots, the sensors are calibrated either by hand-labelling points or by placing markers, such as corner reflectors or chequerboards, in the scene. The calibration produced by these methods, while initially accurate, quickly degrades due to the motion of the robot. For mobile robots working in the topographically variable environments found in mining and agriculture, this can result in significantly degraded calibration after as little as a few hours of operation. For long-term autonomy, as needed to monitor large areas with UGVs, calibration parameters will have to be determined efficiently and updated dynamically over the lifetime of the robot. Under these conditions, marker-based calibration becomes impractical. This project investigates methods for the automated calibration and update of the extrinsic parameters of multi-sensor mobile systems. This project is part of Zachary Taylor's PhD. For more information please check his webpage at http://www.zjtaylor.com
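The core loop of markerless calibration can be sketched as a search over the extrinsic parameters that maximizes a similarity score between the camera image and a rendering of the other sensor's data. The toy below (all data and names are illustrative) uses a single pixel offset as a stand-in for one extrinsic parameter and normalized cross-correlation as a stand-in for a multi-modal metric such as mutual information:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two aligned images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

def calibrate(camera_img, lidar_img, candidates):
    """Return the offset (stand-in for an extrinsic parameter) whose
    re-rendering of the lidar image best matches the camera image."""
    scores = [ncc(camera_img, np.roll(lidar_img, -s, axis=1))
              for s in candidates]
    return candidates[int(np.argmax(scores))]

rng = np.random.default_rng(0)
cam = rng.random((64, 64))                               # synthetic camera view
lid = np.roll(cam, 3, axis=1) + 0.05 * rng.random((64, 64))  # misaligned, noisy "lidar" view
best = calibrate(cam, lid, list(range(-5, 6)))
# best == 3 recovers the synthetic misalignment
```

In a real system the search runs over full 6-DoF extrinsics and the score is computed from projected 3D points, but the structure (render, score, optimize) is the same.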
Per-pixel Radiometric Calibration
Hyperspectral imaging is a very active and promising area of research. This project focuses on field-based hyperspectral imaging, with the sensors mounted on mobile platforms. Unlike remote sensing, field-based sensing can provide environment descriptions at ultra-fine resolution. This project aims to provide maps detailed enough to enable the construction of spectral references for different geochemical components of the environment.
To achieve this, the project will need to investigate solutions to two major challenges: in-field calibration of hyperspectral images, and classification of relevant information from the large and complex collections of data delivered by hyperspectral cameras.
As part of his PhD studies, Rishi Ramakrishnan is investigating the use of sky models, combined with geometry information, for per-pixel radiometric calibration. The system we aim to develop is based on registered Lidar and hyperspectral data, as provided by our automated multi-modal mapping framework. The Lidar provides the geometry of the scene, from which direct and indirect illumination can be estimated. This research will provide one of the first instances of model-based calibration in this context. For more information please check Rishi's webpage.
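A minimal version of the model-based idea: with per-pixel surface normals from the Lidar geometry, and direct and diffuse irradiance from a sky model, reflectance can be recovered by inverting a simple Lambertian model. This sketch assumes a Lambertian surface and known irradiance terms (the actual calibration is more sophisticated):

```python
import numpy as np

def reflectance(radiance, normals, sun_dir, e_sun, e_sky):
    """Invert a simple Lambertian model per pixel:
        L = (R / pi) * (E_sun * max(n . s, 0) + E_sky)

    radiance : (H, W) measured radiance for one band
    normals  : (H, W, 3) unit surface normals from the Lidar geometry
    sun_dir  : (3,) unit vector towards the sun
    e_sun, e_sky : direct and diffuse irradiance (from a sky model)
    """
    cos_i = np.clip(normals @ sun_dir, 0.0, None)   # incidence-angle term
    irradiance = e_sun * cos_i + e_sky              # per-pixel illumination
    return np.pi * radiance / np.maximum(irradiance, 1e-9)
```

The point of using the geometry is that `cos_i` differs at every pixel, so a single global correction cannot work; the per-pixel model can.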
Unsupervised Learning of Motion Patterns
Object detection, tracking, and classification serve as basic components of the different perception tasks of autonomous robots. In most cases, the objects that an autonomous robot interacts with move under constraints defined by the environment and by an underlying class-dependent behaviour.
This project investigates methods for the simultaneous learning, tracking and classification of moving objects. We have developed algorithms for multi-object tracking which can provide estimates of the dynamic state of the objects along with their class identities. The estimated identities provide information about the objects' behaviour that can be used for high-level reasoning tasks. The joint estimation of class assignments, dynamic states and data associations results in a computationally intractable problem. Our approach tackles the problem using a variational approximation.
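A crude illustration of how class-dependent behaviour informs identity estimation (not the project's variational algorithm): give each class its own motion model, accumulate each model's likelihood over a track, and normalize into a posterior. All class names and parameters below are made up for the example:

```python
import numpy as np

def class_posterior(speeds, models, prior=None):
    """Posterior over class identity given a track's observed speeds.

    models : {name: (mean_speed, std)} - a per-class motion model,
             here simply a Gaussian over speed.
    The class whose dynamic model best explains the whole track
    accumulates the posterior mass.
    """
    names = list(models)
    log_post = (np.zeros(len(names)) if prior is None
                else np.log([prior[n] for n in names]))
    for i, name in enumerate(names):
        mu, sigma = models[name]
        # Gaussian log-likelihood of every observation, summed over the track
        log_post[i] += np.sum(-0.5 * ((speeds - mu) / sigma) ** 2
                              - np.log(sigma))
    log_post -= log_post.max()                  # for numerical stability
    post = np.exp(log_post)
    return dict(zip(names, post / post.sum()))
```

The full problem is harder because states and data associations are estimated jointly with the class, which is what makes the exact posterior intractable and motivates the variational approximation.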
This project is part of Victor Cano's PhD.
In addition to mining, I also work on perception problems related to agricultural automation. Over the last 10 years, we have been conducting research in autonomous systems and remote sensing, and developing robotics and intelligent software for the environment and agriculture community. We have a range of projects, using Unmanned Air and Ground Vehicles, for applications in vegetable farming, tree and specialty crops, dairy and cattle, weed management and more.
Check our webpage here: Agriculture Robotics at ACFR