Experience

Senior Robotics Software Engineer

Self-employed, 2023 – present

I am a self-employed/freelance robotics engineer, offering my services as a consultant, software engineer, project planner or team lead.

In a recently concluded project, I was lead software developer and team lead at a young start-up, assisting in the development of the software stack of a multi-robot system for high-accuracy pick-up and drop-off of pallet-like objects. I was the software architect and main code contributor, and as team lead I was responsible for coordinating the team, assigning tasks, tracking progress and reviewing code.


TomTom

Senior Software Engineer, 2019-2023

I joined the team to bring TomTom’s autonomous vehicle “Trillian” onto the road. My tasks ranged across the full stack and included simulation, generation of testing environments and scenarios, hazard and risk analysis, as well as aspects of motion planning and localization. After the autonomous vehicle project, I moved to the TomTom Navigation Unit to work on the lane-level navigation part of the large navigation software stack. There I facilitated and led technical discussions and sprint planning for our 11-person team, and coordinated projects across different teams and units. Later I took the opportunity to move to a team in the Autonomous Unit, where I could again work on more robotics/AI-oriented tasks by joining the effort to generate accurate vehicle traces from sensor-derived observations.



Roboception

Senior Software Engineer Contractor, 2016-2019

Hannover Messe demo: robot shelf picking with the world model application

Roboception GmbH develops perception and manipulation systems that meet the real-time requirements of industrial robotic systems.
I developed a world model that captures the environment sensed by the robot and provides object management and collision detection; it fuses multiple input sources, including RGB-D images / point clouds and various forms of object detection. The world model was part of a pick-and-place application demonstrated with a KUKA robot at Hannover Messe 2018. I was also responsible for coordinating the work within the team on the various components around the world model, such as object detection, robot motion, the object database and the customer interface.
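To give a rough idea of the concept, the sketch below keeps a set of detected objects with a position and a bounding box each, and answers collision queries between them. It is a simplified illustration only, not the actual Roboception implementation: the object names and the axis-aligned bounding-box test are assumptions made for this example, and the real system works on fused RGB-D detections and richer geometry.

    // Simplified sketch of a world model tracking detected objects and
    // answering collision queries. Object names and the axis-aligned
    // bounding-box (AABB) test are illustrative assumptions only.
    #include <cmath>
    #include <iostream>
    #include <iterator>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    struct Vec3 { double x, y, z; };

    // One tracked object: position and box half-extents (orientation omitted for brevity).
    struct TrackedObject {
        Vec3 position;
        Vec3 half_extents;
    };

    class WorldModel {
    public:
        // Insert or update an object from a detection result (e.g. from an RGB-D pipeline).
        void UpdateObject(const std::string& id, const Vec3& position, const Vec3& half_extents) {
            objects_[id] = {position, half_extents};
        }

        void RemoveObject(const std::string& id) { objects_.erase(id); }

        // Return all pairs of objects whose bounding boxes overlap.
        std::vector<std::pair<std::string, std::string>> CollidingPairs() const {
            std::vector<std::pair<std::string, std::string>> pairs;
            for (auto a = objects_.begin(); a != objects_.end(); ++a) {
                for (auto b = std::next(a); b != objects_.end(); ++b) {
                    if (Overlaps(a->second, b->second)) {
                        pairs.emplace_back(a->first, b->first);
                    }
                }
            }
            return pairs;
        }

    private:
        static bool Overlaps(const TrackedObject& a, const TrackedObject& b) {
            return std::fabs(a.position.x - b.position.x) <= a.half_extents.x + b.half_extents.x &&
                   std::fabs(a.position.y - b.position.y) <= a.half_extents.y + b.half_extents.y &&
                   std::fabs(a.position.z - b.position.z) <= a.half_extents.z + b.half_extents.z;
        }

        std::map<std::string, TrackedObject> objects_;
    };

    int main() {
        WorldModel world;
        world.UpdateObject("shelf_tray", {0.0, 0.0, 0.1}, {0.6, 0.4, 0.1});
        world.UpdateObject("part_a", {0.2, 0.1, 0.3}, {0.1, 0.1, 0.1});
        for (const auto& p : world.CollidingPairs()) {
            std::cout << p.first << " collides with " << p.second << "\n";
        }
    }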
In another project, a robot was to cut leaves off plants; I was responsible for the system design, developed the leaf detection and localization component, coordinated the team and communicated with the customer.

Roboception bin picking application using the world model (bottom left screencast overlay)

Open Robotics

Senior Software Engineer Contractor, 2016-2017

Open Robotics develops the Robot Operating System (ROS) and the Gazebo robot simulator.
I developed a collision test framework for Gazebo to help debug Gazebo’s physics engines. The framework can run multiple physics engines in parallel to compare their results and behavior. Within this project I also contributed various improvements to Gazebo.
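To illustrate the core comparison idea, here is a small self-contained sketch: two stand-in “engines” are stepped in lockstep on the same scene and their results are compared until they diverge. This is an assumption-laden illustration only, not Gazebo’s API; the EulerEngine class and all parameters are invented for this example.

    // Simplified illustration of comparing two physics engines stepped in
    // parallel on the same scene. The PhysicsEngine interface is a placeholder
    // abstraction for this sketch and is not Gazebo's actual API.
    #include <cmath>
    #include <cstdio>
    #include <memory>
    #include <vector>

    // Minimal engine interface: advance the simulation and report the height
    // of a test body (e.g. a sphere dropped onto a ground plane).
    class PhysicsEngine {
    public:
        virtual ~PhysicsEngine() = default;
        virtual void Step(double dt) = 0;
        virtual double BodyHeight() const = 0;
    };

    // Toy engine: semi-implicit Euler integration of a falling body with a ground plane.
    class EulerEngine : public PhysicsEngine {
    public:
        explicit EulerEngine(double restitution) : restitution_(restitution) {}
        void Step(double dt) override {
            velocity_ += -9.81 * dt;
            height_ += velocity_ * dt;
            if (height_ < 0.0) {            // contact with the ground plane
                height_ = 0.0;
                velocity_ = -velocity_ * restitution_;
            }
        }
        double BodyHeight() const override { return height_; }
    private:
        double height_ = 1.0, velocity_ = 0.0, restitution_;
    };

    int main() {
        // Two "engines" with slightly different contact parameters stand in for
        // e.g. ODE and Bullet running the same world in parallel.
        std::vector<std::unique_ptr<PhysicsEngine>> engines;
        engines.push_back(std::make_unique<EulerEngine>(0.50));
        engines.push_back(std::make_unique<EulerEngine>(0.55));

        const double dt = 0.001;
        for (int i = 0; i < 2000; ++i) {
            for (auto& e : engines) e->Step(dt);
            double diff = std::fabs(engines[0]->BodyHeight() - engines[1]->BodyHeight());
            if (diff > 0.05) {
                std::printf("step %d: engines diverge by %.3f m\n", i, diff);
                break;
            }
        }
    }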
In another project, I developed several modules to improve the planning and execution of grasp actions in ROS and Gazebo.
The source can be accessed on my GitHub.

Kinova Jaco grasping a cube in Gazebo


Intersect, Sydney

Course developer – cloud computing, 2015

Development of national training materials for the Australian research community as part of the National eResearch Collaboration Tools and Resources (NeCTAR) project. I developed a course on using cloud computing for research, including reading materials and video tutorials.
I previously worked as a tutor for Intersect (2013–2014), teaching introductory Unix and Excel courses as well as workshops on data management and visualization.


University of New South Wales

PhD thesis, 2011-2015

Capabilities in Heterogeneous Multi-Robot Systems

For my PhD thesis I developed a system of robot capabilities which can be used to exchange information and reason about individual robots’ capabilities. A “capability” can, for example, be a simple physical action like pushing, pulling or grasping, or a computational concept like localization. When such capabilities are parameterized (e.g. how far to push), they can be combined to describe many more complex tasks. I developed a temporal planner which, given a goal (e.g. a cube at a target location), finds an executable plan for the robot to achieve that goal. I also worked on a custom-built robot which demonstrated the execution of a transportation task using my software framework. My PhD thesis can be downloaded here, and a list of my publications is available on this page.
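As a rough illustration of the idea (not the actual framework, which used a temporal planner over a much richer capability model), the sketch below chains parameterized capabilities with a simple breadth-first search to reach a goal such as “cube at target location”. The state representation and all names are assumptions made for this example.

    // Minimal sketch of parameterized capabilities chained by a planner to
    // reach a goal such as "cube at target". The state representation,
    // capability names and the breadth-first search are illustrative only.
    #include <functional>
    #include <iostream>
    #include <queue>
    #include <set>
    #include <string>
    #include <tuple>
    #include <utility>
    #include <vector>

    // A tiny symbolic world state.
    struct State {
        std::string robot_at;
        std::string cube_at;   // a location name, or "gripper" while carried
        bool operator<(const State& o) const {
            return std::tie(robot_at, cube_at) < std::tie(o.robot_at, o.cube_at);
        }
    };

    // A parameterized capability: applicability test plus effect on the state.
    struct Capability {
        std::string name;
        std::function<bool(const State&)> applicable;
        std::function<State(const State&)> apply;
    };

    // Breadth-first search over sequences of capability applications.
    std::vector<std::string> Plan(const State& start,
                                  const std::function<bool(const State&)>& goal,
                                  const std::vector<Capability>& caps) {
        std::queue<std::pair<State, std::vector<std::string>>> open;
        std::set<State> visited{start};
        open.push({start, {}});
        while (!open.empty()) {
            auto [state, plan] = open.front();
            open.pop();
            if (goal(state)) return plan;
            for (const auto& c : caps) {
                if (!c.applicable(state)) continue;
                State next = c.apply(state);
                if (visited.insert(next).second) {
                    auto extended = plan;
                    extended.push_back(c.name);
                    open.push({next, extended});
                }
            }
        }
        return {"<no plan found>"};
    }

    int main() {
        std::vector<Capability> caps;
        // Movement capability, parameterized by target location.
        for (const std::string loc : {"shelf", "target"}) {
            caps.push_back({"move_to(" + loc + ")",
                            [loc](const State& s) { return s.robot_at != loc; },
                            [loc](State s) { s.robot_at = loc; return s; }});
        }
        caps.push_back({"grasp",
                        [](const State& s) { return s.cube_at == s.robot_at; },
                        [](State s) { s.cube_at = "gripper"; return s; }});
        caps.push_back({"release",
                        [](const State& s) { return s.cube_at == "gripper"; },
                        [](State s) { s.cube_at = s.robot_at; return s; }});

        State start{"target", "shelf"};
        auto plan = Plan(start, [](const State& s) { return s.cube_at == "target"; }, caps);
        for (const auto& step : plan) std::cout << step << "\n";
    }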



German Aerospace Center

Real-time large data visualization, 2006-2011

3D Terrain Rendering

Design and development of algorithms and source code for real-time visualization of very large 2.5D and 3D datasets, such as large terrain datasets. The aim was to achieve smooth transitions between levels of detail in real time. Tasks included researching the current state of the art and choosing the most suitable techniques to apply; I was responsible for the entire project planning, the algorithm and software design, and the implementation. The latter included out-of-core mesh preprocessing, real-time visualization, remote control of the visualization application, and other tools. All programming was done in C++ and OpenGL, using a range of open source libraries.
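One common way to drive such level-of-detail decisions is a screen-space error metric combined with a morph factor for smooth (geomorphing-style) transitions; the sketch below illustrates that general technique. The error model, thresholds and names are illustrative assumptions, not the original code.

    // Sketch of screen-space-error-driven level-of-detail (LOD) selection with
    // a morph factor for smooth transitions between levels. The error model,
    // thresholds and level count are illustrative assumptions only.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // Projected screen-space error (pixels) of a tile whose coarser
    // representation deviates by geometric_error metres, seen at 'distance'.
    double ScreenSpaceError(double geometric_error, double distance,
                            double viewport_height_px, double fov_y_rad) {
        const double k = viewport_height_px / (2.0 * std::tan(fov_y_rad / 2.0));
        return geometric_error / distance * k;
    }

    int main() {
        const double kPi = 3.14159265358979323846;
        const double viewport_h = 1080.0;
        const double fov_y = 60.0 * kPi / 180.0;
        const double max_error_px = 2.0;          // refine while the error exceeds this

        for (double distance : {5000.0, 1000.0, 200.0, 50.0}) {
            int level = 0;
            double geometric_error = 100.0;       // coarsest level; halves per finer level
            while (level < 8 && ScreenSpaceError(geometric_error, distance,
                                                 viewport_h, fov_y) > max_error_px) {
                ++level;
                geometric_error *= 0.5;
            }
            // Morph factor in [0,1]: near 1 the tile is close to switching to the
            // coarser level, so vertex heights are interpolated towards it to avoid
            // visible popping between levels.
            const double err = ScreenSpaceError(geometric_error, distance, viewport_h, fov_y);
            const double morph = std::clamp(err / max_error_px, 0.0, 1.0);
            std::printf("distance %7.1f m -> level %d, morph %.2f\n", distance, level, morph);
        }
    }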


Airbus Hamburg

Diploma Thesis, 2004-2005

I completed my undergraduate thesis in the R&D department, following an internship in the same department. The topic of the thesis was “Realistic animation of characters in a crowd simulation”; it included the development of a C++ library to perform animation mixing for the character movements. The diploma thesis was awarded as the best of its year.

Motion Mixing at Airbus
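Animation mixing in this sense blends the joint rotations of two animation clips; the sketch below shows the core per-joint operation using quaternion slerp. The structure and names are assumptions for illustration, not the original library.

    // Minimal sketch of animation mixing: blending the joint rotations of two
    // animation clips with spherical linear interpolation (slerp). Structure
    // and names are illustrative; a full library would handle whole skeletons
    // and time alignment between clips.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Quat { double w, x, y, z; };

    // Spherical linear interpolation between two unit quaternions.
    Quat Slerp(Quat a, Quat b, double t) {
        double dot = a.w * b.w + a.x * b.x + a.y * b.y + a.z * b.z;
        if (dot < 0.0) {                       // take the shorter arc
            b = {-b.w, -b.x, -b.y, -b.z};
            dot = -dot;
        }
        if (dot > 0.9995) {                    // nearly parallel: lerp and renormalize
            Quat r{a.w + t * (b.w - a.w), a.x + t * (b.x - a.x),
                   a.y + t * (b.y - a.y), a.z + t * (b.z - a.z)};
            double n = std::sqrt(r.w * r.w + r.x * r.x + r.y * r.y + r.z * r.z);
            return {r.w / n, r.x / n, r.y / n, r.z / n};
        }
        const double theta = std::acos(dot);
        const double s = std::sin(theta);
        const double wa = std::sin((1.0 - t) * theta) / s;
        const double wb = std::sin(t * theta) / s;
        return {wa * a.w + wb * b.w, wa * a.x + wb * b.x,
                wa * a.y + wb * b.y, wa * a.z + wb * b.z};
    }

    // Blend two skeleton poses (one quaternion per joint) with weight t in [0,1].
    std::vector<Quat> MixPoses(const std::vector<Quat>& pose_a,
                               const std::vector<Quat>& pose_b, double t) {
        std::vector<Quat> out(pose_a.size());
        for (size_t i = 0; i < pose_a.size(); ++i) out[i] = Slerp(pose_a[i], pose_b[i], t);
        return out;
    }

    int main() {
        // Two one-joint "poses": identity and a 90 degree rotation about Z.
        std::vector<Quat> a{{1, 0, 0, 0}};
        std::vector<Quat> b{{std::sqrt(0.5), 0, 0, std::sqrt(0.5)}};
        Quat m = MixPoses(a, b, 0.5)[0];       // expect a 45 degree rotation about Z
        std::printf("blended joint: w=%.3f z=%.3f\n", m.w, m.z);
    }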