ALTAIR Multi-Robot Search and Rescue System

Background

This was my senior design project, sponsored by L3Harris, which has run for the past three years. The goal was to design a multi-robot system. Previous senior design teams built an aerial multi-robot system and a ground multi-robot system; my group decided to combine the two into a single air-ground system. I worked in a team of six undergraduates.

Overview

Our system has a drone fly overhead and search for a person in need of rescue. The drone streams video and other sensor data to a base station computer, which performs the computationally heavy tasks, such as the image recognition used to locate the target and fly the drone toward them autonomously. Once the drone is above the distressed person, it sends the target's GPS coordinates to the computer, which in turn relays them to the ground vehicle. The ground vehicle then autonomously travels to the person to provide aid. My project focused on the identification and navigation aspects of the system, not on how aid would be provided.
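As a concrete sketch of that handoff, the base station can be as simple as a ROS node that republishes the target's GPS fix from the drone's side to the ground vehicle's side. The topic names here are illustrative assumptions, not the exact ones we used:

```python
#!/usr/bin/env python
# Minimal sketch of the base-station relay: forward the target's GPS fix
# reported by the drone to the ground vehicle. Topic names are illustrative
# assumptions, not the exact ones from our system.
import rospy
from sensor_msgs.msg import NavSatFix

def main():
    rospy.init_node('target_relay')
    goal_pub = rospy.Publisher('/ground_vehicle/target_fix', NavSatFix, queue_size=1)
    # Republish whatever fix the drone reports for the target.
    rospy.Subscriber('/drone/target_fix', NavSatFix, goal_pub.publish)
    rospy.spin()

if __name__ == '__main__':
    main()
```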

My Work

My first task was to create a computer simulation of the ground robot and its environment. Due to the COVID-19 pandemic, we were unsure to what extent we would be able to build a physical prototype. Since I had experience with ROS and Gazebo, I took charge of the simulation. We decided to use an RC car as the ground vehicle for outdoor traversal.

I successfully created a simulated environment and a simulated robot using open-source ROS resources. The simulated robot included GPS, inertial measurement unit (IMU), laser scanner, and wheel odometry sensors.
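A short script like the one below was enough to sanity-check that each simulated sensor was publishing. The topic names are common defaults for these sensor plugins and are assumptions on my part:

```python
#!/usr/bin/env python
# Quick smoke test for the simulated sensors: wait for one message from each
# topic and report success. Topic names are common defaults, assumed here,
# not necessarily the exact names in our launch files.
import rospy
from sensor_msgs.msg import NavSatFix, Imu, LaserScan
from nav_msgs.msg import Odometry

rospy.init_node('sensor_smoke_test')
for topic, msg_type in [('/fix', NavSatFix), ('/imu/data', Imu),
                        ('/scan', LaserScan), ('/odom', Odometry)]:
    msg = rospy.wait_for_message(topic, msg_type, timeout=10.0)
    rospy.loginfo('%s OK: received %s', topic, type(msg).__name__)
```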

I was also in charge of the ground vehicle's navigation algorithm. Again using open-source ROS resources and packages, I built a navigation pipeline that fused the sensors through a Kalman filter to localize the robot within a GPS coordinate frame. I integrated the ROS navigation and obstacle avoidance stack to work with GPS waypoints, and I integrated a plugin that adapted it to car-like steering.
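To illustrate the GPS-waypoint handling, here is a simplified sketch of the core coordinate math: convert both the waypoint and the robot's current fix into UTM and hand the planner the metric offset. Our actual pipeline was assembled from open-source ROS packages; the topic names, the frame, and the use of the Python utm package below are illustrative assumptions.

```python
#!/usr/bin/env python
# Simplified sketch: turn a lat/lon waypoint into a metric navigation goal
# by converting the waypoint and the robot's current GPS fix to UTM.
# Topic names, the frame, and the 'utm' package are illustrative choices.
import rospy
import utm  # pip package 'utm'
from sensor_msgs.msg import NavSatFix
from geometry_msgs.msg import PoseStamped

WAYPOINT = (29.6436, -82.3549)  # placeholder lat/lon, not a real mission point

def on_fix(fix):
    goal_e, goal_n, _, _ = utm.from_latlon(*WAYPOINT)
    robot_e, robot_n, _, _ = utm.from_latlon(fix.latitude, fix.longitude)
    goal = PoseStamped()
    goal.header.stamp = rospy.Time.now()
    # Simplification: assumes an east-north-up world frame centered on the
    # robot's current fix; the real stack maintained a consistent world frame.
    goal.header.frame_id = 'map'
    goal.pose.position.x = goal_e - robot_e  # metres east to the waypoint
    goal.pose.position.y = goal_n - robot_n  # metres north to the waypoint
    goal.pose.orientation.w = 1.0
    goal_pub.publish(goal)

rospy.init_node('gps_waypoint_goal')
goal_pub = rospy.Publisher('/move_base_simple/goal', PoseStamped, queue_size=1)
rospy.Subscriber('/fix', NavSatFix, on_fix)
rospy.spin()
```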

The result was a successful simulation of our autonomous ground robot.

With about two months left in the year, my team and I learned that we would be able to meet in person for roughly 18 hours per week to build a physical prototype. I took a major role in sensor integration during this phase. We purchased GPS and IMU sensors and a brushless motor with a built-in encoder, and we used a laser scanner that our professor already owned.

Since I was the ROS lead, I found and integrated the necessary drivers for each sensor. I then modified the navigation algorithm from the simulation to work with the physical sensors and the motor speed controller.
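The trickiest part of that adaptation was car-like steering: the ROS planner publishes generic velocity commands, while an RC car needs a forward speed and a steering angle. Below is a minimal sketch of the conversion using the standard bicycle model; the wheelbase value and topic names are my assumptions:

```python
#!/usr/bin/env python
# Minimal sketch of converting planner velocity commands (Twist) into
# car-like commands: forward speed plus a steering angle from the bicycle
# model. The wheelbase and topic names are illustrative assumptions.
import math
import rospy
from geometry_msgs.msg import Twist
from ackermann_msgs.msg import AckermannDrive

WHEELBASE = 0.33  # metres, assumed value for a 1/10-scale RC car

def on_cmd_vel(cmd):
    out = AckermannDrive()
    out.speed = cmd.linear.x
    if abs(cmd.linear.x) > 1e-3:
        # Bicycle model: steering angle = atan(wheelbase * yaw_rate / speed)
        out.steering_angle = math.atan(WHEELBASE * cmd.angular.z / cmd.linear.x)
    pub.publish(out)

rospy.init_node('twist_to_ackermann')
pub = rospy.Publisher('/ackermann_cmd', AckermannDrive, queue_size=1)
rospy.Subscriber('/cmd_vel', Twist, on_cmd_vel)
rospy.spin()
```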

I also performed power calculations for the ground robot to determine what battery would be needed to run all of the electronics and motors for at least 50 minutes.
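The calculation itself is a simple energy budget: total current draw times target runtime, plus a safety margin. A sketch with placeholder numbers (the component currents are illustrative, not our measured values):

```python
# Back-of-the-envelope battery sizing: required capacity (Ah) is the total
# current draw (A) times the runtime (h), padded with a safety margin.
# The component currents below are placeholders, not our measured values.
loads_amps = {
    'drive motor (avg)': 8.0,
    'steering servo':    1.0,
    'laser scanner':     0.5,
    'onboard computer':  2.0,
    'sensors + misc':    0.5,
}
runtime_hours = 50.0 / 60.0  # at least 50 minutes
safety_margin = 1.3          # avoid fully discharging the pack

total_amps = sum(loads_amps.values())
required_ah = total_amps * runtime_hours * safety_margin
print('Total draw: %.1f A -> required capacity: %.1f Ah' % (total_amps, required_ah))
# -> Total draw: 12.0 A -> required capacity: 13.0 Ah
```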

Results

At the end of the year, we had a working simulation of the ground robot. The physical prototype could read all of its sensors, run the navigation algorithm, and drive forward; however, there were errors with the steering motor. The drone and base station computer also successfully ran image recognition and simulated autonomous flight. We ran out of time to fix the ground vehicle's steering and to fully integrate the entire system, but I believe that with more time, unhindered by the pandemic, we would have finished the project.