SLAM Algorithm Engineer at Peng Cheng Laboratory
Currently, I am a SLAM Algorithm Engineer at Peng Cheng Laboratory. I received my Master's degree in Computer Science from Vanderbilt University, under the supervision of Professor Xenofon Koutsoukos and Professor Richard Alan Peters.
Prior to joining Vanderbilt University, I received my Bachelor's degree in Computer Science from the University of Missouri-Columbia, supervised by Professor Yi Shang.
My current research interests cover various topics in Visual-Inertial Odometry, Mobile Robot Localization, and Simultaneous Localization and Mapping (SLAM).
I am looking for Ph.D. programs starting in Fall 2020, with special interests in SLAM and Computer Vision.
In this work, we consider a stereo visual-inertial SLAM system that fuses measurements from multiple cameras to improve robustness and accuracy.
To cope with the tradeoff between real-time operation and the robustness and accuracy of the system, we split the SLAM system into a tracking client and a cloud-based optimization server.
The client runs on the mobile device: it tracks feature points in the frames of both cameras and communicates with the server. The server runs in the cloud: it handles the complex back-end optimization with multi-sensor fusion and sends the results back to the client.
This scheme greatly reduces the computational burden on the mobile device while improving robustness and accuracy through the fusion of multiple sensors' measurements, resolving the tradeoff between the two.
The open-source C++ code will be released on GitHub in March 2020.
In this project, we propose a state-of-the-art stereo visual-inertial SLAM system named Stereo Visual-Inertial Fusion (Stereo-VIF), built on the ORB-SLAM2 framework. The system includes visual-inertial alignment, visual-inertial system initialization, and local window-based tightly-coupled visual-inertial optimization.
The video shows a simple demo of the Stereo-VIF system running on the EuRoC dataset.
The open-source C++ code is available.
This project implements a visual SLAM system based on the Zed Stereo Camera, deployed on a real race car (F1/10th). The figure shows the trajectory of the RGB-D SLAM and the feature points on the track (set up at Featheringill Hall Lab 434, Vanderbilt University).
In this project, we propose a system that involves designing, building, and testing an autonomous 1/10-scale model F1/10th racecar on the NVIDIA Jetson TK1 platform for adaptive cruise control. We applied Reinforcement Learning (RL) algorithms to train the vehicle to maintain a safe distance from the human-controlled leading vehicle, and successfully deployed the whole system on a real F1/10th racecar.
Project code is available.
In this project, we opt for color detection over object detection. The system consists of a Zed 2K Stereo Camera mounted on an autonomous vehicle (F1/10th Car Platform) for tracking a moving object. The video processing, image processing, and PID control algorithms all run on the NVIDIA TK1 board.
WorldTrekker is a fitness and virtual-travel app that lets wanderlusters see the world with every step. It seeks to satisfy the needs of those who do not have the time, money, or resources to do as much traveling as they wish.