Weihan Wang

SLAM Algorithm Engineer at Peng Cheng Laboratory

Email, GitHub, LinkedIn

Currently, I am a SLAM Algorithm Engineer at Peng Cheng Laboratory. I received my Master's degree in Computer Science from Vanderbilt University, under the supervision of Professor Xenofon Koutsoukos and Professor Richard Alan Peters.

Prior to joining Vanderbilt University, I received my Bachelor's degree in Computer Science from the University of Missouri-Columbia, supervised by Professor Yi Shang.

My current research interests cover various topics in Visual-Inertial Odometry, Mobile Robot Localization, and Simultaneous Localization and Mapping (SLAM).

I am looking for Ph.D. programs starting in Fall 2020, with particular interests in SLAM and Computer Vision.

News


  • 2019.12 - Developing a robust client-server cooperative Visual SLAM system with cloud-based optimization to handle complex optimization tasks quickly.
  • 2019.12 - Started SLAM Algorithm Engineer Position at Peng Cheng Laboratory, Shenzhen, China.
  • 2019.09 - Working on developing a Stereo Visual-Inertial Odometry SLAM system. Open-source code coming soon!
  • 2019.09 - One article titled "模糊虚实界限 混合现实中的实时定位与建图 (Blurring the Line Between the Virtual and the Real: Simultaneous Localization and Mapping in Mixed Reality)" was published in Ta Kung Pao, 2019.
  • 2019.06 - Started Research Internship at Peng Cheng Laboratory, Shenzhen, China (Chinese Key Laboratory).
  • 2018.02 - One patent titled "一种无线电测向运动模拟训练系统的外部控制器 (An External Controller for a Radio Direction-Finding Sports Simulation Training System)"[P], CN201820225910.3, has been published.
  • 2017.02 - Dean's High Honor Roll of the College of Engineering at the University of Missouri.
  • 2016.12 - Founder of iOS Development Club (IDC) at University of Missouri.

Current Projects



A Robust Client-Server Cooperative Visual SLAM System with Cloud-Based Optimization


In this work, we consider a stereo visual-inertial SLAM system that fuses measurements from multiple cameras to improve robustness and accuracy. To cope with the tradeoff between real-time operation and system robustness and accuracy, we split the SLAM system into a tracking client and a cloud-based optimization server. The client is deployed on the mobile device; it tracks feature points in the frames of both cameras and communicates with the server. The server is deployed in the cloud; it handles the complex back-end optimization with multi-sensor fusion and sends the results back to the client. This scheme greatly reduces the computational burden on the mobile device while improving robustness and accuracy through the fusion of multiple sensors' measurements, resolving the tradeoff between the two.
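
A minimal sketch of the kind of data exchanged across this split is shown below. The message types, field names, and layout are illustrative assumptions only; they are not the released interface.

    #include <cstdint>
    #include <vector>

    // Hypothetical message types illustrating the client/server split.
    struct FeatureObservation {            // one tracked feature point in one camera
        uint32_t feature_id;
        int      camera_index;             // which camera of the multi-camera rig
        float    u, v;                     // pixel coordinates
    };

    struct KeyframeMessage {               // client -> server: lightweight tracking result
        uint64_t keyframe_id;
        double   timestamp;
        double   pose_estimate[7];         // initial guess: quaternion (4) + translation (3)
        std::vector<FeatureObservation> observations;
    };

    struct OptimizedStateMessage {         // server -> client: result of the cloud back-end optimization
        uint64_t keyframe_id;
        double   optimized_pose[7];        // refined quaternion + translation
        double   imu_bias[6];              // refined gyroscope + accelerometer biases
    };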

The open-source C++ code will be released on GitHub in March 2020.



Stereo Visual-Inertial Fusion (Stereo-VIF) SLAM System


In this project, we propose a state-of-the-art Stereo Visual-Inertial SLAM system named Stereo Visual-Inertial Fusion (Stereo-VIF), built on the ORB-SLAM2 framework. The system includes visual-inertial alignment, visual-inertial system initialization, and local window-based tightly coupled visual-inertial optimization.
The video shows a simple demo of running the Stereo-VIF system on the EuRoC dataset.
The open-source C++ code is available.
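
As a rough illustration of the local window-based tightly coupled optimization, the sketch below writes out the kind of joint cost minimized over a sliding window of keyframes: visual reprojection residuals plus IMU preintegration residuals. The state layout, stubbed residual terms, and weights are simplified assumptions, not the actual implementation.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Simplified placeholder state for one keyframe in the local window.
    struct KeyframeState {
        double pose[7];      // quaternion (4) + translation (3)
        double velocity[3];
        double bias[6];      // gyroscope + accelerometer biases
    };

    // Stub residual terms: in the real system these come from feature
    // reprojection and IMU preintegration between consecutive keyframes.
    double reprojection_error(const KeyframeState&, int /*landmark_id*/) { return 0.0; }
    double imu_preintegration_error(const KeyframeState&, const KeyframeState&) { return 0.0; }

    // Joint cost over the local window: visual term + inertial term,
    // minimized with respect to all keyframe states in the window.
    double local_window_cost(const std::vector<KeyframeState>& window,
                             const std::vector<int>& landmark_ids,
                             double visual_weight, double inertial_weight) {
        double cost = 0.0;
        for (const auto& kf : window)
            for (int id : landmark_ids)
                cost += visual_weight * std::pow(reprojection_error(kf, id), 2.0);
        for (std::size_t i = 0; i + 1 < window.size(); ++i)
            cost += inertial_weight *
                    std::pow(imu_preintegration_error(window[i], window[i + 1]), 2.0);
        return cost;
    }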



Previous Projects


RGB-D SLAM Application

This project implements a visual SLAM system based on the Zed Stereo Camera, deployed on a real race car (F1/10th). The figure shows the trajectory of the RGB-D SLAM and the feature points on the track (set up at Featheringill Hall Lab 434, Vanderbilt University).

Thesis and Project Code are available.

Adaptive Cruise Control System

In this project, we design, build, and test an autonomous 1/10-scale F1/10th racecar for adaptive cruise control using the NVIDIA Jetson TK1 platform. We applied reinforcement learning (RL) algorithms to train the vehicle to maintain a safe distance behind a human-controlled leading vehicle, and successfully deployed the whole system on a real F1/10th racecar.
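
Purely as an illustration, the sketch below shows one plausible reward shaping for such a following-distance policy; the distances, penalty value, and function name are assumptions, not the reward actually used in the project.

    #include <cmath>
    #include <cstdio>

    // Hypothetical reward: highest when the measured gap to the leading vehicle
    // stays near a target following distance, with a large penalty for unsafe gaps.
    double following_reward(double gap_m, double target_gap_m, double min_safe_gap_m) {
        if (gap_m < min_safe_gap_m)
            return -10.0;                        // near-collision: strong penalty
        return -std::fabs(gap_m - target_gap_m); // closer to the target gap -> higher reward
    }

    int main() {
        // Example: gap measured by the forward range sensor, in meters.
        std::printf("reward at 0.9 m: %.2f\n", following_reward(0.9, 1.0, 0.3));
        return 0;
    }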

Project Code is available.

Moving Object Detection and Tracking

In this project, we opt for color detection over object detection. The system consists of a Zed 2K Stereo Camera mounted on an autonomous vehicle (F1/10th car platform) to track a moving object. The video processing, image processing, and PID control algorithms all run on the NVIDIA TK1 board.
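
A minimal sketch of the PID control step is shown below; the error signal (horizontal offset of the detected color blob from the image center, in pixels) and the gains are illustrative assumptions, not the values tuned on the car.

    // Minimal PID controller of the kind used to steer toward the tracked object.
    struct PID {
        double kp, ki, kd;
        double integral = 0.0;
        double prev_error = 0.0;

        double update(double error, double dt) {
            integral += error * dt;
            const double derivative = (error - prev_error) / dt;
            prev_error = error;
            return kp * error + ki * integral + kd * derivative;
        }
    };

    // Example usage at a 30 Hz frame rate (gains are placeholders):
    //   PID steer{0.005, 0.0, 0.001};
    //   double command = steer.update(blob_center_x - image_center_x, 1.0 / 30.0);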

Proposal and Project Code are available.

WorldTrekker Application

WorldTrekker is a fitness and virtual travel app that lets wanderlusters see the world with every step. It is designed for those who do not have the time, money, or resources to travel as much as they would like.

Slides and Project Code are available.