|ML Scoring Functions
Molecular modeling has become an essential tool in the early stages of drug discovery and development. Molecular docking, scoring, and virtual screening are three such modeling tasks of particular importance in computer-aided drug discovery. They are used to computationally simulate the interaction between small drug-like molecules, known as ligands, and a target protein whose function is to be modulated. Scoring functions (SFs) are typically employed to predict the binding conformation (docking task), binding affinity (scoring task), and binary activity level (screening task) of ligands against a critical protein target in a disease's pathway. In most molecular docking software packages available today, a generic binding-affinity-based (BA-based) SF is invoked for all three tasks to solve three different, but related, prediction problems. The limited predictive accuracy of such SFs in these three tasks has been a major roadblock toward cost-effective drug discovery. The machine-learning (ML), task-specific SFs BT-Score, BT-Dock, and BT-Screen provided in this project significantly improve upon the performance of BA-based approaches.
|Descriptor Data Bank: Protein Ligand Interactions
The Descriptor Data Bank (DDB) is a data-driven platform on the cloud for facilitating multiperspective modeling of protein-ligand (PL) interactions. DDB is an open-access hub for depositing, hosting, executing, and sharing descriptor extraction tools and data for a large number of interaction modeling hypotheses. The platform also implements a machine-learning (ML) toolbox for automatic descriptor filtering and analysis, and for scoring function (SF) fitting and prediction. The descriptor filtering module filters out irrelevant and/or noisy descriptors to produce a compact subset of all available features.
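As an illustration of descriptor filtering, the sketch below drops near-constant and highly correlated descriptor columns from a feature matrix. The function name, thresholds, and strategy are assumptions for this example; DDB's actual filtering module may use different criteria.

```python
import numpy as np

def filter_descriptors(X, var_threshold=1e-8, corr_threshold=0.95):
    """Drop near-constant descriptors, then drop one of each pair of
    highly correlated descriptors; return indices of kept columns."""
    # Remove columns whose variance is effectively zero (uninformative).
    keep = np.where(X.var(axis=0) > var_threshold)[0]
    X = X[:, keep]
    # Greedily keep columns that are not too correlated with kept ones.
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < corr_threshold for k in selected):
            selected.append(j)
    return keep[selected]
```

A compact subset like this both speeds up SF fitting and reduces overfitting to redundant descriptors.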
|DCMFlow: Nested Logit Model Estimation
DCMFlow is a Python module that estimates the parameters of a nested logit model with an arbitrary nesting structure (any depth and number of choices). We use the open-source library TensorFlow to express the model as a computational graph so that its parameters can be estimated efficiently on CPUs and GPUs, using optimizers suited to both small and large datasets. The model is designed to be very simple to build and estimate, yet flexible enough to express any linear utility function, as this mini-tutorial shows.
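To make the model concrete, here is a minimal NumPy sketch of two-level nested logit choice probabilities, independent of DCMFlow's actual API (the function name and argument layout are assumptions). Each nest m has a scale parameter λ_m; with all λ_m = 1 the model collapses to plain multinomial logit.

```python
import numpy as np

def nested_logit_probs(V, nests, lambdas):
    """Two-level nested logit choice probabilities.
    V: utility per alternative; nests: list of index lists, one per nest;
    lambdas: one scale parameter per nest (0 < lambda <= 1)."""
    V = np.asarray(V, float)
    lambdas = np.asarray(lambdas, float)
    # Inclusive value (log-sum) of each nest.
    iv = np.array([np.log(np.sum(np.exp(V[idx] / lam)))
                   for idx, lam in zip(nests, lambdas)])
    # Probability of choosing each nest.
    p_nest = np.exp(lambdas * iv)
    p_nest /= p_nest.sum()
    # Probability of each alternative = P(nest) * P(alternative | nest).
    probs = np.zeros_like(V)
    for idx, lam, pn in zip(nests, lambdas, p_nest):
        within = np.exp(V[idx] / lam)
        probs[idx] = pn * within / within.sum()
    return probs
```

In DCMFlow the same quantities would be expressed as TensorFlow graph operations so gradients of the log-likelihood with respect to the utility coefficients and nest parameters come for free.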
|Self-Driving Car System Integration
The goal of the project is to fully implement with ROS the main modules of an autonomous vehicle: Perception, Planning, and Control, which will be tested on Udacity's self-driving car 'Carla' around a test track using waypoint navigation. For perception, an SSD classifier is used to detect the traffic light state. A PID controller is used for heading and acceleration control.
The goal of this project is to identify roads in pictures using semantic segmentation, where pixel-wise classification is performed. A Fully Convolutional Network (FCN) will be trained to label each pixel in a given picture as road or not-road. The FCN includes encoder and decoder sub-networks. The encoder network is based on the FCN-8 architecture developed at UC Berkeley. The encoder for FCN-8 is the VGG16 model pretrained on ImageNet for classification, with the fully-connected layers replaced by 1-by-1 convolutions. The decoder portion of FCN-8 is constructed such that its input is upsampled back to the original image size.
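The two shape-level ideas here — replacing fully-connected layers with 1-by-1 convolutions, and upsampling back to the input resolution — can be sketched in NumPy. Note this is a toy illustration of the tensor shapes only: FCN-8 actually uses learned transposed convolutions, not nearest-neighbor upsampling, and the function names are assumptions.

```python
import numpy as np

def conv1x1(feature_map, weights):
    """A 1-by-1 convolution is a per-pixel linear map over channels:
    (H, W, C_in) x (C_in, C_out) -> (H, W, C_out)."""
    return feature_map @ weights

def upsample_nn(x, factor):
    """Nearest-neighbor upsampling to illustrate the shape behavior;
    FCN-8 learns this step with transposed convolutions."""
    return x.repeat(factor, axis=0).repeat(factor, axis=1)
```

Because 1-by-1 convolutions preserve spatial dimensions, the network keeps a per-pixel prediction all the way through, which is what makes pixel-wise road/not-road classification possible.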
|Path Planning for Autonomous Driving
In this project, the goal is to safely navigate around a virtual highway with other traffic that is driving within +/-10 MPH of the 50 MPH speed limit. The car's localization and sensor fusion data are provided, along with a sparse map list of waypoints around the highway. The car should try to go as close as possible to the 50 MPH speed limit, which means passing slower traffic when possible; note that other cars will try to change lanes too. The car should avoid hitting other cars at all costs and stay inside the marked road lanes at all times, unless it is going from one lane to another. The car should be able to make one complete loop around the 6946 m highway; since the car is trying to go 50 MPH, one loop should take a little over 5 minutes. Also, the car should not experience a total acceleration over 10 m/s^2 or a jerk greater than 10 m/s^3.
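The acceleration and jerk limits can be checked numerically by finite-differencing a candidate trajectory, which is a useful sanity test when tuning the planner. This is an illustrative helper, not part of the project's actual code.

```python
import numpy as np

def within_comfort_limits(xs, dt, a_max=10.0, j_max=10.0):
    """Finite-difference check that a 1-D position trace (meters,
    sampled every dt seconds) respects the acceleration (m/s^2)
    and jerk (m/s^3) limits."""
    v = np.diff(xs) / dt   # velocity
    a = np.diff(v) / dt    # acceleration
    j = np.diff(a) / dt    # jerk
    return np.all(np.abs(a) <= a_max) and np.all(np.abs(j) <= j_max)
```

In practice the planner enforces these limits per lane-keeping or lane-change maneuver, e.g. by spacing trajectory points so velocity changes stay gradual.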
|Model Predictive Control
In this project, we implement Model Predictive Control to drive a car autonomously around a track. As the car drives around the track, we are given a stream of waypoints along with the car's position (in the world coordinate system), its throttle, speed, steering angle, and heading. The task in this project is to first model the car's motion using the kinematic bicycle model, then use this model to build and tune an MPC controller that optimally throttles and steers the car so that it follows the track safely and with maximum comfort for its riders. In addition, the car's model and its controller must be implemented efficiently to ensure real-time execution. We also need to calculate the cross-track error and account for a 100 millisecond latency between actuation commands, on top of the connection latency.
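The kinematic bicycle model's update equations are simple enough to sketch directly. The Lf value below (distance from the center of mass to the front axle) is an assumption for illustration; the project tunes it for the actual vehicle.

```python
import math

def bicycle_step(x, y, psi, v, delta, a, dt, Lf=2.67):
    """One step of the kinematic bicycle model.
    State: position (x, y), heading psi, speed v.
    Controls: steering angle delta, acceleration a."""
    x   += v * math.cos(psi) * dt
    y   += v * math.sin(psi) * dt
    psi += v / Lf * delta * dt
    v   += a * dt
    return x, y, psi, v
```

The 100 ms actuation latency can be handled by propagating the current state forward through this same model for 0.1 s before handing it to the MPC optimizer, so the solver plans from where the car will actually be when the command takes effect.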
|PID Controller for Steering and Acceleration
The goal of this project is to control a vehicle driving on a road, keeping it as close to the center of the road as possible. We utilize a PID controller to achieve this goal. We use a simulator of a car driving on a road and a real-time stream of its speed, cross-track error (CTE), and steering angle. The simulator allows the user to control the car through its throttle and steering angle values, which are provided by our PID controllers.
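A PID controller is compact enough to sketch in full. The gains below are placeholders to be tuned (e.g. manually or with twiddle/coordinate descent); the sign convention makes the steering command oppose the cross-track error.

```python
class PID:
    """Minimal PID controller for steering from cross-track error."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        # P acts on the current error, I on its accumulation (steady-state
        # bias), D on its rate of change (damps oscillation).
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return -(self.kp * error + self.ki * self.integral + self.kd * deriv)
```

One controller instance drives steering from the CTE; a second instance can drive the throttle from the speed error.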
|Lane Line Detection
Detecting lane lines on roads is very important for autonomous driving from SAE Level 1 through Level 5. I use the computer vision library OpenCV in this project to perform camera calibration, color and gradient thresholding, and the bird's-eye view transformation. I then use the histogram and sliding-windows method to identify left and right line pixel positions. This is followed by curvature and lane center calculation.
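The histogram step can be sketched concisely: summing the bottom half of the thresholded bird's-eye image column-wise produces two peaks where the lane lines meet the car, and those peaks seed the sliding-window search. Function and variable names here are illustrative.

```python
import numpy as np

def find_lane_bases(binary_warped):
    """Locate the left/right lane-line base columns from the column-sum
    histogram of the bottom half of a binary bird's-eye image."""
    h, w = binary_warped.shape
    # Lane pixels are 1s; summing columns gives tall peaks at the lines.
    histogram = binary_warped[h // 2:, :].sum(axis=0)
    midpoint = w // 2
    left_base = int(np.argmax(histogram[:midpoint]))
    right_base = int(midpoint + np.argmax(histogram[midpoint:]))
    return left_base, right_base
```

From these two base columns, windows are stacked upward and re-centered on the mean of the pixels they capture, and a second-order polynomial is fit to each line for the curvature calculation.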
|Behavioral Cloning/Imitation Learning
The objective of the Behavioral Cloning Project is to teach a car to drive by itself in a simulator by cloning the actions that you, as a human, execute in order to steer the car. At a very high level, we generate sample data consisting of images and the corresponding steering angles, which are used to train a model to steer the car around the track.
|Traffic Sign Classification
In this project, the task is to build a CNN classifier that recognizes traffic signs. The model is trained on the German Traffic Sign Dataset so that it can decode traffic signs from natural images.
|Robot Localization using Particle Filters
Your robot has been kidnapped and transported to a new location! Luckily it has a map of this location, a (noisy) GPS estimate of its initial location, and lots of (noisy) sensor and control data. In this project, we implement a 2-dimensional particle filter in C++. The particle filter is given a map and some initial localization information (analogous to what a GPS would provide). At each time step, the filter will also get observation and control data.
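The filter's per-timestep cycle — predict from the control data, weight particles by observation likelihood, resample — can be sketched in Python (the project itself is in C++ and tracks 2-D position plus heading; this 1-D version, with assumed noise parameters, only illustrates the algorithm).

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, control, observation,
                         motion_std=0.1, obs_std=0.5):
    """One predict-weight-resample cycle of a 1-D particle filter."""
    # Predict: apply the control with additive motion noise.
    particles = particles + control + rng.normal(0.0, motion_std, particles.size)
    # Weight: Gaussian likelihood of the observation given each particle.
    w = np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    w /= w.sum()
    # Resample particles in proportion to their weights.
    idx = rng.choice(particles.size, size=particles.size, p=w)
    return particles[idx]
```

In the 2-D project, the observation step additionally associates each sensed landmark with the nearest map landmark before computing the multivariate Gaussian weight.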
|Vehicle State Estimation using Unscented Kalman Filter
In this project we utilize an Unscented Kalman Filter to estimate the state of a moving object of interest with noisy lidar and radar measurements.
A provided simulator generates noisy RADAR and LIDAR measurements of the position and velocity of an object, and the Unscented Kalman Filter must fuse these measurements to predict the position of the object. The communication between the simulator and the UKF is done using WebSocket. The project is implemented in C++.
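The step that distinguishes the UKF from the EKF is propagating a small set of deterministically chosen sigma points instead of linearizing. A Python sketch of the standard (non-augmented) sigma-point generation follows; the project's C++ code uses an augmented state to include process noise, and lam = 3 - n is just the common default choice.

```python
import numpy as np

def sigma_points(x, P, lam=None):
    """Generate the 2n+1 sigma points for state mean x and covariance P."""
    n = x.size
    if lam is None:
        lam = 3.0 - n
    # Matrix square root of (lam + n) * P via Cholesky decomposition.
    A = np.linalg.cholesky((lam + n) * P)
    pts = [x]
    for i in range(n):
        pts.append(x + A[:, i])
        pts.append(x - A[:, i])
    return np.array(pts)
```

Each sigma point is pushed through the nonlinear process and measurement models, and the predicted mean and covariance are recovered as weighted sums, which is how the RADAR's nonlinear (range, bearing, range-rate) measurements are fused without Jacobians.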