My Background

I grew up in Potomac, Maryland, and attended the California Institute of Technology (Caltech), where I graduated in June 2016 with a B.S. in Computer Science with honors. I then attended Carnegie Mellon University, graduating with a master's degree in Computer Science/Robotics in August 2018. I am currently employed as a Robotics Engineer at Carnegie Mellon's National Robotics Engineering Center (NREC) in Pittsburgh, PA.

My Interests

Artificial Intelligence

Machine Learning

Robotics

Computer Vision

Reinforcement Learning

Finance/Economics

Contact Me

Email: chazard@andrew.cmu.edu

Cell phone: 301-803-9939

My Resume (pdf)


Current/Ongoing Projects

Visual Question Answering with Common Sense Knowledge and Reasoning

This is a project I have been working on that has gone through multiple iterations. Originally inspired by Ask Me Anything (https://arxiv.org/abs/1511.06973), the idea of the project is to incorporate knowledge from an external knowledge base into the visual question answering process. Most VQA approaches focus on the purely visual aspects of the task, with little attention to incorporating common sense knowledge ("a human is an animal", "a bus is for transportation", etc.) into reasoning tasks to build a truly multi-modal system. Training a machine to reason properly about these types of questions is difficult because datasets are noisy, the correct reasoning process is unknown a priori, and, most importantly, the best way to represent and fuse knowledge is still undetermined.


My goal is to develop a human-interpretable deep learning model that can perform first-order-logic-based reasoning as well as utilize common knowledge to answer questions about an image. To address this, I have gone through several approaches, including compositional models (e.g. neural module networks: https://arxiv.org/abs/1511.02799) with additional query options for information retrieval, and combining a Neural Turing Machine (https://arxiv.org/abs/1410.5401) with an iterative querier based on a scene graph (inspired by http://vision.stanford.edu/pdf/zhu2017cvpr.pdf) and a gated graph neural network (https://arxiv.org/abs/1511.05493).


My most recent approach works with scene graphs extracted from an image to reason about objects at a more abstract level: a neural network model is trained to label the graph nodes with encodings that mimic the assignment of logical symbols, followed by gated graph neural network propagation. Using this type of model as a backbone, another module learns to retrieve database information for nodes it iteratively selects, building a more complete knowledge graph conditioned on the question.
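
To make the propagation step concrete, here is a minimal sketch of one round of gated graph neural network message passing over a scene graph, following the GRU-style update from the GGNN paper linked above. The tensor shapes, layer sizes, and PyTorch implementation details are simplifications of my own, not the actual model.

import torch
import torch.nn as nn

class GGNNLayer(nn.Module):
    """One round of gated graph neural network propagation.

    Nodes exchange messages along scene-graph edges, then update their
    hidden states with a GRU cell. Shapes here are a simplification.
    """

    def __init__(self, hidden_dim):
        super().__init__()
        self.message_fn = nn.Linear(hidden_dim, hidden_dim)   # edge message transform
        self.update_fn = nn.GRUCell(hidden_dim, hidden_dim)   # gated node update

    def forward(self, node_states, adjacency):
        # node_states: (num_nodes, hidden_dim) symbol-like encodings per node
        # adjacency:   (num_nodes, num_nodes) scene-graph connectivity (0/1)
        messages = adjacency @ self.message_fn(node_states)   # aggregate neighbor messages
        return self.update_fn(messages, node_states)          # GRU-gated state update

# Toy usage: 5 scene-graph nodes, 3 propagation rounds.
nodes = torch.randn(5, 64)
adj = (torch.rand(5, 5) > 0.7).float()
layer = GGNNLayer(64)
for _ in range(3):
    nodes = layer(nodes, adj)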


In collaboration with Jean Oh (http://www.cs.cmu.edu/~./jeanoh/ )


Relevant datasets:

Visual Genome: https://visualgenome.org/static/paper/Visual_Genome.pdf

Visual7W: https://arxiv.org/abs/1511.03416

CLEVR dataset for semantic reasoning: CLEVR_A_Diagnostic_CVPR_2017_paper.pdf 


Multi-agent reinforcement learning to simulate basketball games

The idea of this project is to simulate a game of basketball by learning team strategies through self-play in an environment that mimics the game at a high level. The individual players are represented as dots on a 2D grid resembling a court and are tasked with learning controllers that tell them where to move under realistic yet simple physics constraints. Outcomes of player interactions, such as steals and pass success rates, are determined by regressions on average NBA statistics for these events, yielding a simple simulation of the game.
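
As an illustration only, the snippet below samples a steal outcome from a logistic regression whose coefficients would be fit to league-average NBA event statistics. The feature names and coefficient values here are hypothetical placeholders, not the project's actual regressions.

import numpy as np

def steal_probability(defender_distance, pass_length, coeffs=(-1.2, -0.8, 0.3)):
    # Logistic model over a couple of hypothetical game-state features.
    bias, w_dist, w_len = coeffs
    logit = bias + w_dist * defender_distance + w_len * pass_length
    return 1.0 / (1.0 + np.exp(-logit))

def resolve_pass(rng, defender_distance, pass_length):
    # Bernoulli draw decides whether the pass is stolen in this simulation step.
    return rng.random() < steal_probability(defender_distance, pass_length)

rng = np.random.default_rng(0)
print(resolve_pass(rng, defender_distance=1.5, pass_length=4.0))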


Using this simulation framework, we learn multi-agent strategies for coordinated movement with a social attention network backbone (https://arxiv.org/abs/1710.04689), trained using PPO (proximal policy optimization) with reward shaping to encourage legal play.
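
As a rough sketch of the training objective, the snippet below shows the standard PPO clipped surrogate loss together with the kind of shaping terms used to discourage illegal play. The specific shaping terms and weights are illustrative assumptions, not the project's actual reward function.

import torch

def ppo_clipped_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    # Standard PPO clipped surrogate objective; shown only to make the setup concrete.
    ratio = torch.exp(log_probs_new - log_probs_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

def shaped_reward(points_scored, ball_progress, out_of_bounds, shot_clock_violation):
    # Hypothetical shaping: reward scoring and upcourt progress, penalize illegal play.
    reward = 2.0 * points_scored + 0.01 * ball_progress
    if out_of_bounds:
        reward -= 1.0
    if shot_clock_violation:
        reward -= 1.0
    return reward

loss = ppo_clipped_loss(torch.log(torch.tensor([0.30, 0.50])),
                        torch.log(torch.tensor([0.25, 0.55])),
                        advantages=torch.tensor([1.0, -0.5]))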


In the future, I plan to customize the team strategy according to individual players' skills (represented as coefficients in the play-outcome regressions, based on each player's official NBA record) using a meta-learning framework such as model-agnostic meta-learning (MAML: https://arxiv.org/pdf/1703.03400.pdf) on top of the baseline model for a generic team. This sort of system could potentially be used to judge matchups between teams and determine fair betting odds.

Football (not soccer) state space simulation for predicting score spreads

In this project, I developed a state space simulation model for football games that uses stochastic transitions based on historical NFL data and takes into account the players involved in each in-game event. To calculate accurate probability distributions for in-game event outcomes like "probability of kicking a field goal" or "number of yards gained on a passing play", I use Bayesian regressions with features drawn from the game state and priors drawn from the historical play-by-play NFL records of the players involved. Player skills are represented as coefficients in these regression models (structured as Bayes nets), which can then be used to compute explicit posterior distributions over play outcomes.
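
To illustrate the form these models take, here is a minimal conjugate Bayesian linear regression for passing-play yardage, with a prior centered on a player's historical averages. The features, prior values, and noise level are hypothetical placeholders, not the project's actual specification.

import numpy as np

def posterior_coefficients(X, y, prior_mean, prior_cov, noise_var):
    # Standard conjugate update for Bayesian linear regression with known noise variance.
    prior_precision = np.linalg.inv(prior_cov)
    post_cov = np.linalg.inv(prior_precision + X.T @ X / noise_var)
    post_mean = post_cov @ (prior_precision @ prior_mean + X.T @ y / noise_var)
    return post_mean, post_cov

# Hypothetical features: [intercept, yards_to_go, defensive_rank]
X = np.array([[1.0, 10.0, 5.0],
              [1.0, 3.0, 12.0],
              [1.0, 7.0, 20.0]])
y = np.array([8.0, 2.0, 11.0])           # observed passing yards on those plays
prior_mean = np.array([6.0, 0.1, 0.05])  # e.g. centered on the player's career record
prior_cov = np.eye(3) * 4.0
mean, cov = posterior_coefficients(X, y, prior_mean, prior_cov, noise_var=9.0)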


Via dynamic programming, I calculate the value of each game state (yard line, down, time on the clock, etc.) and use those values to compute Nash equilibria for play calling (e.g. whether a team should run, pass, kick, or punt), simulating high-level coaching decisions. With this framework, we can repeatedly simulate entire games to estimate the distribution of score spreads and compare it to posted game odds.
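
The play-calling step can be viewed as a small zero-sum matrix game between offense and defense, where each entry is the expected value of the resulting game state. Below is a sketch that solves such a game with linear programming; the payoff numbers and the two-by-two action sets are invented purely for illustration.

import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(payoff):
    # Shift payoffs positive so the standard LP for the row (offense) player applies:
    # maximize v s.t. shifted.T @ x >= v, sum(x) = 1, x >= 0, rewritten with u = x / v
    # as: minimize sum(u) s.t. shifted.T @ u >= 1, u >= 0.
    shifted = payoff - payoff.min() + 1.0
    n_rows = shifted.shape[0]
    res = linprog(c=np.ones(n_rows),
                  A_ub=-shifted.T, b_ub=-np.ones(shifted.shape[1]),
                  bounds=[(0, None)] * n_rows)
    strategy = res.x / res.x.sum()                     # offense's mixed strategy
    value = 1.0 / res.x.sum() + payoff.min() - 1.0     # undo the shift to get the game value
    return strategy, value

# Rows: offense calls (run, pass); columns: defense calls (stack the box, play coverage).
payoff = np.array([[0.2, 0.6],
                   [0.7, 0.1]])
mix, value = solve_zero_sum(payoff)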


This book inspired my interest in the topic: Wayne Winston's "Mathletics" (https://www.amazon.com/Mathletics).


Past Projects

Automated Design of Robotic Hands

This is the main project from my master's research (see my publications below). The idea was to build a system capable of automating the design of special-purpose manipulators for dexterous manipulation tasks. Specialized manipulators are easier to control, more robust for the given task than generic manipulators, and cheaper to make. The system I created could take a high-level goal specification for a manipulation and translate it into a working 3D-printable design for that manipulation without any human design effort.
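
At a high level, the design search can be framed as black-box optimization over hand morphology parameters, as in the evolutionary-strategies variant of this work listed in my publications below. The sketch here is a generic elitist ES loop with a placeholder scoring function and made-up parameters, not the actual system.

import numpy as np

def simulate_task_score(params):
    # Placeholder objective: the real system scores a design by simulating the
    # target in-hand manipulation; here we simply prefer parameters near a target.
    target = np.linspace(0.5, 1.5, params.size)
    return -np.sum((params - target) ** 2)

def evolve_design(dim=8, population=32, sigma=0.1, iterations=200, seed=0):
    rng = np.random.default_rng(seed)
    best = rng.uniform(0.5, 1.5, dim)                  # initial morphology guess
    best_score = simulate_task_score(best)
    for _ in range(iterations):
        candidates = best + sigma * rng.standard_normal((population, dim))
        scores = np.array([simulate_task_score(c) for c in candidates])
        if scores.max() > best_score:                  # keep the best candidate (elitist ES)
            best, best_score = candidates[scores.argmax()], scores.max()
    return best, best_score

design, score = evolve_design()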


The eventual goal of this research is to cover the space of dexterous manipulations well enough to build a hand for any given manipulation task. From there, we can build hands capable of multiple related tasks, and so on, until we arrive at a general-purpose manipulator capable of all manipulations.


Advised by Nancy Pollard http://graphics.cs.cmu.edu/nsp/index.html and Stelian Coros http://crl.ethz.ch/coros.html 

Market Microstructure Models for the Composition of Informed vs. Uninformed Traders

In this project, I took historical stock market trading data (prices and volumes) and estimated the composition of informed vs. uninformed traders using a Kyle-type market microstructure model (kyle1985.pdf). From there, I built my own model with extrapolative traders, taking inspiration from Lawrence Jin's X-CAPM model (jin-xcapm.pdf), along with an MLE method for estimating model coefficients to capture factors like the trend-ability of an asset.
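
As a rough illustration of the estimation idea: in a Kyle-style model the market maker prices linearly in net order flow, and the fitted price-impact coefficient (Kyle's lambda) reflects the mix of informed and uninformed trading. Under Gaussian noise, the MLE of lambda reduces to the least-squares fit below; the synthetic data are purely for demonstration.

import numpy as np

rng = np.random.default_rng(1)
true_lambda = 0.05
order_flow = rng.normal(0.0, 1000.0, size=500)    # signed net volume per interval
price_change = true_lambda * order_flow + rng.normal(0.0, 0.5, size=500)

# MLE of the price-impact coefficient under Gaussian noise (simple least squares).
lambda_hat = (order_flow @ price_change) / (order_flow @ order_flow)
print(f"estimated price impact (Kyle's lambda): {lambda_hat:.4f}")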


Side note: one day I plan on revisiting these topics in finance to fuse market microstructure models with trend-based trading strategies and develop my own trading machine. In doing so, I am going to approach the problem from the angle of automated machine learning, building a machine that continually develops its own trading strategies.


Advised by Ben Gillen: https://www.bengillen.com/ 

Learning Market Timing Models for Stock Market Trading

I developed a machine learning pipeline that clustered stocks into groups based on price correlation and used common stock market technical indicators to train models that predict whether a given asset is currently in a trend state (that is, a consistent and protracted significant price move during a trading period). Using trend indicators built over different time horizons, as well as estimates of their uncertainty, I constructed stock market trading strategies via a graph-based genetic programming optimization that evolved a set of complexity-regularized trading rules, including when to enter a trade, when to take losses or profits, and how much to invest.
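
To give a flavor of the first stage of the pipeline, here is a minimal sketch that groups assets by return correlation using hierarchical clustering on a correlation-distance matrix. The random returns, ticker count, and number of clusters are placeholders standing in for real price data.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
returns = rng.normal(0.0, 0.01, size=(250, 12))   # 250 days x 12 hypothetical tickers

corr = np.corrcoef(returns, rowvar=False)         # pairwise return correlations
distance = 1.0 - corr                             # correlated assets are "close"
np.fill_diagonal(distance, 0.0)
condensed = squareform(distance, checks=False)    # condensed form for scipy's linkage

tree = linkage(condensed, method="average")
labels = fcluster(tree, t=4, criterion="maxclust")  # cut the tree into 4 groups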


 Advised by Ben Gillen: https://www.bengillen.com/  

Robot Boxing Dummy

In undergrad, I built a 3D-printed robot arm based on the InMoov humanoid robot (http://inmoov.fr/) and attached it to a BOB punching dummy equipped with a pair of stereo cameras in its head, creating a vision-enabled prototype boxing robot that would block the user's punches. By tracking the user's boxing glove, it could classify each punch based on its angle, then use that information to predict the user's future punch sequence and try to block it.
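
As a toy illustration of the prediction step, the sketch below treats the classified punches (e.g. jab, cross, hook) as a sequence and predicts the next punch with a first-order Markov model of observed transitions. The punch labels and the Markov assumption are my simplifications for illustration, not the original implementation.

from collections import Counter, defaultdict

def build_transition_counts(history):
    # Count how often each punch follows each other punch.
    counts = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(history, counts):
    last = history[-1]
    if not counts[last]:
        return None                                  # no data for this punch yet
    return counts[last].most_common(1)[0][0]         # most frequent follow-up punch

history = ["jab", "jab", "cross", "jab", "cross", "hook", "jab", "cross"]
counts = build_transition_counts(history)
print(predict_next(history, counts))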

My Publications

"Automated Design of Robotic Hands for In-Hand Manipulation Tasks"

Christopher Hazard, Stelian Coros, Nancy Pollard


Published in the International Journal of Humanoid Robotics (IJHR) 

December 2019

"Automated Design of Simple and Robust Manipulators for Dexterous In-Hand Manipulation Tasks using Evolutionary Strategies"

Andre Meixner, Christopher Hazard, Nancy Pollard


Presented at the 2019 IEEE-RAS International Conference on Humanoid Robots

October 2019

"Automated Design of Manipulators for In-Hand Tasks"

Christopher Hazard, Nancy Pollard, and Stelian Coros


Presented at the 2018 IEEE-RAS International Conference on Humanoid Robots

November 2018

Received Best Oral Paper Award Finalist (top 5 papers at the conference)

Master's Thesis

Automated Design of Manipulators for In-Hand Tasks

by: Christopher Hazard

Carnegie Mellon University