National Biomechanics Day
introduction to computer vision
Every year, SRALab hosts local high school students and provides bite-sized summaries of the different biomechanics research projects going on in the organization. I've previously helped some students from CBM present on myoelectric control for prosthetic devices, but this year I decided to show the students something more related to my own work.
Way back at WashU, my friend Taylor and I created a fun tool that puts sunglasses on any picture. Since this type of computer vision now permeates the world around us (Face ID, Snapchat filters, face tracking), I thought it would be cool to introduce students to how these models are trained, highlight their utility for our biomechanics research, and discuss some of their limitations for people with disabilities.
Since data is such an important part of training machine learning models, I created a tool that let the students annotate the location of a person's eyes in images (some of their teachers'). This way, they could really understand what goes into creating these models.
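The actual tool was built with Streamlit and Datajoint, but the core idea is simple: each click becomes a labeled record. Here is a minimal, hypothetical sketch of what one annotation record might look like, with pixel clicks normalized to fractions of the image size so the eventual model's targets are independent of resolution (the function name and record fields are illustrative, not the real schema):

```python
def make_annotation(image_id, width, height, left_eye, right_eye):
    """Turn two pixel clicks into a normalized annotation record.

    left_eye / right_eye are (x, y) pixel coordinates; the stored
    values are fractions of the image width and height in [0, 1).
    """
    def normalize(xy):
        x, y = xy
        # reject clicks that land outside the image bounds
        if not (0 <= x < width and 0 <= y < height):
            raise ValueError("click outside the image")
        return (x / width, y / height)

    return {
        "image_id": image_id,
        "left_eye": normalize(left_eye),
        "right_eye": normalize(right_eye),
    }
```

A record like `make_annotation("teacher_01", 640, 480, (200, 150), (320, 150))` can then be inserted into the database and later paired with its image as a training example.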
I stored all of their annotations in a database and, in real time, trained a CNN that takes an image and outputs the coordinates of the eyes. Through the training process they could watch the model start out performing quite poorly at the task and then improve with feedback (from gradient descent).
You can find my presentation here. Here is the repository for the data annotation tool built with Streamlit, the database for storing the annotations using Datajoint, and blazing-fast CNN training with Jax and Equinox.