I'm 26, currently living in the Bay Area, and have a background in electrical engineering and software development.
I hold a B.S. in Electrical Engineering from UC San Diego, where I studied semiconductors, hardware development, and condensed matter physics. For the past couple of years, I have worked in the software industry, holding positions such as Data Engineer and Cloud Engineer.
The positions I've held, coupled with my educational background, have helped me develop a variety of technical skills. Whether designing data pipelines or automating the development and deployment of cloud infrastructure, I find great enjoyment in my work.
Outside of work, I take on a host of projects as well! Whether it's to learn something new or to reinforce the skills I've picked up at work, I am always building something.
At Moogsoft, I leverage my background in automation and Linux system administration to deploy software solutions in the cloud. Using a combination of tools such as Terraform and Ansible, I deploy, configure, and manage computing infrastructure to run Moogsoft's core product, AiOps.
In addition to managing the current infrastructure, I work on ways to make the products more cloud-native. From leveraging tools such as Docker and Kubernetes to integrating serverless products on the cloud, I actively look for ways to improve reliability and scalability, reduce costs, and employ security best practices.
Further, as a member of the team providing the last line of support for the product, I employ industry-leading DevOps and SRE practices to ensure high availability and redundancy for end users. Whether responding to a SEV1 outage or working to prevent the next one, I integrate best-in-class monitoring and automation solutions into production environments.
At EpiBiome, the core component of my job was working with research data. Whether building dashboards, developing a data warehouse, or creating tools for data capture, almost every task revolved around data.
One notable accomplishment while at EpiBiome was developing the company's cloud-based data warehouse. I consolidated data from multiple relational and NoSQL databases, websites, and the company's Laboratory Information Management System (LIMS). Once built, the warehouse captured research and process data generated in the lab as well as customer and sales information from the company's customer-facing sequencing service.
In addition to working on the data warehouse, I developed several Python- and JavaScript-based tools for data input and validation, inventory and asset management, and internal dashboards and data visualizations. These tools ranged from simple Python scripts to a full-stack website built on a Python web framework. Ultimately, these tools, coupled with the data warehouse, helped lead to scientific discoveries and served as a way to show current and potential investors the advancements taking place within the company.
During this extended internship, I split my time between R&D and Applications. I worked primarily on advanced measurement techniques for characterizing the magnetic, thermal, and electrical properties of materials. I also designed electrical and mechanical components for use in cryogenic measurement platforms and advanced the automation of the measurement platforms through custom programming and scripting.
During my undergraduate studies, I worked in a physics laboratory conducting research in superconducting electronics, multiferroics, and nanofabrication. I worked on projects in materials characterization, device fabrication, cryogenic filtering, and electrical device measurements. My roles included developing a cryogenic probe for measuring superconducting thin films, assisting in thin film growth, and bringing an epitaxial growth chamber online.
In this role, I assisted in the design of infrastructure for the State Water Project, working primarily on engineering drawings and specifications for high-voltage facilities and communication systems.
For a biotech company dependent on the research data its scientists generate, having a centralized data store is vital. By consolidating relational and NoSQL databases into one location, the scientists could analyze results and perform analytics on the data. Before long, this data warehouse made its way into every major lab process, leading to greater discoveries and, as a result, a higher valuation for the company as a whole.
Despite holding a smaller share of the cloud market, Google's Cloud Platform (GCP) is giving AWS a run for its money. From its integrated machine learning APIs to its managed Kubernetes service, Google's platform excelled in the areas that mattered most to us. At EpiBiome, I was asked to lead the migration from AWS to Google Cloud, and in only three weeks, all essential services had been transitioned over.
This is really multiple projects combined into one, but hey, it's all kinda getting at the same point: I made some websites. Two worth mentioning are:
To interface with our data warehouse, we needed a tool that was quick to build but customizable enough to suit our needs. Google App Maker is just the tool. I built over 10 web apps to view/record/analyze data, interact with our inventory system, receive and log incoming packages, and more.
While at EpiBiome, I helped with the development and deployment of our tools for processing sequencing data. The tools were built for deployment across cloud-hosted compute instances and interfaced with cloud storage buckets. The bioinformatics projects were a lot of fun because we got to develop against larger compute infrastructure and work with more advanced tools such as Docker and Kubernetes.
Who likes to rename hundreds of items manually? I know I definitely do not! Well, if you don't like to either, I've got just the thing. I created a script to rename a whole bunch of files at once. Just make a map of the old and new names, run the script, and voilà, all your files are renamed.
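In case you're curious how that works, here's a minimal sketch of the idea (the actual script may differ; the filename mapping below is just a made-up example):

```python
import os

# Hypothetical mapping of old filenames to new ones. In practice this
# could be loaded from a CSV or JSON file instead of hard-coded.
RENAME_MAP = {
    "IMG_0001.jpg": "hawaii_sunset.jpg",
    "IMG_0002.jpg": "hawaii_beach.jpg",
}


def bulk_rename(directory, rename_map):
    """Rename every file in `directory` that appears in `rename_map`.

    Returns the number of files actually renamed; files missing from
    the directory are silently skipped.
    """
    renamed = 0
    for old_name, new_name in rename_map.items():
        old_path = os.path.join(directory, old_name)
        if os.path.exists(old_path):
            os.rename(old_path, os.path.join(directory, new_name))
            renamed += 1
    return renamed
```

Keeping the mapping separate from the rename loop means you can dry-run it by printing the pairs before committing to the renames.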
Are you tired of validating user input for those stop-gap scripts that accept command-line input? Have you used other libraries for exactly this, only to find they don't work in Jupyter Notebooks? Well, I have just the fix. See that link to the git repo? Okay, great: follow it and you'll find a variety of functions for user input and sanitization.
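The real functions live in the repo, but as a rough sketch of the approach, a validated-input helper like this works in terminals and Jupyter Notebooks alike because it relies only on the built-in `input()` (the `reader` parameter here is my own addition, just to make the function testable):

```python
def prompt_int(prompt, min_value=None, max_value=None, reader=input):
    """Keep asking until the user enters a valid integer in range.

    `reader` defaults to the built-in input(), which behaves the same
    in a plain terminal and in a Jupyter Notebook cell.
    """
    while True:
        raw = reader(prompt).strip()
        try:
            value = int(raw)
        except ValueError:
            print(f"'{raw}' is not an integer, try again.")
            continue
        if min_value is not None and value < min_value:
            print(f"Value must be >= {min_value}.")
            continue
        if max_value is not None and value > max_value:
            print(f"Value must be <= {max_value}.")
            continue
        return value
```

The same loop-until-valid pattern extends naturally to floats, yes/no prompts, and sanitized strings.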
My family asked me for a tool that would send alerts if a specific Nest Camera detected a lack of motion. I know, right? 99% of people want to know when there is motion, not when there isn't. The reason, though, was that my Grandma lives alone, and the family was worried about the possibility of her falling. Thus, if too much time goes by without motion, the script triggers an email alert to a group of family members.
Also, Nest, if y'all are reading this, I'd love it if you released an API SDK.
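For the curious, the watchdog logic boils down to something like this sketch. The Nest side is deliberately left out (hence the SDK wish above); `last_motion_time` stands in for whatever call fetches the camera's most recent motion event, and the SMTP host and addresses are placeholders:

```python
import smtplib
import time
from email.message import EmailMessage

# Assumption: four quiet hours before the family gets an email.
INACTIVITY_LIMIT = 4 * 60 * 60  # seconds
RECIPIENTS = ["family@example.com"]  # placeholder address


def check_for_inactivity(last_motion_time, now=None, limit=INACTIVITY_LIMIT):
    """Return True if too much time has passed since the last motion event."""
    if now is None:
        now = time.time()
    return (now - last_motion_time) > limit


def send_alert(recipients, hours_quiet, smtp_host="smtp.example.com"):
    """Email the family group that no motion has been seen recently."""
    msg = EmailMessage()
    msg["Subject"] = f"No motion detected for {hours_quiet:.1f} hours"
    msg["From"] = "camera-watchdog@example.com"
    msg["To"] = ", ".join(recipients)
    msg.set_content("No motion detected recently. Please check in on Grandma.")
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```

A cron job (or a simple loop with `time.sleep`) polls the camera, runs `check_for_inactivity`, and calls `send_alert` when the threshold is crossed.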
I wanted to make a clear and concise guide to deploying JupyterHub on Google's Cloud Platform (GCP). The guide walks through provisioning a server, installing all dependencies, and acquiring web certificates to encrypt web traffic (HTTPS/SSL). It's pretty cool! If I were you, I'd check it out...
I enjoy building everything from bicycles to computers. I can even take apart a chainsaw or wire the electrical in your house. Over the years, I have picked up an odd assortment of handyman skills such as welding, woodworking, machining, and plumbing. Heck, I've even lived on a farm in Hawaii... twice!
I always have several projects going on outside of work. Luckily for my parents' sanity, I've moved away from go-karts and towards electronics. I have my own 3D printer, love to program Arduinos and Raspberry Pis, and do a bit of web development and Python coding as well.
I'm not much for TV or video games, but I do enjoy the occasional flight simulator program or rocket launch on Kerbal Space Program. Who knew learning orbital mechanics could be so fun!?