You've found your unicorn! An applied math, statistics, and computer science trifecta. I've spent the last twenty years working on all sorts of data and applied science problems, building frameworks that deliver cogent and actionable insights.
Before we dive in, a quick note on this website. It's designed to deliver an adaptive granularity experience; that is, you select the level of detail.
I reviewed the team's existing code and data pipelines and worked with the principal investigators to identify technical debt and stabilize infrastructure.
My technical work focused on improvements to the Stream ID product (community detection). More generally, I tried to socialize data science best practices and build a more data-driven culture.
I provided data science support to the Xbox Cloud Gaming team.
I developed novel statistical algorithms for identifying correlated events in log data and for forecasting and alerting on resource-related metrics.
I evangelized in-house A/B testing for partner teams across Microsoft.
I investigated novel statistical and ML models for classifying customer support issues and provided general statistical support to Office 365 business partners.
I provided client-facing statistical support and data science expertise across a variety of problem domains.
I identified valuations of poor quality and applied post hoc corrections. I also worked to identify algorithmic instabilities; built prototypes featuring regularized, interpretable models with spatiotemporal priors; and suggested improvements to existing methodologies.
I built statistical models to improve up- and cross-selling of mobile add-on packages.
I continued to resolve numerical stability issues arising in the multi-factor, backward-lattice pricing algorithm.
I supported my PhD studies with teaching and research.
I worked on numerical codes for pricing exotic financial derivatives.
I was a graduate teaching assistant for college algebra, calculus, introductory statistics, and numerical linear algebra courses.
I implemented backscatter models and tracking algorithms for RADAR applications.
I set up a Ghost blog / portfolio for featured articles that I've written. It runs as a systemd service in a Docker container, and there's "one-click" tooling to publish from both Jupyter and Markdown formats.
Metayer is an R package that addresses a few of the common pain points associated with evolving an R script / one-off analysis into a proper, productionized, well-documented data science deliverable.
I built an alternative to DVC that leveraged upstream configurations and abstracted access to incremental, upstream results. This provided the machinery to write cleaner, stage-focused client code.
I built an R package server (an artifact repository) so we could pin development environments and replicate them across the team. It served precompiled binaries, which was more efficient than regularly recompiling source libraries in CI/CD.
I built a framework for encapsulating directory-organized code and used chained R environments to provide module-level polymorphism.
I built a robust data ingestion tool for tables available through the POLIS API.
I built a web visualization tool, a circular Sankey diagram, to drive a discussion with Product about the benefits of leveraging customer domain knowledge.
I built a reproducible research framework that cached incremental results in an effort to improve data science collaboration and reduce compute costs.
I spearheaded the technical work that extended the Stream ID product. The extension was showcased in an RFP response that would otherwise have been outside the scope of our existing product.
I improved a family of existing metrics, and introduced new ones, for comparing Stream ID households against a third-party dataset.
I built an event-based model to simulate ground truth for the household identification problem.
I reviewed the existing estimates for xCloud GA resource requirements, built a model that suggested they were too high, and made recommendations for significant reductions.
I did a retrospective analysis to determine whether introducing the xCloud platform into a user's choice set changed existing behavior with respect to the Xbox console.
A collection of posts that fall under the umbrella of textbook annotations.
I have intermittently worked on some small-scale Python utility projects.
I built a publishing pipeline / platform to host my CV and portfolio.
I imported non-profit IRS tax returns into Elasticsearch and built a website to search for local charities.
I developed a Python package that implemented a multivariate Kalman filter.
The backend monitors the Twitter stream and maintains a dynamic list of trending hashtags and, for each hashtag, a random sample of relevant tweets. The front end shows the world what's currently popular on Twitter.
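The production stack isn't reproduced here, but as a hedged illustration of one piece of that design, a reservoir sampler is a standard way to keep a bounded, uniform random sample per hashtag over a stream (the `HashtagSampler` class and its parameters below are hypothetical):

```python
import random
from collections import defaultdict

class HashtagSampler:
    """Maintain a bounded, uniform random sample of tweets per hashtag
    using reservoir sampling (Algorithm R)."""

    def __init__(self, k=50):
        self.k = k                          # reservoir size per hashtag
        self.seen = defaultdict(int)        # tweets observed per hashtag
        self.samples = defaultdict(list)    # current reservoir per hashtag

    def add(self, hashtag, tweet):
        self.seen[hashtag] += 1
        n = self.seen[hashtag]
        reservoir = self.samples[hashtag]
        if len(reservoir) < self.k:
            reservoir.append(tweet)         # fill the reservoir first
        else:
            j = random.randrange(n)         # keep the new tweet with prob. k/n
            if j < self.k:
                reservoir[j] = tweet
```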
I used a missing-data property of Kalman filters to drive a noise-or-not detector that enabled more sensible alerting in erratic, long-tail scenarios.
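The production detector isn't shown here; the sketch below captures the idea under assumed textbook names (state transition F, observation matrix H, noise covariances Q and R): when an observation is missing, apply only the time update so uncertainty grows, and flag observed points whose innovation is large relative to the predictive variance.

```python
import numpy as np

def filter_with_gaps(ys, x, P, F, H, Q, R):
    """Kalman filter over observation vectors that may be missing (NaN).

    Missing observations get only the time update, so the predictive
    covariance grows; observed points are flagged when their innovation
    is large relative to that predictive covariance."""
    flags = []
    for y in ys:
        x, P = F @ x, F @ P @ F.T + Q            # time update (always)
        if np.isnan(y).any():
            flags.append(None)                   # nothing observed to judge
            continue
        S = H @ P @ H.T + R                      # innovation covariance
        resid = y - H @ x
        flags.append(bool(resid @ np.linalg.solve(S, resid) > 9.0))  # ~3 sigma
        K = P @ H.T @ np.linalg.inv(S)           # measurement update
        x = x + K @ resid
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P, flags
```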
I developed a statistical algorithm to surface event pairings that had highly correlated arrival times. I tested the algorithm on simulated data from a generative model based on branching processes.
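The algorithm itself isn't reproduced here; this is a toy version of the kind of branching-style generative model used for testing, with made-up parameters: A-type events arrive as a Poisson process, and each one may trigger a delayed B-type event, so the two streams have strongly correlated arrival patterns.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pair(rate_a=1.0, p_trigger=0.7, mean_delay=0.5, horizon=1000.0):
    """A-type events arrive as a Poisson process; each independently triggers
    a B-type event after an exponential delay with probability p_trigger."""
    n = rng.poisson(rate_a * horizon)
    a_times = np.sort(rng.uniform(0.0, horizon, size=n))
    triggered = rng.random(n) < p_trigger
    b_times = a_times[triggered] + rng.exponential(mean_delay, size=triggered.sum())
    return a_times, np.sort(b_times[b_times < horizon])

def binned_correlation(a_times, b_times, horizon=1000.0, width=1.0):
    """Correlation of per-bin counts, a crude proxy for correlated arrivals."""
    bins = np.arange(0.0, horizon + width, width)
    a_counts, _ = np.histogram(a_times, bins)
    b_counts, _ = np.histogram(b_times, bins)
    return np.corrcoef(a_counts, b_counts)[0, 1]
```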
I developed a generic big-data summary tool for columnar data.
I developed overall evaluation criteria and helped orchestrate first experiments with Bing partner teams.
I built a stopgap webtool to ensure that partner teams would be able to launch their first experiments without delay.
I was asked to build a forecast model with insufficient data and used it as a teachable moment to drive improvements in how the organization managed and communicated changes to its data pipelines.
I introduced basic statistical ideas to business leaders, and this helped reduce managerial randomization across the org.
I proposed a robust analysis plan for characterizing support tickets and subsequently scaled it back to accommodate a changing timeline.
I built an app to track location and collect daily commute data with the intention of helping people find a regular carpool.
I designed and built the Inferentialist website.
Using Lending Club data, I built ML-optimized portfolios and showed improved performance relative to portfolios based on predetermined loan grades.
I developed a cross-validated, coefficient-of-variation metric to assess the risk of temporal instability in a home's Zestimate history. This indicated that Zestimates with non-physical behavior were far more prevalent than previously thought.
The regional and subregional ZHVI performed poorly due to small samples. I developed a performant alternative that estimated regularized discount curves from longitudinal, repeat-sale data.
I was tasked with identifying and adjusting "spiky" Zestimate behavior in a collection of 100 million Zestimate histories. This resulted in post hoc corrections to nearly 4 million time series.
I provided statistical support to the implementation team.
I developed a multithreaded code to propagate probability vectors through a phylogenetic tree. This allowed our research team to make inferences on branch length and, consequently, to develop timelines for genome divergence.
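The original multithreaded implementation isn't shown here; the snippet below is a single-threaded Python sketch of the standard pruning recursion, with a hypothetical dict-based tree structure and a substitution rate matrix Q.

```python
import numpy as np
from scipy.linalg import expm

def conditional_likelihood(node, Q, n_states=4):
    """Post-order propagation of per-site probability vectors through a tree
    (Felsenstein-style pruning). Leaves are dicts carrying a 'likelihood'
    vector over states; internal nodes carry 'children' as a list of
    (child, branch_length) pairs. Q is the substitution rate matrix."""
    if "children" not in node:
        return node["likelihood"]
    partial = np.ones(n_states)
    for child, t in node["children"]:
        P = expm(Q * t)    # transition probabilities over a branch of length t
        partial *= P @ conditional_likelihood(child, Q, n_states)
    return partial
```

At the root, dotting the resulting vector with the stationary state frequencies gives the per-site likelihood that drives branch-length inference.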
I implemented a PDE solver to price Asian and Lookback options with discrete observation dates.
I reverse engineered a multi-factor, backward-lattice pricing algorithm in order to diagnose and fix numerical instabilities.
I developed new non-linear optimization solvers for calibrating BGM Libor interest-rate models to market data.
I maintain several Ubuntu systems and needed a simple bash script to back up / mirror these machines. Google pointed me to rsync. This blog post describes what I did with it.
A gist, in Python, that uses asyncio with named sockets and illustrates a fork-and-monitor pattern. It's used here for monitoring heartbeats but could easily be adapted for other process health metrics.
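The gist isn't reproduced verbatim; the following is a minimal sketch of the same pattern under a hypothetical socket path: the parent serves a Unix-domain ("named") socket with asyncio and complains when heartbeats stall, while a forked child connects and beats once per second.

```python
import asyncio, os, socket, time

SOCKET_PATH = "/tmp/heartbeat.sock"    # hypothetical named (Unix-domain) socket

def child_loop():
    """Forked child: connect to the named socket and emit a heartbeat each second."""
    time.sleep(0.5)                    # give the parent time to bind the socket
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCKET_PATH)
        while True:
            s.sendall(f"{os.getpid()} {time.time()}\n".encode())
            time.sleep(1.0)

async def handle_child(reader, writer):
    """Parent: watch one child's heartbeat stream and complain if it stalls."""
    while True:
        try:
            line = await asyncio.wait_for(reader.readline(), timeout=5.0)
        except asyncio.TimeoutError:
            print("missed heartbeat")
            break
        if not line:                   # EOF: the child went away
            print("child disconnected")
            break
        print("heartbeat:", line.decode().strip())
    writer.close()

async def monitor():
    server = await asyncio.start_unix_server(handle_child, path=SOCKET_PATH)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    if os.fork() == 0:                 # child process: produce heartbeats
        child_loop()
    else:                              # parent process: monitor them
        asyncio.run(monitor())
```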
This post follows Golub and Van Loan, introducing Householder reflections and Givens rotations, then using these tools to sketch out implementations of QR, Hessenberg, and Schur decompositions.
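For a taste of the construction, here is a compact numpy sketch of the Householder QR piece (mine, not the post's own code):

```python
import numpy as np

def householder_qr(A):
    """QR factorization via Householder reflections (Golub & Van Loan style).
    Returns Q (orthogonal) and R (upper triangular) with A = Q @ R."""
    m, n = A.shape
    R = A.astype(float)
    Q = np.eye(m)
    for k in range(min(m - 1, n)):
        x = R[k:, k]
        # Householder vector v; the reflection I - 2 v v^T annihilates x below x[0].
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        if np.allclose(v, 0.0):
            continue
        v /= np.linalg.norm(v)
        # Apply the reflection to the trailing submatrix and accumulate Q.
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, R
```

A quick sanity check: Q @ R should reproduce the input and Q.T @ Q should be the identity, up to rounding.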
The post describes a homogeneous Poisson process with a Gamma conjugate prior, which can be used to estimate a pooled, per-subject intensity given a collection of realizations.
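A minimal sketch of that conjugate update (the function and argument names are mine, not the post's): with a Gamma(alpha0, beta0) prior on the rate and subject i contributing n_i events over exposure t_i, the posterior is Gamma(alpha0 + sum of n_i, beta0 + sum of t_i).

```python
import numpy as np

def pooled_intensity_posterior(event_counts, exposures, alpha0=1.0, beta0=1.0):
    """Gamma-Poisson conjugate update for a pooled, per-subject intensity
    (rate parameterization of the Gamma)."""
    alpha = alpha0 + np.sum(event_counts)   # prior shape + total events
    beta = beta0 + np.sum(exposures)        # prior rate + total exposure
    return alpha, beta                      # posterior mean is alpha / beta
```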
A derivation of the density functions and likelihood expression associated with doubly and randomly censored data.
I needed to merge the glyphs in two TrueType font files. FontForge, in particular its Python extension, was the tool for the job.
This post elucidates the connection between the generalized inverse, the cdf, the quantile function, and the uniform distribution.
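A tiny illustration of the punchline, using the exponential distribution as the example (not code from the post): push uniform draws through the quantile function and you recover samples from the target distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inverse-CDF sampling: if U ~ Uniform(0, 1) and F is a cdf with quantile
# function F^{-1}, then F^{-1}(U) ~ F.
# Example: Exponential(rate) has F(x) = 1 - exp(-rate * x),
# so F^{-1}(u) = -log(1 - u) / rate.
def sample_exponential(rate, size):
    u = rng.uniform(size=size)
    return -np.log1p(-u) / rate

x = sample_exponential(2.0, 100_000)
print(x.mean())   # should be close to 1 / rate = 0.5
```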
This post describes and implements an adaptive rejection sampler for log-concave densities.
This post shows how to augment the Namecheap ddclient script to support multiple hosts on a dynamic IP.
This paper constructs a model for shared resource utilization, determines stochastic bounds for resource exhaustion, and simulates results.