You've found your unicorn! An applied math, statistics, computer science trifecta. I've spent the last twenty years working on all sorts of data and applied science problems, building frameworks that deliver cogent and actionable insights.
Before we dive in, a quick note on this website. It's designed to deliver an adaptive granularity experience; that is, you select the level of detail.
I reviewed the team's existing code and data pipelines and worked with the principal investigators to identify technical debt and stabilize infrastructure.
My technical work focused on improvements to the Stream ID product (community detection). More generally, I tried to socialize data science best practices and build a more data-driven culture.
I provided data science support to the Xbox Cloud Gaming team.
I developed novel statistical algorithms: identifying correlated events in log data; forecasting and alerting for resource-related metrics.
I evangelized in-house A/B testing for partner teams across Microsoft.
I investigated novel statistical and ML models for classifying customer support issues and provided general statistical support to Office 365 business partners.
I provided client-facing statistical support and data science expertise across a variety of problem domains.
I identified valuations of poor quality and applied post hoc corrections. I also worked to identify algorithmic instabilities; built prototypes featuring regularized, interpretable models with spatiotemporal priors; and suggested improvements to existing methodologies.
I built statistical models to improve up- and cross-selling of mobile add-on packages.
I continued to provide solutions for numerical stability issues arising in the multi-factor backward lattice algorithm.
I supported my PhD studies with teaching and research.
I worked on numerical codes for pricing exotic financial derivatives.
I was a graduate teaching assistant for college algebra, calculus, introductory statistics, and numerical linear algebra courses.
I implemented backscatter models and tracking algorithms for RADAR applications.
The team had adopted DVC as a data versioning tool but wasn't able to use it effectively. For example, it wasn't possible to have results simultaneously available for comparison across multiple runtime configurations.
Onboarding -- in particular, setting up a development machine -- had been painful. During the process, it quickly became evident that maintaining a consistently versioned compute environment hadn't been a concern, despite a difficult, deprecation-related refactoring that had taken place prior to my arrival. One place where a versioned environment had the potential to improve performance was the git push and the CI actions that followed: the existing process rebuilt all of the R package dependencies, and it routinely took fifteen minutes to check in code.
The source code had evolved without discipline. It had been almost a year since the last merge to main, and multiple files, each a slight perturbation of replicated or discarded logic, had proliferated in the codebase. The prevailing (anti)pattern was to source one variant or another, often haphazardly, into a cascade of R scripts.
The World Health Organization ("WHO") provides polio data through its POLIS endpoints. The data arrive with a lag, and the historical record is more complete than what would have been available at the time of a forecast. The WHO updates records in place rather than appending new ones; records are therefore not immutable. For backtesting purposes, this means the end user must take responsibility for maintaining an accurate historical record. Another difficulty with the POLIS dataset is throttling: data retrieval requires multiple calls to a finicky endpoint that delivers records at a mere trickle, only 2,000 per call.
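As a rough illustration, a retrieval loop along the following lines handles both quirks: it pages through the throttled endpoint with retries and writes each pull as its own dated, append-only snapshot, so the record as it existed at forecast time can be reconstructed later. The URL, query parameters, and authentication here are placeholders, not the real POLIS API.

import datetime as dt
import json
import pathlib
import time

import requests

# NOTE: endpoint URL, query parameters, and authentication are placeholders;
# the real POLIS API differs in its details.
POLIS_URL = "https://example.org/polis/cases"   # hypothetical
PAGE_SIZE = 2000                                # records returned per call

def pull_snapshot(token: str, out_dir: str = "snapshots") -> pathlib.Path:
    """Page through the throttled endpoint and write a dated, append-only snapshot."""
    records, offset = [], 0
    while True:
        for attempt in range(5):                     # the endpoint is finicky; retry
            resp = requests.get(
                POLIS_URL,
                params={"skip": offset, "limit": PAGE_SIZE, "token": token},
                timeout=60,
            )
            if resp.ok:
                break
            time.sleep(2 ** attempt)                 # simple exponential backoff
        resp.raise_for_status()
        page = resp.json()                           # assume a JSON list of records
        records.extend(page)
        if len(page) < PAGE_SIZE:                    # short page => no more data
            break
        offset += PAGE_SIZE

    # Because records are updated in place, keep every pull as its own snapshot;
    # the state of a record at forecast time can then be reconstructed later.
    out = pathlib.Path(out_dir) / f"polis_{dt.date.today().isoformat()}.json"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(records))
    return out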
In 2022, Conviva made an effort to extend their core business beyond streaming video. The goal was to provide instrumentation as a service: any platform could monitor user state by embedding Conviva's reporting layer in its software stack. I provided data science support for this effort, including developing interactive tools for visualizing state transitions in arbitrary state spaces.
Reproducibility and data versioning became elevated concerns when Data Science was unable to verify the correctness of production metrics. I built a Scala/Databricks library that enabled caching of incremental results. This decomposed the monolithic production pipeline into smaller stages and allowed other data science users to collaborate from a consistent, shared starting point.
Conviva wanted to participate in an RFP, but the rigidity of the existing, monolithic pipeline made it difficult. In particular, the project required performing exploratory data analysis, redesigning and generalizing the ingestion portion of the existing pipeline, and implementing a scalable, map-reduce variant of the Louvain community detection algorithm in Scala.
Historically, Conviva used third-party data to assess the correctness of the household assignments generated by its Stream ID product. Because of missing data, this entailed a difficult matching problem. I reviewed the existing assessment metrics and offered improvements.
Conviva's Stream ID product is tasked with solving a community detection problem. However, the clustering context is non-standard. In particular, the graph used to induce the clustering has two distinct types of nodes: devices and IP addresses. Moreover, the labels associated with the underlying entities of interest are subject to change without notice. There was no ground truth in the production data, so I built a generative model that produced synthetic data.
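For a sense of what such a model can look like, here is a toy sketch (illustrative only, not Conviva's actual generator): households own a handful of devices, devices are observed on the household's IP addresses, and a fraction of IP labels churn, which is what makes recovering the true households non-trivial.

import random
from collections import namedtuple

Edge = namedtuple("Edge", ["device_id", "ip", "household"])  # household = ground truth

def synthesize(n_households: int = 1000, churn: float = 0.1, seed: int = 0) -> list[Edge]:
    """Toy generative model: households own devices, devices appear on household IPs."""
    rng = random.Random(seed)
    edges, next_ip = [], 0
    for h in range(n_households):
        n_devices = rng.randint(1, 5)
        n_ips = rng.randint(1, 3)
        ips = [f"ip-{next_ip + i}" for i in range(n_ips)]
        next_ip += n_ips
        for d in range(n_devices):
            device = f"dev-{h}-{d}"
            for ip in rng.sample(ips, rng.randint(1, n_ips)):
                edges.append(Edge(device, ip, h))
    # IP churn: with some probability an IP label is reassigned (e.g., a DHCP lease
    # change), so the observed labels drift away from the underlying households.
    all_ips = {e.ip for e in edges}
    remap = {ip: f"ip-{next_ip + k}" for k, ip in enumerate(all_ips) if rng.random() < churn}
    return [e._replace(ip=remap.get(e.ip, e.ip)) for e in edges]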
In early 2020, xCloud was preparing for a GA launch. There was an interest in understanding how beta testers and early adopters were using the system.
Before the GA release, access to the xCloud platform was by invitation only. In all cases, participants were existing Xbox console users. For this cohort of active gamers, one question was whether access to xCloud affected their usage of other Xbox platforms. If so, an estimate of the effect size was also of interest.
ServiceNow wanted to monitor noisy network resource metrics and to do so without generating spurious alerts.
One of ServiceNow's larger customers wanted to know if we could analyze correlated event data and supplied us with a test dataset.
In my interactions with partner teams, computing simple summary metrics was routine. However, computing any statistics more complicated than means and variances was rarely attempted.
At the beginning of 2015, the data scientists on the Bing Experimentation team were loaned out to partner teams to help them prepare their product workflows for experimentation.
The vision was that Bing could help Microsoft product teams adopt a culture of controlled experimentation; that the process need not be reinvented but could be outsourced to an existing experimentation platform. We approached a handful of partner teams, offering our collective support and expertise. We asked only that they commit to running at least one experiment. Of course, first experiments are a lot of work, and it took months to modernize existing engineering workflows and cultivate positive momentum with the stakeholders. Unfortunately, on our side, the engineers' delivery timeline slipped. The self-service, programmatic access to the experimentation platform that had been promised wasn't going to be ready for another six months. We wanted to maintain the momentum that we'd developed with our partner teams, so a coworker and I built a bare-bones web service as a stopgap to buy our engineering team more time.
In late 2014, the Office 365 Customer Intelligence Team wanted to understand their growth trajectory but faced issues with low data quality.
In 2014, Office 365 had just launched, and the Office 365 Customer Intelligence Team needed data science support to help answer their business questions. Top priority: costs associated with customer support tickets appeared to be out of control.
In 2014, the focus of the Office 365 Customer Intelligence Team was triaging customer support tickets, specifically runaway costs.
In 2012, Lending Club was a relatively new and fast-growing peer-to-peer lending platform. Using historical data provided by the company, our paper described a method for constructing optimal portfolios of Lending Club loans.
In 2011, Zillow published a proprietary home value index--the ZHVI--then a competitor to the Case-Shiller home price index.
I maintain several Ubuntu systems and needed a simple bash script to back up / mirror these machines. Google pointed me to rsync. This blog post describes what I did with it.
A gist, in Python, that uses asyncio with named sockets and illustrates a fork-and-monitor pattern. It's used here for monitoring heartbeats but could easily be adapted for other process health metrics.
This is a short piece of code that spawns a child process that handles requests from a named socket. This could be useful for, say, monitoring heartbeats or other process health metrics.
To keep the parent process simple, there is no IPC: the filesystem is used for communication. In particular, the parent need only call a send_heartbeat function at its convenience. The monitor is lazy: it only computes the time since the last heartbeat on request. When the parent terminates, the monitor does too.
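A minimal sketch of the idea (simplified; the gist wires this into asyncio and a forked monitor process): the parent touches a file, and the monitor reads the file's modification time only when asked. The path is illustrative.

import time
from pathlib import Path

HEARTBEAT_FILE = Path("/tmp/parent.heartbeat")   # illustrative path

def send_heartbeat(path: Path = HEARTBEAT_FILE) -> None:
    # Called by the parent at its convenience; the file's mtime *is* the heartbeat.
    path.touch()

def seconds_since_heartbeat(path: Path = HEARTBEAT_FILE) -> float:
    # Called by the monitor only on request; nothing runs in the background.
    return time.time() - path.stat().st_mtime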
This code sets up a named socket in the filesystem, enabling control from the shell. This has utility when debugging. For example,
echo "hello" | socat - UNIX-CLIENT:monitor.socket
echo -n "" | socat - UNIX-CLIENT:monitor.socket
socat - UNIX-CLIENT:monitor.socket
asyncio.open_unix_connection can be a bit fussy with being handed a socket. In particular, it expects an already accepted socket, on which it could block indefinitely if, say, the client connects and does nothing. So, we provide a safe_unix_connection async context manager to make sure it doesn't get stuck and that the writer is closed appropriately when a connection is terminated.
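A sketch of what such a context manager might look like (the gist's version may differ in its details): wrap the accepted socket in a reader/writer pair, bound any waiting with asyncio.wait_for, and always close the writer.

import asyncio
import contextlib

@contextlib.asynccontextmanager
async def safe_unix_connection(sock, timeout: float = 5.0):
    """Wrap an accepted socket in (reader, writer) streams and always close the writer."""
    reader, writer = await asyncio.wait_for(
        asyncio.open_unix_connection(sock=sock), timeout
    )
    try:
        yield reader, writer
    finally:
        writer.close()
        with contextlib.suppress(Exception):
            await writer.wait_closed()

async def handle(sock):
    # Usage sketch: read one line from the client, but never wait forever.
    async with safe_unix_connection(sock) as (reader, writer):
        try:
            return await asyncio.wait_for(reader.readline(), timeout=5.0)
        except asyncio.TimeoutError:
            return b""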
Application logic is loosely encapsulated at the end of the script. It should feel similar to the callback function passed to asyncio.start_server.
This post follows Golub and Van Loan, introducing Householder reflections and Givens rotations, then using these tools to sketch out implementations of QR, Hessenberg, and Schur decompositions.
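As a taste of the approach, here is a compact Householder QR in NumPy (a sketch in the spirit of the post, not its exact code).

import numpy as np

def householder_qr(A: np.ndarray):
    """Reduce A (m x n, m >= n) to upper triangular R with Householder reflections,
    accumulating Q so that A = Q @ R."""
    A = A.astype(float)
    m, n = A.shape
    Q = np.eye(m)
    for k in range(n):
        x = A[k:, k]
        # Householder vector: reflect x onto +/- ||x|| e_1 (sign chosen for stability).
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        norm_v = np.linalg.norm(v)
        if norm_v == 0:
            continue
        v /= norm_v
        # Apply the reflection H = I - 2 v v^T to the trailing block and accumulate Q.
        A[k:, k:] -= 2.0 * np.outer(v, v @ A[k:, k:])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, np.triu(A)

# Quick check against the original matrix:
# A = np.random.default_rng(0).standard_normal((5, 3))
# Q, R = householder_qr(A)
# assert np.allclose(Q @ R, A)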
The post describes a homogeneous Poisson process with a conjugate Gamma prior, which can be used to estimate pooled, per-subject intensities given a collection of realizations.
A homogeneous Poisson process is the simplest way to describe events that arrive in time. Here, we are interested in a collection of realizations. An example is user transactions in a system. Over time, we expect each user to produce a sequence of transaction events, and we would like to characterize the rate of these events on a per-user basis. In particular, users with more data should expect a more personalized characterization. Statistically, this can be accomplished using a Bayesian framework.
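Concretely, with a Gamma(\(\alpha, \beta\)) prior on each user's rate \(\lambda_i\) and \(n_i\) events observed over an exposure window of length \(T_i\), the standard conjugate update is

\[
n_i \mid \lambda_i \sim \mathrm{Poisson}(\lambda_i T_i), \qquad
\lambda_i \sim \mathrm{Gamma}(\alpha, \beta)
\quad\Longrightarrow\quad
\lambda_i \mid n_i, T_i \sim \mathrm{Gamma}(\alpha + n_i,\ \beta + T_i),
\]

so the posterior mean \((\alpha + n_i)/(\beta + T_i)\) shrinks light users toward the pooled rate \(\alpha/\beta\), while heavy users are characterized mostly by their own data. (The notation here is mine; the post's may differ.)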
A derivation of the density functions and likelihood expression associated with doubly and randomly censored data.
Censored data is an artifact of partial or incomplete measurements.
A typical scenario would be a survival analysis of time to event data. For example, a study may end before a final measurement is available (right censoring). Another situation might occur when batch processing log file data: the reported timestamp might reflect the time of processing and not the true event time (left censoring).
This post derives the density equations for censored data. Given a parameterization θ, this leads naturally to a log likelihood formulation. As the censoring mechanism is, in general, random, we further allow for the possibility that this too depends on θ.
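For reference, when the censoring mechanism does not itself depend on \(\theta\) (the post relaxes this), the likelihood takes the familiar form

\[
L(\theta) \;=\; \prod_{i \,\in\, \text{exact}} f(t_i;\theta)
\prod_{i \,\in\, \text{right-censored}} \bigl(1 - F(c_i;\theta)\bigr)
\prod_{i \,\in\, \text{left-censored}} F(c_i;\theta),
\]

with the log likelihood given by the corresponding sum of logarithms.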
I needed to merge the glyphs in two TrueType font files. FontForge, in particular its python extension, was the tool for the job.
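The core of such a script is only a few lines; the sketch below (file names are placeholders) assumes FontForge's Python module is importable.

import fontforge  # FontForge's Python extension; may require FontForge's bundled Python

# File names are placeholders.
base = fontforge.open("base.ttf")
base.mergeFonts("extra.ttf")     # pull the second font's glyphs into the first
base.generate("merged.ttf")      # write the combined TrueType file
base.close()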
This post elucidates the connection between the generalized inverse, the CDF, the quantile function, and the uniform distribution.
The probability integral transform is a fundamental concept in statistics that connects the cumulative distribution function, the quantile function, and the uniform distribution. We motivate the need for a generalized inverse of the CDF and prove the result in this context.
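For reference, the generalized inverse in question is

\[
F^{-}(u) \;=\; \inf\{\, x \in \mathbb{R} : F(x) \ge u \,\}, \qquad u \in (0,1),
\]

and the result has two directions: if \(U \sim \mathrm{Unif}(0,1)\), then \(F^{-}(U)\) has distribution function \(F\); conversely, if \(F\) is continuous and \(X \sim F\), then \(F(X) \sim \mathrm{Unif}(0,1)\).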
This post describes and implements an adaptive rejection sampler for log-concave densities.
Adaptive rejection sampling is a statistical algorithm for generating samples from a univariate, log-concave density. Because of the adaptive nature of the algorithm, rejection rates are often very low. The exposition of this algorithm follows the example given in Davison’s 2008 text, “Statistical Models.”
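In the common tangent-based construction, with \(h = \log f\) and abscissae \(x_1 < \cdots < x_k\), the envelope is

\[
u_k(x) \;=\; \min_{1 \le j \le k} \bigl\{\, h(x_j) + h'(x_j)\,(x - x_j) \,\bigr\} \;\ge\; h(x),
\]

a candidate \(X\) is drawn from the piecewise exponential density proportional to \(\exp u_k\) and accepted with probability \(\exp\{h(X) - u_k(X)\}\); rejected points are added to the abscissae, so the envelope tightens and the rejection rate falls as sampling proceeds.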
This post shows how to augment the Namecheap ddclient script to support multiple hosts on a dynamic IP.
In 2015, I went looking for a solution to the following problem: I have a single Linux server with a dynamically assigned IP address and I want to host several sites on this server. My registrar is Namecheap.com, and their advice is to use a Linux tool called ddclient.
Unfortunately, the example available from Namecheap doesn't cover multiple hosts. A Google search pointed me to thornelabs.net, where the author describes a patch that can be applied to ddclient. Ddclient is written in Perl, so patching is a possibility, but one that feels a bit unsatisfactory.
This paper constructs a model for shared resource utilization, determines stochastic bounds for resource exhaustion, and simulates results.
A friend at a large, Seattle-area company recently approached me with the following problem. Suppose we wanted to oversubscribe the shared resources that we lease to our customers. We've noticed that loads are often quite low. In fact, loads are so low that there must be a way to allocate at least some of that unused capacity without generating too much risk of resource exhaustion. If we could manage to do this, we could provide service to more people at a lower cost! Sure, they might get dropped service on rare occasions, but anyone who wasn't satisfied with a soft guarantee could still pay a premium and keep the full, dedicated resource slice to which they may have become accustomed. This seemed like a tractable problem.
Here, we propose a mathematical framework for solving a very simple version of the problem described above. It provides intuitive tuning parameters that allow for business level calibration of risks and the corresponding reliability of the accompanying service guarantees.
After developing the mathematical framework, we put it to work in a simulation of customer usage behavior. In this experiment, most customers use only a fraction of the resource purchased, but there is a non-negligible group of "power" users who consume almost all of what they request. The results are rather striking. Compared to the dedicated-slice paradigm, resource utilization in the oversubscribed case increases by a factor of 2.5, and more than twice as many customers can be served by the same, original resource pool.
The methodology is easily extended to the non-IID case by standard modifications to the sampling scheme. Moreover, even better performance is likely if a customer segmentation scheme is incorporated into the underlying stochastic assignment problem.
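For intuition, a small Monte Carlo sketch along these lines is below. The distributions and parameters are illustrative stand-ins, not the paper's calibration: most customers draw low utilization, a minority of power users consume nearly their full slice, and we scan candidate pool sizes to see where the exhaustion probability crosses a tolerance epsilon.

import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters only -- not the paper's calibration.
CAPACITY = 100.0   # total pool; each customer is sold one unit "slice"
P_POWER = 0.20     # fraction of "power" users who consume nearly their whole slice
EPSILON = 0.01     # acceptable probability of resource exhaustion
N_SIMS = 20_000

def overflow_probability(n_customers: int) -> float:
    """Monte Carlo estimate of P(total demand > CAPACITY) with n admitted customers."""
    is_power = rng.random((N_SIMS, n_customers)) < P_POWER
    light = rng.beta(2.0, 8.0, size=(N_SIMS, n_customers))     # most users: low utilization
    heavy = rng.uniform(0.9, 1.0, size=(N_SIMS, n_customers))  # power users: near-full slice
    demand = np.where(is_power, heavy, light).sum(axis=1)
    return float((demand > CAPACITY).mean())

# A dedicated-slice policy serves exactly 100 customers; scanning candidate pool
# sizes shows how far oversubscription can go before the risk bound is violated.
for n in (100, 150, 200, 250, 260, 280):
    p = overflow_probability(n)
    flag = "ok" if p <= EPSILON else "too risky"
    print(f"{n:4d} customers: P(exhaustion) ~ {p:.4f}  ({flag})")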