You've found your unicorn! An applied math, statistics, and computer science trifecta. I've spent the last twenty years working on all sorts of data and applied science problems, building frameworks that deliver cogent and actionable insights.
Before we dive in, a quick note on this website. It's designed to deliver an adaptive granularity experience; that is, you select the level of detail.
I reviewed the team's existing code and data pipelines and worked with the principal investigators to identify technical debt and stabilize infrastructure.
My technical work focused on improvements to the Stream ID product (community detection). More generally, I tried to socialize data science best practices and build a more data-driven culture.
I developed novel statistical algorithms for identifying correlated events in log data and for forecasting and alerting on resource-related metrics.
I evangelized in-house A/B testing for partner teams across Microsoft.
I was on a team of about a dozen PhDs, mostly statisticians and computer scientists. We evangelized experimentation, motivating partner teams to adopt it as part of their normal release cycle. This involved collaborating with product managers to assess usage metrics and combine these KPIs into an overall evaluation criterion. We identified product features that might be good candidates for first experiments, and we worked with the feature engineering team to ensure that correct instrumentation was in place, that data was being collected, and that the data was of sufficiently high quality. Then we onboarded them into the Bing Experimentation engine: experimentation as a service.
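One common way to combine KPIs into an overall evaluation criterion is a weighted sum of standardized metrics, compared across treatment and control. A minimal sketch with made-up data and hypothetical weights (the actual metrics and recipe varied by partner team):

```python
import numpy as np
from scipy import stats

# Toy per-user KPIs (e.g. sessions, click-through rate) for control and
# treatment; weights would come from the product team. This illustrates one
# common recipe, not any team's exact one.
rng = np.random.default_rng(0)
kpis_c = rng.normal([1.00, 0.30], [0.5, 0.1], size=(10_000, 2))
kpis_t = rng.normal([1.02, 0.31], [0.5, 0.1], size=(10_000, 2))
weights = np.array([0.6, 0.4])

mu, sd = kpis_c.mean(axis=0), kpis_c.std(axis=0)   # standardize on control
oec_c = ((kpis_c - mu) / sd) @ weights
oec_t = ((kpis_t - mu) / sd) @ weights
print(stats.ttest_ind(oec_t, oec_c, equal_var=False))
```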
I investigated novel statistical and ML models for classifying customer support issues and provided general statistical support to Office 365 business partners.
I joined Microsoft as a Researcher during a major restructuring. They were phasing out Test Developer positions and introducing Data Science roles in their stead. Managers didn't necessarily know how to leverage these new skill sets, and I wound up in a data science / business analyst role.
I provided client-facing statistical support and data science expertise across a variety of problem domains.
My approach to data science is hands-on and rooted in statistical best practices. This includes expertise in building and hardening data pipelines, designing statistical experiments, and delivering appropriate, interpretable data analyses. I have held senior data scientist roles at ServiceNow and Microsoft and, previously, was a senior software developer at Numerix. My knowledge of statistical modeling and machine learning is broad. I have deeper expertise in
I identified valuations of poor quality and applied post hoc corrections. I also worked to identify algorithmic instabilities; built prototypes featuring regularized, interpretable models with spatiotemporal priors; and suggested improvements to existing methodologies.
I built statistical models to improve up- and cross-selling of mobile add-on packages.
I continued to provide solutions for numerical stability issues arising in the multi-factor backward lattice algorithm.
I supported my PhD studies with teaching and research.
I worked on numerical codes for pricing exotic financial derivatives.
I was a graduate teaching assistant for college algebra, calculus, introductory statistics, and numerical linear algebra.
I implemented backscatter models and tracking algorithms for radar applications.
I maintain several Ubuntu systems and needed a simple bash script to back up / mirror these machines. Google pointed me to rsync. This blog post describes what I did with it.
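The heart of any such script is a single rsync invocation: archive mode plus `--delete` so the mirror tracks removals. A minimal sketch, wrapped in Python here for illustration; the paths and excludes are examples, not the post's exact script:

```python
import subprocess

# Example source and destination. -a: archive mode, -H: preserve hard links,
# -x: stay on one filesystem, --delete: remove files that vanished from the
# source so the destination stays a true mirror.
SRC = "/home/"
DST = "backup-host:/mirrors/desktop/home/"

subprocess.run(
    ["rsync", "-aHx", "--delete", "--exclude", ".cache/", SRC, DST],
    check=True,
)
```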
A gist, in python, that uses asyncio with named (Unix domain) sockets to illustrate a fork-and-monitor pattern. It's used here for monitoring heartbeats but could easily be adapted for other process health metrics.
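A minimal sketch of the pattern, with a hypothetical socket path: the parent starts an asyncio Unix-socket server, forks a worker, and flags the worker as unhealthy if a heartbeat is late. The gist itself differs in the details:

```python
import asyncio
import os
import socket
import time

SOCK = "/tmp/heartbeat.sock"  # hypothetical path

def worker(path):
    """Forked child: connect to the parent's socket and send heartbeats."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        for _ in range(5):
            s.sendall(b"beat\n")
            time.sleep(1)

async def monitor(reader, writer):
    """Parent: report the child unhealthy if a heartbeat is late."""
    while True:
        try:
            line = await asyncio.wait_for(reader.readline(), timeout=3)
        except asyncio.TimeoutError:
            print("missed heartbeat")
            break
        if not line:                  # EOF: the child exited
            print("child exited cleanly")
            break
        print("heartbeat:", line.strip().decode())
    writer.close()

async def main():
    if os.path.exists(SOCK):
        os.unlink(SOCK)
    server = await asyncio.start_unix_server(monitor, path=SOCK)
    if os.fork() == 0:                # child never re-enters the event loop
        worker(SOCK)
        os._exit(0)
    async with server:
        await asyncio.sleep(8)        # long enough to observe the beats

asyncio.run(main())
```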
This post follows Golub and Van Loan, introducing Householder reflections and Givens rotations, then using these tools to sketch out implementations of QR, Hessenberg, and Schur decompositions.
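For a flavor of the construction, here is an unblocked Householder QR in the style of Golub and Van Loan; a sketch, not the post's full implementation:

```python
import numpy as np

def householder_qr(A):
    """QR factorization via Householder reflections (unblocked, illustrative)."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.eye(m)
    for k in range(min(m, n)):
        x = A[k:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])  # reflect x onto e_1
        v /= np.linalg.norm(v)
        # Apply H = I - 2 v v^T to the trailing submatrix; accumulate Q.
        A[k:, k:] -= 2.0 * np.outer(v, v @ A[k:, k:])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, A  # A has been reduced to R

A = np.random.default_rng(2).normal(size=(5, 3))
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))
```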
This post describes a homogeneous Poisson process with a conjugate Gamma prior on its intensity, which can be used to estimate a pooled, per-subject intensity from a collection of realizations.
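Conjugacy makes the update one line: with counts n_k observed over windows of length T_k and a Gamma(α, β) prior (rate parametrization) on the intensity λ, the posterior is Gamma(α + Σ n_k, β + Σ T_k). A sketch with made-up data:

```python
import numpy as np
from scipy import stats

# Hypothetical data: event counts per subject and observation-window lengths.
counts = np.array([3, 5, 2, 4])
windows = np.array([2.0, 3.0, 1.5, 2.5])

alpha0, beta0 = 1.0, 1.0             # Gamma(shape, rate) prior on lambda
alpha_n = alpha0 + counts.sum()      # posterior shape
beta_n = beta0 + windows.sum()       # posterior rate

posterior = stats.gamma(a=alpha_n, scale=1.0 / beta_n)
print(posterior.mean(), posterior.interval(0.95))
```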
A derivation of the density functions and likelihood expression associated with doubly and randomly censored data.
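As an illustration of the likelihood's structure: exact observations contribute the density f, right-censored ones the survival function S, and left-censored ones the CDF F. A sketch of the right-censored exponential case with toy data, not the post's derivation:

```python
import numpy as np
from scipy import stats

# log L = sum_i [ d_i log f(t_i) + (1 - d_i) log S(t_i) ]  (right censoring);
# a left-censored observation would contribute log F(t_i) instead.
t = np.array([1.2, 0.7, 3.1, 2.0])   # toy observation times
d = np.array([1, 1, 0, 1])           # 1 = exact event, 0 = right-censored
lam = 0.8                            # exponential rate

loglik = np.sum(d * stats.expon.logpdf(t, scale=1 / lam)
                + (1 - d) * stats.expon.logsf(t, scale=1 / lam))
print(loglik)
```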
I needed to merge the glyphs in two TrueType font files. FontForge, in particular its python extension, was the tool for the job.
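The core of the recipe is short. A sketch assuming hypothetical file names and FontForge's mergeFonts method; it must run under a Python that can import the fontforge extension:

```python
# Requires FontForge's Python module (e.g. FontForge's bundled Python or a
# python-fontforge package). File names are placeholders.
import fontforge

base = fontforge.open("base.ttf")
base.mergeFonts("extra-glyphs.ttf")   # pull in the second font's glyphs
base.generate("merged.ttf")           # write the combined TrueType file
```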
This post elucidates the connection between the generalized inverse, the cdf, the quantile function, and the uniform distribution.
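The practical payoff of that connection is inverse transform sampling: if U is uniform on (0, 1), then F⁻¹(U) has distribution F. A sketch for the exponential case:

```python
import numpy as np

# Exponential(lam) via the inverse CDF: F^{-1}(u) = -log(1 - u) / lam.
rng = np.random.default_rng(1)
lam = 2.0
u = rng.uniform(size=100_000)
x = -np.log1p(-u) / lam
print(x.mean())   # ~ 1 / lam = 0.5
```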
This post describes and implements an adaptive rejection sampler for log-concave densities.
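For flavor, here is a compact version of the Gilks-and-Wild tangent construction for an unnormalized standard normal. It rebuilds the hull each iteration and omits the squeeze test, so treat it as a sketch rather than the post's implementation:

```python
import numpy as np

# Adaptive rejection sampling for a log-concave target. The initial
# abscissae must straddle the mode so both hull tails integrate.
logf  = lambda x: -0.5 * x * x        # log target, up to a constant
dlogf = lambda x: -x

def tangent_hull(xs):
    """Piecewise-linear upper hull: tangents to logf at sorted xs."""
    xs = np.sort(xs)
    b = dlogf(xs)                                  # slopes
    a = logf(xs) - b * xs                          # intercepts
    z = (a[1:] - a[:-1]) / (b[:-1] - b[1:])        # tangent intersections
    return a, b, np.concatenate(([-np.inf], z)), np.concatenate((z, [np.inf]))

def draw_from_hull(a, b, lo, hi, rng):
    """One draw from the piecewise-exponential density exp(a_j + b_j x)."""
    mass = (np.exp(a + b * hi) - np.exp(a + b * lo)) / b
    j = rng.choice(len(a), p=mass / mass.sum())
    e_lo, e_hi = np.exp(b[j] * lo[j]), np.exp(b[j] * hi[j])  # 0 on infinite tails
    return np.log(e_lo + rng.uniform() * (e_hi - e_lo)) / b[j]

def ars(n, init=(-1.5, 1.0), seed=0):
    rng, xs, out = np.random.default_rng(seed), list(init), []
    while len(out) < n:
        a, b, lo, hi = tangent_hull(np.array(xs))
        x = draw_from_hull(a, b, lo, hi, rng)
        j = np.searchsorted(hi, x)                 # piece containing x
        if np.log(rng.uniform()) <= logf(x) - (a[j] + b[j] * x):
            out.append(x)                          # accept
        else:
            xs.append(x)                           # reject: tighten hull at x
    return np.array(out)

s = ars(20_000)
print(s.mean(), s.std())   # ~ 0, ~ 1
```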
This post shows how to augment the Namecheap ddclient script to support multiple hosts on a dynamic IP.
This paper constructs a model for shared resource utilization, determines stochastic bounds for resource exhaustion, and simulates results.
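To give a sense of the bound-versus-simulation comparison, here is a toy stand-in, not the paper's model: n independent consumers each hold a unit of a shared resource with probability p, and a Chernoff bound on the exhaustion probability is checked against Monte Carlo:

```python
import numpy as np

# Toy model: total usage ~ Binomial(n, p), capacity c. Chernoff gives
# P(usage >= c) <= min_{t>0} exp(-c t) (1 - p + p e^t)^n, taken in log space.
rng = np.random.default_rng(3)
n, p, c = 500, 0.1, 70

mc = (rng.binomial(n, p, size=100_000) >= c).mean()

t = np.linspace(1e-3, 3.0, 1_000)
log_bound = -c * t + n * np.log1p(p * np.expm1(t))
print(f"simulated: {mc:.2e}, Chernoff bound: {np.exp(log_bound.min()):.2e}")
```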