I am a classically trained statistician with a background in consulting and computational software development. My value derives from framing business questions in a mathematical context, carrying out appropriate analyses, and delivering interpretable results based on data.
I have worked on projects using R, SQL, C/C++, Bash (awk/grep/sed), Rails, Python, Java, and C#/SCOPE (Microsoft's version of MapReduce).
In December 2016, I joined the Operational Intelligence team, where I focused on developing and testing algorithms for event correlation and anomaly detection in an IT operations management (ITOM) environment.
I left because, despite promises to the contrary, we never had any data.
In May 2012, I started Inferentialist LLC, with the belief that the tech industry was missing opportunities to leverage statistical best practices.
In October 2014, I was transitioned to a data-reporting role when the O365 Customer Intelligence Team was reorganized under new management. By January, the CI team had effectively collapsed, and I found myself on the Bing Analysis and Experimentation Team. My new role was that of an internal consultant, providing analytics support to partner teams across the company that had expressed interest in onboarding to Bing's existing experimentation platform.
In my 18 months at Microsoft, I had seven different managers.
In May 2014, I accepted a research position at Microsoft on the O365 Customer Intelligence Team. Our mandate was to develop machine learning tools that would detect trends in customer service tickets; the goal was to identify common customer complaints and, in an automated fashion, propose relevant solutions.
I was hired to work on, and improve, the Zestimate algorithm. I argued against the existing, off-the-shelf machine-learning approach and in favor of building an interpretable model with spatial and temporal correlation structures.
I joined a team at Globys tasked with improving upsell strategies for mobile add-on packages. The initial goal was to derive predictors of upsell from existing, retrospective data. While I was there, I saw the strategy shift toward controlled experiments, the gold standard for assessing the efficacy of online marketing campaigns.
I continued a summer internship with NumeriX, arranged by my academic advisor while on sabbatical. I focused on two sides of a multi-factor SDE derivatives-pricing model: calibration to market prices and numerical pricing algorithms.
My PhD-level coursework was in statistical theory, optimization, stochastic modeling, and computing.
My advisors were Doug Martin (computational finance), Paul Tseng (semidefinite programming), and Vladimir Minin (statistical genetics).
I received a Master's degree from the department in 2012.
The Computational Finance Certificate is an interdisciplinary program that requires several finance courses as well as a "capstone" project. In my case, I took courses in optimization, econometric theory, stochastic calculus, modern portfolio theory, and financial derivatives.
My PhD-level coursework was in numerical analysis for ordinary and partial differential equations, with applications in computational fluid dynamics.
My advisor was Randall Leveque.
I received a Master's degree from the department in 2005.
My undergraduate coursework focused on theory and algorithms. I also completed the undergraduate certificate program in applied and computational mathematics.
My advisors were Bernard Chazelle and Brian Kernighan.
I received my Bachelor of Science in Engineering from the university in 2001, graduating magna cum laude.