Welcome! I'm a fourth-year Economics PhD student at Northwestern University. You can find me in the Kellogg Global Hub, room 3368. My email is mcamara@u.northwestern.edu. I'm also on Twitter.

My current work is in mechanism design, learning in games, and decision theory. I split my time between our economic theory and theoretical computer science communities, including Jason Hartline's Online Markets Lab.

Dissertation Research

Economists predict behavior using a simple identifying assumption: that economic agents act in their own best interest. We tend to define "best interest" narrowly enough to derive useful conclusions from a given model, but broadly enough to include certain behaviors that we see empirically. Along the way, we acknowledge that many of our models assume a level of competence that we ourselves would find difficult to satisfy. But only rarely do we take advantage of the fact that statisticians and computer scientists have spent decades formalizing the ways in which decision problems are difficult, and characterizing the difficulty that state-of-the-art methods can handle.

So: can we leverage theories of complexity to weaken or strengthen our behavioral assumptions in a way that (a) makes our theory more credible, where needed, (b) justifies existing theory, where possible, and (c) modifies our predictions appropriately as a problem becomes more complicated? Can these revised models explain real-world phenomena that are difficult or impossible to express via existing models? My dissertation research answers these questions affirmatively.

In particular, my work uses a three-step methodology to revise classic models in mechanism design and decision theory. It is easy to describe in the abstract (although the execution can be more intricate). Given an existing model:

  1. Figure out what we're asking agents to do and how real-world experts would accomplish that task.
  2. Identify the performance benchmarks that real-world experts can guarantee (resp. that are impossible to achieve).
  3. Assume that agents weakly overperform (resp. strictly underperform) those benchmarks.


High-Dimensional Decision Theory

Life is a high-dimensional decision problem, but dimensionality has received little attention in our theory of choice. I introduce a model of high-dimensional choice under uncertainty where an agent may be asked to make a large number of decisions simultaneously. Here, optimization is challenging. What are the implications for choice behavior?

I prove two representation theorems, assuming well-known conjectures in computer science. If a choice correspondence $\phi$ is rational, monotone, and symmetric, then $\phi\in$ P iff the (revealed) utility function $u$ is additively separable. This can be understood as an axiomatic foundation for a heuristic known as narrow choice bracketing. If $\phi$ is rational and monotone, I give a necessary and sufficient condition for $\phi\in$ P/poly, pertaining to the pattern of pairwise violations of additive separability. These results allow me to evaluate rationality through an algorithmic lens. Say an agent's true objective $\bar{u}$ is intractable: should she nonetheless insist on acting rationally? If she cares about approximate optimality, the answer is no. For a natural class of objectives $\bar{u}$, I show that efficient irrational algorithms can substantially outperform efficient rational ones.
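To give a sense of what additive separability buys (notation illustrative, not taken from the paper): when the revealed utility decomposes coordinate by coordinate, the high-dimensional problem splits into independent one-dimensional problems, which is exactly what licenses narrow bracketing.

```latex
% Illustrative notation: a decision vector x = (x_1, \dots, x_n).
u(x_1, \dots, x_n) = \sum_{i=1}^{n} u_i(x_i)
\quad\Longrightarrow\quad
\max_{x}\, u(x) = \sum_{i=1}^{n} \max_{x_i} u_i(x_i)
```

Each summand can then be optimized in isolation, so tractability of the whole reduces to tractability of the parts.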

Learning Foundations for Mechanism Design (with Jason Hartline and Aleck Johnsen)

A rich class of mechanism design problems can be understood as incomplete information games between a principal who commits to a policy and an agent who responds. Traditionally, these models require strong and often impractical knowledge assumptions (the common prior). In this paper, we dispense with the common prior. Instead, we consider a repeated interaction where both the principal and the agent are learning over time from data. We reformulate mechanism design as a reinforcement learning problem and develop mechanisms that guarantee sublinear regret without any assumptions on the data-generating process. Our results require novel behavioral assumptions for the agent -- based on counterfactual internal regret -- that capture the spirit of rationality without imposing structure on the data-generating process.
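For readers less familiar with regret guarantees: our paper's mechanisms and behavioral assumptions are more involved, but the flavor of "sublinear regret" can be conveyed by a standard multiplicative-weights learner (all names below are illustrative). Against any payoff sequence, its cumulative payoff trails the best fixed action by only O(sqrt(T log n)), so the per-round gap vanishes as T grows.

```python
import math

def multiplicative_weights(payoff_rounds, eta=0.1):
    """Run multiplicative weights over a sequence of payoff vectors.

    payoff_rounds[t][a] is the payoff in [0, 1] of action a at round t.
    Returns the learner's total expected payoff.
    """
    n = len(payoff_rounds[0])
    weights = [1.0] * n
    total = 0.0
    for payoffs in payoff_rounds:
        z = sum(weights)
        probs = [w / z for w in weights]
        # Expected payoff of the randomized play this round.
        total += sum(p * u for p, u in zip(probs, payoffs))
        # Reweight actions exponentially in proportion to their payoff.
        weights = [w * math.exp(eta * u) for w, u in zip(weights, payoffs)]
    return total

# Toy adversarial sequence: action 0 pays on two of every three rounds.
rounds = [[1.0, 0.0] if t % 3 else [0.0, 1.0] for t in range(300)]
best_fixed = max(sum(r[a] for r in rounds) for a in range(2))
realized = multiplicative_weights(rounds)
regret = best_fixed - realized  # grows like sqrt(T), sublinearly in T
```

The external-regret guarantee here is weaker than the counterfactual internal regret our paper asks of the agent, but the mechanics -- reweighting actions by realized performance, with no model of the data-generating process -- are the same in spirit.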


I'll occasionally write about topics that interest me and opinions I'd like to share. See the links below.

Please feel free to reach out if anything interests you.