"Income and Public Service Demands: Comparative Voter Efficacy in Brazil"

Abstract
Many models of public service distribution in democracies predict that the poor will have high voting leverage over distributive policy, owing to a numerical advantage under universal-suffrage political competition. Empirical studies do not bear this prediction out, especially in highly unequal democracies, where canonical models predict the poor will have the greatest leverage. This paper proposes an argument that explains the apparently low weight politicians place on the public service preferences of the poor in unequal democracies. I show that even when accountability mechanisms function properly in a democracy, the poor may find themselves at an electoral disadvantage. This occurs when the poor's (likely higher) public service demands are spread more evenly across competing services. When the better-off pile the weight of their votes on fewer services, their votes are more responsive to a unit shift in spending, even if their total service demands are lower. As a result, the spending priorities of vote-maximizing, tactically spending politicians more closely reflect the concentrated demands of the better-off than the preferences of those with higher total dependence on state services. I illustrate the argument and its implications with a study of local public health service allocation in Brazil, in the context of a shock, induced by a federal transfer program, to the primary care dependency of a subset of poor voters. I contrast voter demands for services with vote responsiveness to service spending using original survey data I collected in Brazil in the two weeks prior to the 2012 municipal elections.
This research updates our understanding of accountability in unequal democracies. The poor do not necessarily fail to hold democratic politicians accountable, as many theories would suggest; rather, democratic politicians may have an incentive to prioritize the preference ranking of the less state-dependent over that of those more dependent on the public services in question, whenever the less-dependent also have less diffuse service preferences.
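The concentration mechanism can be seen in a toy calculation (my own illustration with hypothetical numbers, not the paper's formal model): if each voter's support rises in proportion to spending on the services they demand, weighted by their demand shares, then the marginal vote yield of a unit of spending on a service depends on how concentrated each group's demands are, not on which group's total demand is larger.

```python
# Toy sketch of the concentration argument. Hypothetical numbers:
# 600 poor voters spread demand evenly over three services, while
# 300 better-off voters concentrate all demand on clinics.
groups = {
    "poor":       {"n": 600, "weights": {"clinics": 1/3, "schools": 1/3, "sanitation": 1/3}},
    "better_off": {"n": 300, "weights": {"clinics": 1.0, "schools": 0.0, "sanitation": 0.0}},
}

# If voter support rises linearly in spending on each demanded service,
# the marginal vote yield of one spending unit on service s is
# sum over groups of n_g * weight_g(s).
services = ["clinics", "schools", "sanitation"]
yield_per_unit = {
    s: sum(g["n"] * g["weights"][s] for g in groups.values())
    for s in services
}

print(yield_per_unit)
# Clinics draw roughly 600/3 + 300 = 500 marginal votes per unit;
# schools and sanitation draw only about 200 each. A vote-maximizing
# politician therefore prioritizes the service the smaller better-off
# group piles onto, even though the poor are twice as numerous.
```

The example is deliberately linear; the qualitative point survives any responsiveness function in which more concentrated demand shares translate into a steeper vote response per unit of spending.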


"The Spillover Effects of Excludable Cash Transfers: Costs of the Miracle Cure for Development Woes"

Abstract
Brazil's conditional cash transfer program is often lauded by the development community for its potential to produce improvements in human capital. However, researchers have found the anticipated health benefits of the program to be somewhat elusive. I use individual-level data on child mortality to investigate these ambiguous findings regarding the program's role in the health outcomes of the poor. I show that while mortality outcomes of children of recipient families improve over time relative to the national average, the children of poor, non-recipient families begin to fall farther behind the national average after the start of the transfer program. My spending analyses suggest that municipalities shift funding away from primary care clinics once many of their residents receive the federal transfers.


"Learning from Social Data When Researchers and Social Actors Share Prior Beliefs"

Abstract
Bayesian analyses are often critiqued on the basis of dubious exchangeability claims regarding the data. Not only must observed data be exchangeable, but prior ``data'' must be as well, and the observed data must also be exchangeable with the prior data---an assumption not typically justified by the practitioner. Yet social scientists often use social data---observed human behaviors that rely on human judgment---to make inferences. When the researcher shares the prior beliefs of the social actors who generate the data, those social priors are therefore non-exchangeable with the social data. One common defensive argument offered by Bayesian practitioners is that as long as the observed data carry some component of new information, repeated observation-updating cycles will still eventually produce a highly informative posterior distribution. Frequentist statistics offers power analyses---a way of estimating how much data we need for our estimator to attain desirable properties. Here I develop a model that parameterizes the degree of non-exchangeability between the observed data and the prior data and offers a standard way to calculate how many observations are needed to achieve a parameterized definition of an ``informative'' Bayes estimate in a single iteration of updating, or the number of updating iterations needed given a fixed observation size $n$ at each iteration. I illustrate the phenomenon with a combination of real and model-synthesized data showing how New York police officers who rely on racial cues to make stops ``learn'' from biased social data---convictions generated by jury trials in the U.S. justice system. The data suggest that police do ``learn'' from highly biased social data, rather than relying exclusively on objective evidence or on their own biased priors.
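The core difficulty can be simulated in a few lines (my own toy illustration, not the paper's formal model, with hypothetical parameter values): when the observed ``data'' are partly generated by actors holding the same prior as the analyst, conjugate updating still concentrates the posterior---it becomes ``informative'' in the precision sense---but around a biased value rather than the truth.

```python
# Toy sketch of non-exchangeability between prior and social data.
# Assumed setup: a true rate theta_true, a shared social prior belief
# prior_bias, and a mixing parameter `mix` standing in for the degree
# of non-exchangeability (how much the data-generating actors lean on
# the shared prior rather than on the underlying truth).
import random

random.seed(0)

theta_true = 0.30   # hypothetical true rate of the behavior
prior_bias = 0.60   # hypothetical shared social prior about the rate
mix = 0.5           # degree of non-exchangeability

# The rate the social data actually reflect is a blend of truth and prior.
theta_social = (1 - mix) * theta_true + mix * prior_bias

# Beta(a, b) prior centered on the shared belief (mean 6/10 = 0.6),
# updated conjugately against n binary observations of the social data.
a, b = 6.0, 4.0
n = 10_000
successes = sum(random.random() < theta_social for _ in range(n))
a_post, b_post = a + successes, b + (n - successes)

posterior_mean = a_post / (a_post + b_post)
print(round(posterior_mean, 3))
# For mix > 0 the posterior is tightly concentrated at large n, yet it
# centers near theta_social (0.45 here), not theta_true (0.30): the
# updating cycle converges, but to the biased social rate.
```

The sketch shows why ``more data'' alone does not rescue the practitioner: the posterior's precision grows with $n$ while its center stays pinned to the biased social rate, which is exactly the gap the parameterized non-exchangeability model is meant to quantify.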