Deck - Elasticity model documentation
Deck's Partisan Elasticity model, October 2020
Written by CG Kelly

At Deck, we build predictions using an approach we call “contextual inference.” Most predictive models in politics are trained on survey responses. Responses are then generalized to a broader audience based on the demographic and socioeconomic traits of the respondents.

Our approach instead captures real data on individuals’ past behaviors and the context around those decisions to anticipate what people in new contexts might do in the future. Rather than primarily relying on survey data, we anchor our models on evidence of past behaviors and robust data on the context around those behaviors.

In this paper, we’ll discuss how we use that approach to determine which people are most likely to shift away from the political party they previously supported for a specific office. These scores are then used in the Deck web app to help users identify persuasion audiences.

HOW WE DEFINE “PARTISAN ELASTICITY”

In our model, partisan elasticity is the degree to which a voter is willing to shift away from a political party she previously voted for in favor of another. A voter who cast a ballot for Mitt Romney for President in 2012 but voted for Hillary Clinton in 2016 would have demonstrated partisan elasticity. A voter who supported the Democrat both times would not have.

The data available for measuring this behavior exists at the precinct level. If 36% of voters in Precinct A voted for a Democratic state senate candidate in 2014, and 47% of the voters in that precinct supported a Democrat for state senate the next time the office was up in 2018, that precinct would have a partisan elasticity score of 0.11 (0.47 - 0.36) for 2018.

Note: the voting populations of individual precincts are not stable across time. To address this, we weight our training data by the degree to which pairs of precincts across time are similar.
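As a rough illustration, a precinct-level elasticity target and a cross-cycle similarity weight might be computed along the lines of the sketch below. The function names and the use of cosine similarity for the weight are illustrative assumptions, not a description of our production pipeline.

```python
# Illustrative sketch only: the similarity weighting below (cosine similarity
# of precinct trait vectors) is an assumption, not the exact production method.
import numpy as np

def elasticity(dem_share_prior: float, dem_share_current: float) -> float:
    """Change in Democratic vote share between two contests for the same office."""
    return dem_share_current - dem_share_prior

def similarity_weight(traits_prior: np.ndarray, traits_current: np.ndarray) -> float:
    """Hypothetical training weight: how similar a precinct's population looks
    across the two cycles (1.0 = identical trait vectors)."""
    denom = np.linalg.norm(traits_prior) * np.linalg.norm(traits_current)
    return float(np.dot(traits_prior, traits_current) / denom) if denom else 0.0

# Precinct A from the example above: 36% Democratic in 2014, 47% in 2018.
print(round(elasticity(0.36, 0.47), 2))  # 0.11
```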

THE DATA WE USE

This model uses traits of voters, traits of candidates, election-related media coverage, campaign finance reports, and election results.

  • Voter traits -- We use data from TargetSmart (for soft side customers) and the DNC (for hard side customers) to determine a set of demographic and socioeconomic traits for almost all American adults. We then use historic snapshots of the voterfile and the US Census Bureau’s American Community Survey to project the traits of historic populations of voters.

  • Candidate traits -- We rely on VoteSmart, Open States, Reflective Democracy, and various state election agencies to collect information on candidates for office -- including their incumbency status, endorsements from issue advocacy organizations, demographics, history in office, and more.

  • Media -- We license historic and current online, print, and TV news content from Critical Mention and Aylien. We then identify articles and clips related to elections, and match them to the appropriate campaigns. We also use natural language processing tools (both licensed and built by our team) to parse sentiment, topics, and more.

  • Finance -- We scrape itemized and summary campaign finance data from state campaign finance portals, the National Institute on Money in Politics, and Illumis to make sure our data is as comprehensive as possible. We then match contribution records to individual people, allowing us to understand how the traits of a campaign’s contributors are changing over time.

  • Results -- Finally, we gather election results at the district, precinct, and Census block level from Open Elections, TargetSmart, Statewide Database, and a number of state and county public election result portals. These results are used to calculate elasticity.

HOW THE MODELS ARE BUILT

The first step in preparing this model is to assemble its training data. Since elasticity describes a change that has taken place across elections, we represent information from both the preceding contest for a given office and the current election in our training samples.

We further link the traits of the campaigns within a given contest so that our models can understand the differences between the choices previously and currently in front of voters.
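Conceptually, a single training sample might look something like the dictionary below. The specific field names are invented for illustration; the point is simply that prior-contest and current-contest information sit side by side, along with the precinct traits, target, and weight.

```python
# Hypothetical shape of a single training sample (field names are invented
# for illustration): prior and current contest features appear side by side,
# along with precinct traits, the elasticity target, and the sample weight.
sample = {
    # Prior contest for this office (e.g., the 2014 state senate race)
    "prior_dem_incumbent": 1,
    "prior_dem_unique_donors": 412,
    "prior_rep_tv_coverage_minutes": 18.0,
    # Current contest for the same office (e.g., the 2018 race)
    "current_dem_incumbent": 0,
    "current_dem_unique_donors": 955,
    "current_rep_tv_coverage_minutes": 42.0,
    # Precinct traits projected to each cycle
    "precinct_population_density": 1240.0,
    "precinct_pct_college_educated": 0.31,
    # Target and weight
    "target_elasticity": 0.11,
    "sample_weight": 0.93,  # cross-cycle precinct similarity
}
```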

Next, we use our training data to identify the features most likely to have high predictive power -- either alone or in combination with others -- and those most likely to confuse a model into overfitting or diminish the impact of other features. At this stage, we prune highly correlated features and features without meaningful variation, use a technique called VSURF (Variable Selection Using Random Forests) to better understand how features will interact with each other, and impute missing data.
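The screening step could look roughly like the sketch below. It uses pandas and scikit-learn as stand-ins (VSURF itself is an R package, so the random-forest-based selection is not shown), and the thresholds are arbitrary examples rather than our production settings.

```python
# Illustrative feature screening (thresholds are arbitrary examples): drop
# near-constant and highly correlated columns, then impute missing values.
# Assumes `df` contains only numeric feature columns.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

def screen_features(df: pd.DataFrame, var_tol: float = 1e-4, corr_tol: float = 0.95) -> pd.DataFrame:
    # Drop features without meaningful variation.
    df = df.loc[:, df.var() > var_tol]
    # Drop one feature from each highly correlated pair.
    corr = df.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > corr_tol).any()]
    df = df.drop(columns=to_drop)
    # Impute remaining missing values (median as a simple placeholder strategy).
    filled = SimpleImputer(strategy="median").fit_transform(df)
    return pd.DataFrame(filled, columns=df.columns, index=df.index)
```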

Finally, we iteratively design a deep learning architecture to predict our outcome. In this case, we’ve built a six-layer neural network. The model is trained with the Adam optimization algorithm to minimize mean squared error.
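As a sketch of what that could look like, the Keras model below stacks six dense layers and is compiled with Adam and a mean squared error loss. The layer widths and activations are our own illustrative choices, not the production architecture.

```python
# Illustrative six-layer network (layer sizes and activations are assumptions).
import tensorflow as tf

def build_model(n_features: int) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),  # predicted precinct-level elasticity
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")
    return model
```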

EVALUATING ACCURACY

To validate this model, we trained a version of it with no knowledge of data from 2018, using only data from 2010 through 2016 -- thousands of unique contest pairs and millions of precinct-level result pairs.

The result was a model with significant predictive power. When validated on 10,000 testing samples from 2018, we found that the model’s area under the ROC curve was 0.75, meaning that, for a randomly chosen pair of testing samples, the model ranked the more elastic one higher 75% of the time. The mean absolute error of our testing predictions was 1.5%. And in a lift chart organized by decile (see the table below), the top decile had a lift of 335 while the bottom had a lift of 31. This means people with scores in the top decile were about 3.4 times as likely as a random person to have elastic partisanship, while those in the bottom decile were less than a third as likely.

Lift chart by predicted-score decile (top decile first):

Observations | Mean elasticity (actual) | Mean elasticity (predicted) | Lift
1,000        | 0.34                     | 0.37                        | 335
1,000        | 0.17                     | 0.15                        | 166
1,000        | 0.12                     | 0.11                        | 121
1,000        | 0.11                     | 0.08                        | 103
1,000        | 0.07                     | 0.06                        | 72
1,000        | 0.06                     | 0.04                        | 54
1,000        | 0.04                     | 0.04                        | 44
1,000        | 0.04                     | 0.03                        | 42
1,000        | 0.03                     | 0.03                        | 32
1,000        | 0.03                     | 0.01                        | 31
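For readers who want to reproduce a lift chart like the one above on their own predictions, a minimal sketch follows. This is not Deck's evaluation code; it simply ranks samples by predicted score, splits them into ten equal buckets, and compares each bucket's mean actual elasticity to the overall mean (a lift of 100 means the decile looks no different from a random sample).

```python
# Illustrative decile lift calculation (not Deck's evaluation code).
import numpy as np
import pandas as pd

def decile_lift(actual: np.ndarray, predicted: np.ndarray) -> pd.DataFrame:
    df = pd.DataFrame({"actual": actual, "predicted": predicted})
    # Rank-based qcut so ties don't break the ten equal-sized buckets.
    df["decile"] = pd.qcut(df["predicted"].rank(method="first"), 10, labels=False)
    overall_mean = df["actual"].mean()
    summary = (
        df.groupby("decile")["actual"]
        .agg(observations="size", mean_actual="mean")
        .sort_index(ascending=False)  # top-scoring decile first
    )
    summary["lift"] = (summary["mean_actual"] / overall_mean * 100).round()
    return summary.reset_index(drop=True)
```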

Digging into specific contests, we found interesting examples of both good and bad predictions. On the good side (which, consistent with our evaluation metrics, is the large majority of cases), the swingiest district in our testing data was Kentucky’s HD 87: the average predicted elasticity in this district was 23%, and the actual swing was 26%. The least swingy district in our testing data was California’s 15th Congressional District, where our average predicted elasticity was 2% and the results actually shifted by 1%.

On the flip side, our biggest errors mostly occurred in districts that went one or more cycles without a contested election (so the baseline for measuring elasticity was further back in time). For example, in New Mexico’s HD 68, the average elasticity score was 6%, while the seat actually saw a 13% swing from 2014 to 2018. This is an issue we’ll work to address in a future version of the model.

At the office level, we found that the highest elasticity scores were associated with state legislative races (mean: 8.4%) and state executive races (mean: 8.6%). The lowest elasticity scores were associated with congressional races (mean: 7.1%).

MOST VALUABLE PREDICTORS

While it’s difficult to measure variable importance in deep learning models, we can use the output of our VSURF runs to estimate which variables have the most predictive power. The most significant variables are described below, grouped by category.

Voter traits

  • Likelihood of consuming print media

  • Likelihood of consuming TV news

  • Gender

  • Nearby population density

  • Race

  • Marital status

  • Education status

  • Religious status

  • Party affiliation

Candidate traits

  • Age distribution of contributors

  • Incumbency status

  • Office type

  • Total amount of online news coverage

  • Total amount of TV news coverage

  • Count of unique donors

  • Race

  • Gender

  • Sentiment distribution of news coverage

ALTERNATE APPROACHES

The goal of these predictions is to improve persuasion targeting. We believe that our context-driven approach allows us to give campaigns the most dynamic look available at which voters may be open to a persuasion message or at risk of switching from supporting a Democrat to supporting a Republican in a given contest. Our reliance on historic data rather than survey data also means that these predictions can be generated, and updated, at a relatively low price point.

However, for well-resourced campaigns, we believe an experiment-informed program would offer a better approach -- running surveys alongside advertisements and voter contact efforts to understand in real time how your specific messages, in your specific context, are landing with specific audiences. This approach is not for most campaigns, since it is expensive and requires significant time and expertise.
