At Deck, we build predictions using an approach we call “contextual inference.” Most predictive models in politics are trained on survey responses. Responses are then generalized to a broader audience based on the demographic and socioeconomic traits of the respondents.

Our approach instead captures real data on individuals’ past behaviors and the context around those decisions to anticipate what people in new contexts might do in the future.

In this paper, we’ll discuss how we use that approach to determine how likely a given person will be to vote for a specific candidate for office. These scores are then used in the Deck web app to help users identify their likely supporters.

WHY DO DEMOCRATS NEED ANOTHER SUPPORT SCORE?

Right now, most Democratic campaigns identify likely supporters with one of three different approaches:

  1. expensive but high-quality survey-based predictive models, using thousands of individual survey responses to predict the traits of supporters

  2. generic survey-based models that predict who is most likely to identify as a Democrat, or

  3. demographic and socioeconomic filtering based on local knowledge of a district (e.g., focusing on young voters, voters of color, recent registrants, or registered Democrats).

However, these approaches aren’t always a good fit. Campaign-specific survey-based models are too expensive for most down-ballot campaigns to afford, and their districts are often too small to generate the number of survey IDs one of these models would require. And generic signals of partisanship can’t always be trusted the further you go down the ballot.

In our analysis of generic Democratic Party support scores provided by other Democratic data vendors, we’ve found that these scores are well-correlated with actual results in federal races, but explain less variation in support for Democrats running in state legislative and local races.

We believe these campaigns, with limited resources and so much on the line, are the most in need of high quality targeting data. Our support scores are built to fill that need.

THE DATA WE USE

This model uses traits of voters, traits of candidates, election-related media coverage, campaign finance reports, and election results.

  • Voter traits -- We use data from TargetSmart (for soft side customers) and the DNC (for hard side customers) to determine a set of demographic and socioeconomic traits for almost all American adults. We then use historic snapshots of the voterfile and the US Census Bureau’s American Community Survey to project the traits of historic populations of voters.

  • Candidate traits -- We rely on VoteSmart, Open States, Reflective Democracy, and various state election agencies to collect information on candidates for office -- including their incumbency status, endorsements from issue advocacy organizations, demographics, history in office, and more.

  • Media -- We license historic and current online, print, and TV news content from Critical Mention and Aylien. We then identify articles and clips related to elections, and match them to the appropriate campaigns. We also use natural language processing tools (both licensed and built by our team) to parse sentiment, topics, and more.

  • Finance -- We scrape itemized and summary campaign finance data from state campaign finance portals, the National Institute on Money in Politics, and Illumis to make sure our data is as comprehensive as possible. We then match contribution records to individual people, allowing us to understand how the traits of a campaign’s contributors are changing over time.

  • Results -- Finally, we gather election results at the district, precinct, and Census block level from Open Elections, TargetSmart, Statewide Database, and a number of state and county public election result portals. These results are used to calculate elasticity.

WORKING WITH AGGREGATE RESULTS DATA

We are able to build campaign-specific, individual-level models at scale by relying on historic precinct- and block-level election results. Training a model on an aggregate response (e.g., precinct-level results rather than individual-level survey responses) creates a potential issue known as the ecological inference problem. However, we’ve worked to mitigate these issues and have ended up with a model that validates very well on individual-level survey responses despite not being trained on them (see “Survey Validation” below).

The process starts by taking a contemporary snapshot of registered voter traits at the individual-level. As described above, our primary sources for this data are the DNC (on the coordinated side) and TargetSmart (for independent entities).

Then, to better understand the traits of these voters at the time of the elections we have results for, we have developed historic representation weights that describe how representative each person would have been of the population in their census block at the time of each past election.

We use data from the American Community Survey (ACS) and historic voterfile snapshots to produce these weights. The ACS is a product of the U.S. Census Bureau that provides annual rolling insights into how the American population is changing at the level of small geographies, such as block groups and census tracts. It is based on a much smaller sample than the decennial census, but it provides much more detailed information.

With this data, we have produced historic registration scores, showing the probability that a person would have been registered to vote in past elections, as well as historic turnout probabilities. Together, these weights allow us to leverage contemporary population data for past elections -- benefiting from deep, high-quality information about individual voters that was not available at the time.
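
As a rough illustration of how these probabilities might be combined (an assumption on our part; the exact weighting formula is not spelled out above), a person’s representation weight for a past election could be treated as the product of their historic registration and turnout probabilities, normalized within their census block. The column names below are hypothetical:

```python
# Hypothetical sketch: representation weight = registration probability x
# turnout probability, normalized within each census block. Column names
# (p_registered_*, p_turnout_*, census_block) are illustrative, not Deck's.
import pandas as pd

def representation_weights(voters: pd.DataFrame, election: str) -> pd.Series:
    """How representative each current voter-file record would have been of
    its census block's electorate at the time of a given past election."""
    raw = voters[f"p_registered_{election}"] * voters[f"p_turnout_{election}"]
    # Normalize so weights sum to 1 within each census block.
    return raw / raw.groupby(voters["census_block"]).transform("sum")
```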

We then use those probability weights to reconstruct what the voters in each precinct or census block looked like at the time of past elections. Here, we represent the distribution of voter traits rather than just simple means of traits. A 10-person block in which 5 voters have no income and 5 voters make $100,000 a year is very different from a block in which all 10 voters have incomes of $50,000 a year. Our distribution-centric approach allows our models to learn those differences, which mitigates the ecological inference issues we might otherwise face.
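
To make the distribution-centric idea concrete, here is a minimal sketch (function and column names are ours, not part of the production pipeline) that summarizes a trait within a block using weighted quantiles rather than a single weighted mean, so that a bimodal block looks different from a uniform one:

```python
# Minimal sketch: represent a block-level trait by its representation-weighted
# quantiles instead of a single weighted mean. Names are illustrative.
import numpy as np
import pandas as pd

def weighted_quantiles(values: np.ndarray, weights: np.ndarray,
                       qs=(0.1, 0.25, 0.5, 0.75, 0.9)) -> np.ndarray:
    """Weighted quantiles of a trait, using the usual midpoint convention."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = (np.cumsum(w) - 0.5 * w) / np.sum(w)
    return np.interp(qs, cum, v)

def block_trait_distribution(block: pd.DataFrame, trait: str, election: str,
                             qs=(0.1, 0.25, 0.5, 0.75, 0.9)) -> pd.Series:
    """Weighted quantiles of one trait within one block for a past election."""
    values = weighted_quantiles(block[trait].to_numpy(float),
                                block[f"weight_{election}"].to_numpy(float), qs)
    return pd.Series(values, index=[f"{trait}_q{int(q * 100)}" for q in qs])
```

In the income example above, the half-$0/half-$100,000 block and the uniform $50,000 block share the same weighted mean, but their quantile features differ sharply, which is exactly the signal the model needs.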

HOW THE MODELS ARE BUILT

The first step in preparing this model is to assemble its training data. After taking the steps described above to build historically representative voter traits and election results, we gather the traits of the candidates on the ballot in a given contest; features describing the volume and sentiment of media coverage of their race; detailed campaign finance data, including the demographic traits of contributors; and data on which audiences were most likely to be exposed to certain types of media coverage.

Each instance of a precinct/block-level result, voter traits, and candidate traits constitutes a single training sample. Our database currently includes over 940 million training samples for our candidate support models.

Next, we use our training data to identify the features most likely to have high predictive power -- either alone or in combination with others -- and those most likely to confuse a model into overfitting or diminish the impact of other features. At this stage, we prune highly correlated features and features without meaningful variation, use a technique called VSURF (Variable Selection Using Random Forests) to better understand how features will interact with each other, and impute missing data.
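
A simplified sketch of that pruning step is below; the thresholds and the median imputation are illustrative assumptions rather than our production settings:

```python
# Illustrative pruning: drop near-constant features, drop one feature from each
# highly correlated pair, then impute what's left. Thresholds are assumptions.
import numpy as np
import pandas as pd

def prune_features(X: pd.DataFrame, min_variance: float = 1e-4,
                   max_corr: float = 0.95) -> pd.DataFrame:
    """X is assumed to contain only numeric feature columns."""
    # 1. Drop features with essentially no variation.
    X = X.loc[:, X.var() > min_variance]

    # 2. Drop one feature from every highly correlated pair.
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > max_corr).any()]
    X = X.drop(columns=to_drop)

    # 3. Impute remaining missing values (median imputation as a stand-in).
    return X.fillna(X.median())
```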

Finally, we iteratively design a deep learning architecture to predict our outcome. In this case, we’ve built a ten-layer neural network, trained with the Adam optimization algorithm to minimize binary cross-entropy loss.
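
As a rough sketch of what such an architecture can look like in Keras (the layer widths, activations, and metrics below are illustrative; only the ten-layer depth, the Adam optimizer, and the binary cross-entropy objective are described above):

```python
# Illustrative ten-layer binary classifier: nine dense hidden layers plus a
# sigmoid output, trained with Adam on binary cross-entropy. Widths are assumed.
from tensorflow import keras
from tensorflow.keras import layers

def build_support_model(n_features: int) -> keras.Model:
    model = keras.Sequential()
    model.add(keras.Input(shape=(n_features,)))
    for units in (512, 512, 256, 256, 128, 128, 64, 32, 16):
        model.add(layers.Dense(units, activation="relu"))
    model.add(layers.Dense(1, activation="sigmoid"))  # tenth layer: support probability
    model.compile(optimizer=keras.optimizers.Adam(),
                  loss="binary_crossentropy",
                  metrics=[keras.metrics.AUC(name="auc")])
    return model
```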

EVALUATING ACCURACY

To validate this model, we trained a version of it with no knowledge of data from 2018. Instead, the model was trained only on data from 2010 through 2016 -- containing millions of unique campaign-voter representations.

The result was a model with significant predictive power. When validated on over 56,000 testing samples from 2018, we found that the model’s area under the ROC curve was 0.88, meaning the model ranked a randomly chosen supporter above a randomly chosen non-supporter 88% of the time. The model’s sensitivity (or true positive rate) was 0.90, and its specificity (or true negative rate) was 0.88. In a lift chart organized by decile (indexed so that 100 represents the rate for a randomly selected person), the top decile had a lift of 228 while the bottom had a lift of 7. This means that people with scores in the top decile were more than twice as likely as a random person to support a given candidate, while those in the bottom decile were less than a tenth as likely.

Observations | Mean support (actual) | Mean support (predicted) | Lift
5,623 | 0.81 | 0.93 | 228
5,624 | 0.72 | 0.87 | 203
5,623 | 0.60 | 0.70 | 169
5,624 | 0.49 | 0.37 | 139
5,623 | 0.39 | 0.13 | 111
5,624 | 0.25 | 0.05 | 72
5,623 | 0.18 | 0.02 | 51
5,624 | 0.14 | 0.00 | 12
5,623 | 0.03 | 0.00 | 8
5,624 | 0.02 | 0.00 | 7
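
For reference, the headline metrics above (AUC, sensitivity, specificity, and decile lift indexed so that 100 is the baseline rate) can be computed from held-out predictions along these lines; y_true and y_score stand in for our internal test data:

```python
# Sketch of the evaluation step: ROC AUC, sensitivity, specificity, and lift by
# decile (indexed to 100 = the overall support rate). Inputs are placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true: np.ndarray, y_score: np.ndarray, threshold: float = 0.5):
    auc = roc_auc_score(y_true, y_score)
    tn, fp, fn, tp = confusion_matrix(y_true, y_score >= threshold).ravel()
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate

    # Split samples into deciles by predicted score, highest scores first.
    deciles = np.array_split(np.argsort(-y_score), 10)
    lift = [100 * y_true[idx].mean() / y_true.mean() for idx in deciles]
    return auc, sensitivity, specificity, lift
```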

MOST VALUABLE PREDICTORS

While it’s difficult to measure variable importance in deep learning models, we can use the output of our VSURF runs to estimate which variables have the most predictive power. The most significant variables are described below, grouped by category.

Voter traits

  • Age

  • Gender

  • Race

  • Marital status

  • Education status

  • Party affiliation (if available)

  • Likelihood of consuming print media

  • Likelihood of consuming TV news

  • Nearby population density

Candidate traits & context

  • Partisanship of contributors

  • Fundraising compared to previous candidates for the same office

  • Incumbency status

  • Polling average (if available)

  • Total amount of online news coverage

  • Total amount of TV news coverage

  • Count of unique donors

  • Sentiment distribution of news coverage

SURVEY VALIDATION

Through an analysis of survey responses collected by YouGov in North Carolina throughout September 2020, we were able to see how our support scores for Joe Biden, Cal Cunningham, and Democratic candidates for the U.S. House compared with actual stated support for these candidates. We were also able to see how our scores compared to generic scores developed by other Democratic data vendors. Raw data on this analysis is available here.

Below, you can see what share of survey respondents in each of five Deck support score buckets indicated support for a given Democratic candidate. In most cases (except in buckets with very small sample sizes, shown in parentheses), the share of survey respondents indicating support falls squarely within the expected Deck probability range.

Deck support score range | Surveyed Biden support | Surveyed Cunningham support | Surveyed House Dem. support
80% - 100% | 91% (164) | 93% (167) | 91% (152)
60% - 79% | 65% (23) | 71% (22) | 61% (19)
40% - 59% | 70% (10) | 90% (11) | 78% (9)
20% - 39% | 65% (23) | 61% (20) | 55% (24)
0% - 19% | 8% (171) | 9% (171) | 16% (187)
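
The bucket shares above can be reproduced with a straightforward grouping of respondents by score range; the column names here are illustrative:

```python
# Sketch of the calibration check: bin respondents by Deck score and compute
# the share indicating support in each bucket, plus the bucket sample size.
import pandas as pd

def support_by_bucket(df: pd.DataFrame, score_col: str, support_col: str) -> pd.DataFrame:
    bins = [0, 0.2, 0.4, 0.6, 0.8, 1.0]
    labels = ["0%-19%", "20%-39%", "40%-59%", "60%-79%", "80%-100%"]
    buckets = pd.cut(df[score_col], bins=bins, labels=labels, include_lowest=True)
    grouped = df.groupby(buckets, observed=True)[support_col]
    return pd.DataFrame({"support_share": grouped.mean(), "n": grouped.size()})
```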

We also took a look at how our scores predicting support for Cal Cunningham compared with survey-based Democratic Party support scores from another vendor. As shown in the area-under-the-curve and gains charts below, we found that our scores were more precise.

At the same time, the DNC’s in-house partisanship score, which is updated daily using survey IDs collected by Democratic campaigns across the country, was slightly more precise than our scores across the federal races we analyzed. Accuracy metrics covering our scores, the DNC’s scores, and another vendor’s scores are shown below.

Metric | Deck | Other vendor | DNC
AUC | 0.96 | 0.93 | 0.96
MSE | 0.10 | 0.11 | 0.08
Accuracy | 0.87 | 0.87 | 0.88
Sensitivity | 0.91 | 0.90 | 0.90
Specificity | 0.84 | 0.84 | 0.86
Maximum lift | 189 | 184 | 189
Minimum lift | 0 | 0 | 0

HOW WE HANDLE RACES WITH NO OFFICIAL CANDIDATES

We frequently have campaigns and organizations interested in using our scores before primaries (or even candidate filings!) have taken place. We handle these cases by building averages of historic candidate traits for unique party, office, and census block combinations (with more recent candidate traits weighted more heavily). We then take those “composite” candidate traits, current voter traits, and current socioeconomic context to generate support scores using the model described above.
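
As a rough illustration of that compositing step (the exponential recency weighting and the four-year half-life below are assumptions for the sketch, not our published method):

```python
# Illustrative composite-candidate traits: a recency-weighted average of past
# candidates for the same party, office, and census block combination.
import pandas as pd

def composite_candidate(history: pd.DataFrame, trait_cols: list[str],
                        current_year: int, half_life: float = 4.0) -> pd.Series:
    """history holds one row per past candidate for the same party/office/block,
    with an election_year column; more recent candidates count more."""
    age = current_year - history["election_year"]
    weights = 0.5 ** (age / half_life)
    weighted = history[trait_cols].mul(weights, axis=0)
    return weighted.sum() / weights.sum()
```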
