Study Design

The CCES is a 50,000+ person national stratified sample survey administered by YouGov Polimetrix. Half of the questionnaire consists of Common Content asked of all 50,000+ respondents, and half consists of Team Content designed by each individual participating team and asked of a subset of 1,000 people. In addition, several teams may pool their resources to create Group Content.

The survey consists of two waves in election years. In the pre-election phase, respondents answer two-thirds of the questionnaire. This segment of the survey asks about general political attitudes, various demographic factors, assessments of roll call voting choices, and political information. The pre-election phase is administered from late September to late October and is rolled out in three distinct periods: the end of September, the middle of October, and the end of October. Spacing interviews across these intervals allows researchers to gauge the effects of campaign information and events on the state and district electorates. In the post-election phase, respondents answer the remaining third of the questionnaire, mostly consisting of items related to the election that just occurred. The post-election phase is administered in November.

In non-election years, the survey consists of a single wave conducted in the early fall.

Common Content

Common Content consists of approximately 60 questions, with about 40 in the pre-election wave and about 20 in the post-election wave. These questions are included on all administered surveys and amount to a 50,000+ person national sample survey. Common Content is asked at the beginning of each survey.

In addition to these questions, YouGov Polimetrix provides demographic indicators, party identification, ideology, and, for most states, validated vote.

Click below to see the current and past years' Common Content:

CCES_2010_Common.doc (120 KB)
CCES_2009_Common.doc (72 KB)
CCES_2006_Common.pdf (126 KB)

Group Content

Group Content arises when several teams pool their resources. These teams may wish to devote, say, 8 questions to a particular topic, such as religion and politics or tax policy. If 10 teams agree on 8 Group Content questions, then those questions are asked on each of their individual Team surveys. Because the Team Content is identical for these 8 questions, the teams have effectively conducted a survey of 10,000 people (10 teams times 1,000 respondents each).

There are opportunities to ask questions particular to individual states if a sufficiently large number of teams are willing to trade questions across samples. (See the discussion under Sample Design.)

Sample Design

The survey sample is constructed by YouGov Polimetrix using a matched random sample technique. The firm begins with two lists: a list of all consumers in the United States, covering approximately 95 percent of the adult population, and a list of people who have agreed to take surveys for YouGov Polimetrix as part of its PollingPoint panel. All YouGov Polimetrix surveys are conducted online using this opt-in panel of respondents. For each list, YouGov Polimetrix has an extensive set of demographics.

In the first stage, a random sample of consumers is drawn. A list of key demographic variables is then recorded for every member of the sample. In essence, each individual drawn is represented as a cluster of demographic characteristics, including age, income, education, race, gender, longitude and latitude, and so on. In the second stage, YouGov Polimetrix uses a matching algorithm to find the PollingPoint panelist who is the closest match to the person drawn from the consumer file. In this way, a complete matched random sample is constructed.
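As a rough illustration of the two-stage procedure, the Python sketch below pairs each sampled consumer record with the nearest panelist on standardized demographics. The variables, the Euclidean distance metric, and the data are assumptions for illustration only; the firm's actual matching algorithm and variable set are more involved.

    import numpy as np

    # Hypothetical demographic matrices: one row per person, one column per
    # variable (e.g. age, income, education, latitude, longitude).
    rng = np.random.default_rng(0)
    consumers = rng.normal(size=(5, 4))   # stage one: random draw from the consumer file
    panelists = rng.normal(size=(50, 4))  # the opt-in PollingPoint panel

    # Standardize each variable so no single demographic dominates the distance.
    mu, sd = panelists.mean(axis=0), panelists.std(axis=0)
    consumers_z = (consumers - mu) / sd
    panelists_z = (panelists - mu) / sd

    # Stage two: for each sampled consumer, select the closest panelist.
    # A production matcher would also prevent reusing the same panelist.
    dists = np.linalg.norm(consumers_z[:, None, :] - panelists_z[None, :, :], axis=2)
    matches = dists.argmin(axis=1)
    print(matches)  # index of the matched panelist for each sampled consumer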

The desired sample for the CCES is a stratified national sample of registered and unregistered adults. We choose strata that guarantee that the study achieves adequate samples in all states. There are three stratification variables in the sample: Registered and Unregistered Voters, State Size, and Competitive and Uncompetitive Congressional Districts.

By stratifying on registered and unregistered voters, we can create a nationally representative sample of US adults using appropriate sample weights. Because the preferences of voters are of particular interest to many researchers, we oversample registered voters. Approximately three-fourths of US adults are registered to vote. In midterm elections, approximately half of registered adults vote. The oversample of registered voters means that actual voters approach one-half of the sample.
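To make the weighting logic concrete, a minimal sketch: each registration stratum's design weight is its population share divided by its sample share, which undoes the oversample when computing nationally representative estimates. Only the roughly three-fourths registration rate comes from the text above; the sample shares below are assumed for illustration.

    # Population shares (approx. 75% of US adults are registered, per the text)
    pop_share = {"registered": 0.75, "unregistered": 0.25}
    # Hypothetical sample shares reflecting the registered-voter oversample
    sample_share = {"registered": 0.90, "unregistered": 0.10}

    # weight = population share / sample share, per stratum
    weights = {g: pop_share[g] / sample_share[g] for g in pop_share}
    print(weights)  # registered ~0.83 (down-weighted), unregistered 2.5 (up-weighted)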

Stratification on state size is used to guarantee adequate sample sizes in small states. There are four strata for state size: states with one Congressional District, states with two, states with three, and states with four or more.

Stratification on competitive congressional districts guarantees an adequate number of districts in which there are very active political campaigns in the fall election.

Altogether this sampling scheme minimizes the number of strata, so as to prevent mistakes, while guaranteeing adequate coverage of all relevant jurisdictions. Crossing the possible combinations (2 registration statuses, 4 state-size categories, and 2 competitiveness categories) results in 16 strata.
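The count of 16 follows directly from crossing the three stratification variables; a quick enumeration (the labels are shorthand, not the study's internal codes):

    from itertools import product

    registration = ["registered", "unregistered"]       # 2 categories
    state_size = ["1 CD", "2 CDs", "3 CDs", "4+ CDs"]   # 4 categories
    competitiveness = ["competitive", "uncompetitive"]  # 2 categories

    strata = list(product(registration, state_size, competitiveness))
    print(len(strata))  # 2 * 4 * 2 = 16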

As a result, each state has sufficient coverage that any team interested in the general politics of a given state can build a state-level survey of approximately 60 questions related to that state. In addition, if a sufficiently large group interested in state politics emerges, its members may be able to trade questions across groups in such a way as to augment the Common Content. If enough teams agree to swap content in this way, then whenever a respondent from a particular state is chosen in any survey within the Group, the questions relevant to that state are asked.

For example, suppose 15 teams have particular state-level questions that they would like to ask. Say the Ohio team wants to ask two questions about Ohio propositions, the Michigan team wants to ask two questions about a hot contest for Secretary of State in Michigan, the Florida team wants to ask two questions about the 2000 election, and so on. Every time an Ohio respondent arises in any sample from among the Group's members' surveys, the Ohio questions are asked. Every time a Michigan respondent arises in any sample from among the Group's surveys, the Michigan questions are asked. And so forth.
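A minimal sketch of this swapping logic, with hypothetical states and question texts: each participating team registers its state-specific items, and any survey in the Group serves a respondent the items for his or her state.

    # Hypothetical state-specific items registered by Group members
    state_questions = {
        "OH": ["Ohio proposition question 1", "Ohio proposition question 2"],
        "MI": ["Michigan Secretary of State question 1",
               "Michigan Secretary of State question 2"],
        "FL": ["Florida 2000 election question 1",
               "Florida 2000 election question 2"],
    }

    def questions_for(respondent_state, shared_items):
        """Shared Group items plus any items swapped in for the respondent's state."""
        return shared_items + state_questions.get(respondent_state, [])

    # An Ohio respondent drawn on any member team's survey gets the Ohio items.
    print(questions_for("OH", ["shared item 1", "shared item 2"]))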

In this way groups can exploit the sample design to develop unique state-level surveys. This strategy seems particularly attractive for larger states from which a disproportionate number of cases will likely be drawn.

Teams

Click on one of the links below to see all of the participating teams for that year's study and individual team pages. 

  • CCES 2010
  • CCES 2009
  • CCES 2008
  • CCES 2007
  • CCES 2006

CCES 2009

  • Harvard/University of Strathclyde/University of Pittsburgh
  • Indiana University
  • Ohio State University
  • Princeton University
  • University of Akron
  • University of California, Merced
  • University of Missouri
  • University of North Carolina
  • University of Strathclyde
  • Yale University

CCES 2008

  • American Enterprise Institute/Brookings
  • American University (CCPS)
  • Brigham Young University
  • Caltech
  • Dartmouth College
  • Duke University (2)
  • Florida State
  • Fordham University
  • George Washington University
  • Harvard University/Massachusetts Institute of Technology
  • Indiana University
  • New York University/Berkeley
  • Ohio State University
  • Pew Foundation
  • Princeton University
  • Stanford, Hoover Institution
  • University of Akron
  • University of California, Merced
  • University of California, Riverside/Ohio State University/Harvard
  • University of California, San Diego
  • University of Maryland
  • University of Maryland, Baltimore County/NCOBPS
  • University of Missouri
  • University of North Carolina
  • University of Strathclyde
  • University of Texas (2)
  • Utah State University
  • Yale University

CCES 2007

  • University of Akron
  • George Washington University
  • Indiana University
  • Massachusetts Institute of Technology
  • University of California, Merced
  • University of Maryland
  • University of Missouri
  • UC Davis/Dartmouth/UC Riverside/Rice
  • University of Pittsburgh

CCES 2006

  • Arizona State University
  • Brigham Young University
  • Caltech
  • Columbia University
  • Dartmouth College (2)
  • Florida State University
  • George Washington University
  • Harvard University
  • Massachusetts Institute of Technology
  • Michigan State University
  • Notre Dame University
  • Stanford University
  • Temple/Reed/UC Merced/Washington
  • Trinity College
  • University of Akron
  • University of California, Berkeley
  • University of California, Davis
  • University of California, Los Angeles (2)
  • University of California, San Diego
  • University of California, Riverside/Ohio State University
  • University of Chicago
  • University of Illinois
  • University of Maryland
  • University of Michigan (2)
  • University of Minnesota
  • University of North Carolina/Duke University
  • University of Pennsylvania
  • University of Pittsburgh
  • University of Texas, Austin
  • University of Washington, Seattle
  • University of Wisconsin
  • Vanderbilt
  • Yale University