674M response paper 1


As part of a course on the internet and political campaigns at HKS, I wrote a response paper to David Karpf’s Analytic Activism.


Introduction

“Analytic activism” is a strange phrase to hear for the first time. Those two words don’t conjure up the same image at all: analytics is abstract and quantitative, the sort of thing an analyst does with code and charts to prepare for a business meeting; activism is messier and more personal, something activists do by marching, rallying, having one-on-ones with volunteers, and getting arrested at protests. You could forgive someone unfamiliar with the intersection of the two for doubting that there’s much there.

David Karpf, in his appropriately named 2016 book Analytic Activism, makes a convincing case that there’s rather a lot there: that activist groups in the age of the Internet increasingly rely on the methods of analytics to achieve their goals. Of course, anyone active on the Internet can tell that people like to use it for political activity, whether that activity is sharing articles on Facebook or forwarding petition emails. Karpf’s contribution is to delineate and describe a particular style of data-informed digital activism, one practiced by groups rather than individuals, and especially by groups large enough to use sophisticated statistical methods. (Karpf 2016 ch1, loc 114; subsequent references are to Karpf 2016 except where noted; note that Kindle editions have “locations” rather than page numbers.)

This style of activism has roots as far back as the 1990s, but it really took off in the mid-2000s, with the presidential campaigns of Howard Dean and then Barack Obama. The essentials of analytic activism are familiar from the media coverage of both campaigns. Karpf describes how the Obama campaign “tested everything,” using the results from its experiments to “build larger email lists, raise more money, and spend that money more efficiently” than it would have been able to otherwise (ch1, loc 384). It’s fundamentally an exercise in data-driven optimization of an organization: what Karpf, in a striking phrase quoted from Daniel Kreiss, calls “computational management.”1 (ch1, loc 383)

The “how” of analytics

Karpf’s discussion focuses most heavily on experimentation and the “culture of testing” (ch1, loc 181), giving comparatively short shrift to other methods that fall under the heading of analytics and contribute to the data-driven mentality he describes:2

  • Predictive modeling - assigning scores that predict future behavior to the voters, donors, members or other units an organization interacts with. Having built a number of such models for campaigns and party committees (of candidate support, donation propensity, and other types of behavior), I can attest they’re indispensable to the “microtargeted segmentation” Karpf (ch1, loc 414) attributes to modern campaigns; a sketch of what one looks like follows this list. Sometimes they’re even based on the results of experiments.
  • Business intelligence - the traditional reports-and-dashboards style of conveying information to decision-makers. There are various examples in Analytic Activism of what this looks like in practice, the “MeRA” and “ARRRG” metrics at SumOfUs (ch5, loc 3354) being perhaps the clearest. Karpf incisively discusses the role of such reports as “strategic objects” (ch1, loc 226), but underemphasizes the way they usually also serve as the gateway drug for a broader culture of analytics. Simply put, it’s hard to talk an organization that isn’t in the habit of listening to its data into running tests and relying on the results, or into changing a longstanding segmentation to incorporate a predictive model. I’ve found that a good way to start getting buy-in is with descriptive reports, graphics and especially maps that show senior management its familiar organization in a new light.
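
As a concrete illustration of the first of those bullets, here is a minimal sketch of what a donation-propensity model might look like. The feature names, the synthetic data, and the scikit-learn pipeline are my own hypothetical illustration of the general technique, not anything drawn from the book or from a real campaign:

```python
# A minimal, hypothetical sketch of a donation-propensity model.
# Real campaign models use voter-file and engagement features and far
# more careful validation; everything here is synthetic and illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical features an organization might have on file for each member.
members = pd.DataFrame({
    "past_donations": rng.poisson(0.3, n),
    "emails_opened_90d": rng.poisson(4, n),
    "age": rng.integers(18, 90, n),
    "party_match": rng.integers(0, 2, n),
})

# Synthetic outcome so the example runs end to end; in practice this would
# be observed behavior, e.g. whether the member gave to the last appeal.
logit = (-4 + 0.8 * members["past_donations"]
         + 0.1 * members["emails_opened_90d"]
         + 0.5 * members["party_match"])
members["donated"] = rng.random(n) < 1 / (1 + np.exp(-logit))

features = members.drop(columns="donated")
X_train, X_test, y_train, y_test = train_test_split(
    features, members["donated"], random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout AUC:",
      round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))

# The practical deliverable is usually just a 0-100 score attached to each
# record, which then drives the "microtargeted segmentation" discussed above.
members["propensity_score"] = (model.predict_proba(features)[:, 1] * 100).round()
```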

Having a fuller picture of these methods can give a better idea of the day-to-day use of “analytics” in an analytic activist organization: it’s frequently about testing, but not always.

Both experimentation and these other methods also shed light on Karpf’s discussion (ch5, loc 2874) of the “boundary conditions” of analytic activism. The “analytics floor,”3 or the existence of a minimum useful scale for analytics and analytic activism, is a consequence of these methods’ statistical limitations. Karpf discusses the limits and sample size requirements of experiments in some detail:

Larger email lists (or number of website visitors, Facebook fans, etc.) yield larger testing groups. Larger testing groups increase the precision of the results, allowing organizations to identify a smaller minimum detectable effect at a statistically significant level. [emphasis in original] (ch5, loc 2980)

Rather than being a hard cutoff, the floor is gradual: the usefulness of testing degrades smoothly but nonlinearly as the amount of available data shrinks, becoming quite limited for organizations below Karpf and Greenpeace’s rule-of-thumb threshold of a 500,000-address email list (ch3, loc 2192). He doesn’t discuss it much, but similar limitations apply to predictive models. The data available for predicting how voters, donors or members will behave is rarely predictive enough to allow for useful4 models at sample sizes below several thousand (and in digital contexts, requirements are typically orders of magnitude larger).
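
To make that nonlinearity concrete, here is a rough sketch of the standard power calculation behind the passage quoted above. The 4% baseline action rate, the 50/50 split, and the conventional 95% confidence and 80% power thresholds are my own illustrative assumptions, not numbers from the book:

```python
# Rough sketch: minimum detectable effect (MDE) for a two-arm email test,
# using the standard normal-approximation formula for two proportions.
from scipy.stats import norm

def minimum_detectable_effect(list_size, baseline=0.04, alpha=0.05, power=0.80):
    """Smallest absolute lift in action rate a 50/50 test can reliably detect."""
    n_per_arm = list_size / 2
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * (2 * baseline * (1 - baseline) / n_per_arm) ** 0.5

for size in (10_000, 50_000, 100_000, 500_000, 2_000_000):
    print(f"{size:>9,} addresses -> ~{minimum_detectable_effect(size):.2%} "
          "absolute lift detectable")
```

Because the detectable effect shrinks only with the square root of the sample size, going from a 10,000-address list to a 500,000-address one improves precision by roughly a factor of seven - which is why the floor is gradual rather than sharp.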

Even having the scale required to run a test or build a model isn’t necessarily enough, however - it also has to be cost-effective to do so. Karpf, quoting Kevin Collins, explains that an organization also has to “have a large population [from] which you’re drawing a sample.” If the required test groups make up 70% of a political campaign’s district, for example, the gains from optimizing contact with the remaining 30% are unlikely to be large enough to justify the costs of testing. (ch5, loc 2984)

What I’ve called “business intelligence,” and the generally data-driven mindset that can accompany it, don’t suffer from these limitations to nearly the same degree. It’s possible to build a useful report, dashboard or map even for quite small activist programs, and it’s no coincidence that these are the analytics methods that small-scale organizations tend to adopt in practice. Even here, though, there are returns to scale: the larger an organization’s email list (or the more field organizers it has on staff, etc), the greater the payoff for optimizing its program.

It is possible to imagine a world in which organizations surmount some of the limitations of the analytics floor by banding together. A group of organizations that are all individually too small to benefit much from practicing analytic activism might be large enough to do so if they work together. In practice, however, attempts to organize such cooperation haven’t been especially fruitful. The collective action problem is formidable, even when there’s a natural central convening authority. After the 2012 election, for example, the DNC launched (Leichtman 2015) an attempt to scale the 2012 Obama campaign’s practices down to smaller campaigns. It achieved some real successes, but the more ambitious goals fizzled out for lack of money and buy-in. Needless to say, if the institutional Democratic Party has a hard time pulling off collective analytics, the more disorganized and fractious world of activist groups is likely to find it even harder.

Finally, it’s worth remembering that in Karpf’s taxonomy of the uses of analytics (ch1, loc 430, Table 1.1), all of these methods are very low-level. They’re all part of the how of analytics implementation - methods that can be applied to any goal - rather than the why of organizational purposes the methods are used to assist.

The “why” of analytics

The latter category, as Karpf describes it, ranges in scope from optimizing tactics5, to evaluating them against each other, to incorporating analytics-derived member feedback in governance. (ch5, loc 2975) The broad applicability of analytics methods hints at one of Analytic Activism’s central themes: the value neutrality of its eponymous methods. Karpf recounts how groups from the Sierra Club to, in one memorable example, Uber (ch6, loc 3887) have used the same playbook of analytically informed digital activism to further their very different ends. Even companies or organizations without specific policy goals can and do practice analytic activism:

  • Change.org’s entire business model is analytic activism, using “predictive analytics to serve up petitions to its visitors” (ch2, loc 848) in a way that generates list growth for its partner organizations rather than pushing any specific vision for society.
  • Upworthy, in a different way, is also in the business of analytic activism. Upworthy is more focused on social media (Change.org petitions, by contrast, spread mostly through email), but it’s still “heavily invested in the culture of analytics and testing.” (ch1, loc 581)6

None of these organizations do business in the same way or for the same reasons, but all of them have found analytic activism useful.

Regardless of an organization’s final goals, using the methods of analytic activism doesn’t even necessarily dictate its intermediate goals (though it certainly influences them). Karpf discusses at length the distinction Marshall Ganz and others draw between organizing supporters and mobilizing them, and further breaks out campaigning as a category of engagement. It’s worth summarizing these three styles of engagement as he presents them:

  • Activism through mobilization “is about breadth - the number of bodies at a rally, signatures on a petition, or phone calls to a senator.” (ch1, loc 617)
  • Activism through organizing “is about depth - the number of volunteer leaders committed to your cause, the skills and relationships they have developed, and the hours and resources they are willing to give.” (ch1, loc 619)
  • Campaigning, meanwhile, is orthogonal to the mobilizing/organizing distinction. Campaigns are focused on achieving specific goals, like electing a candidate, through whatever means are most effective. Questioning whether it qualifies as “activism” is a recurring subtext for Karpf (e.g., ch1, loc 425), but it’s clearly at least activism-adjacent.

There are compelling examples of employing analytic activism for all three of these styles, but, as Karpf discusses, it’s clearly most useful for mobilizing and campaigning. Barack Obama’s campaigns for president, which benefited from having the simple goal of winning the election7, pioneered many of the methods. Activist groups that focus on similarly direct and quantifiable mobilization efforts can employ similar methods: a signature on a petition, or a phone call to a voter, is the same whether it’s intended to build long-term power or win a short-term victory.

Organizing is a tougher nut for analytics to crack. Karpf’s discussion of the “analytics frontier” centers on this very issue. Long-term relationship building is hard to quantify, and that’s clearly in tension with the “you are what you measure” spirit of political analytics. (ch5, loc 3059) The difficulty of using analytics for organizing - or for similarly challenging questions like “impacts on elite decision makers” (ch1, loc 611) - is usually resolved in one of two ways: groups whose primary mission is organizing don’t adopt much analytics, and don’t benefit as much from analytic activism as groups that do; and groups that adopt a greater focus on analytics drift away from hard-to-measure organizing. Karpf’s choice to call these issues the analytics “frontier,” rather than something like “dilemma,” gets it exactly right: organizing may never be as data-driven as mobilization, but the relationship between analytics and organizing can be improved - it just hasn’t been done yet.

Another component of what he describes as the analytics frontier may prove more challenging. It’s easy to identify what users’ “revealed preferences” are through their interactions with organizations online. They may, and according to Karpf usually do, show a preference for cute animals over long policy discussions, or smaller and more personalized causes like those on Change.org over more systemic ones. But when polled about it, as Avaaz and MoveOn do, these same users will frequently express contrary preferences, called “metapreferences,” for the same things they don’t end up clicking on or interacting with. (ch2, loc 1108) “Both of these preferences are real,” in Karpf’s phrasing - it’s not a case of users lying on surveys. (ch2, loc 1108) He and others frequently use the metaphor of sugar and vegetables in a diet - people will eat too much sugar, given the chance, but they’ll wish they’d had something healthier. The problem (of not neglecting users’ metapreferences) is difficult because all the incentives in digital contexts push the other way. Revealed preferences, broadly speaking, are usually for things that foster more engagement - more clicks, shares and likes - which are also what social media ranking systems and online ad networks reward. Organizations that want to satisfy their users’ metapreferences have to do so deliberately, and frequently sacrifice revenue or reach as a result. Discussing this distinction, and framing it in context, is one of the more valuable services Analytic Activism performs.

The role of the media

But perhaps Karpf’s most useful contribution, and one that in my experience is underappreciated in the practitioner community, is to emphasize the role of the media. He’s not the first to make the point that social movements gain power by influencing the media (the “media theory of movement power”, ch1, loc 240), but he makes it cogently, persuasively, and in a digital context. It makes intuitive sense: no single activist organization has nearly the reach of the mass media. Even on more peer-to-peer social media, journalists tend to be well connected and influential. Appealing to journalists and the media institutions they represent is thus an effective way for a modern social movement to reach those who might not otherwise engage with it, just as the Selma protesters took advantage of “the affordances provided by the broadcast-era media environment.” (ch1, loc 298)

The traditional media, in other words, is a force multiplier for activism, and much of analytic activism revolves around how to use it. Change.org, where “half of our staff are media staff,” understands this well and has built a thriving business on top of it (ch3, loc 1546).

Anecdotally, I’ve found that that recognition is not universal. Even practitioners of analytical methods in politics and activism can focus too completely on the program they’re executing and not enough on the way it’s perceived in the media. Conversations with voters, donations received, phone calls made and all the other metrics that track a successful movement-building or campaign operation are indispensable, but they aren’t the whole story. Things done purely for their value as media objects can matter rather a lot. (I suspect Donald Trump would agree.)

If indeed the professional political analytics community doesn’t focus enough on media coverage and dynamics, it’s natural to wonder why. Certainly, one contributor is the same data “availability bias” that Karpf (ch6, loc 3945) points to in other contexts. When voter files and email lists include all the data needed for running field and digital programs, focusing analytic efforts on those programs makes sense. But I’d speculate that it also stems from the division in campaign and activist culture between ‘field people’ and ‘comms people’. They tend to come from different backgrounds, be siloed off from each other in org charts, and develop different cultural values: long, slow, quiet hard work in one case and rapid, publicity-seeking reactivity in the other. Once data availability connects analytics to field programs, there are very few natural bridges to the comms part of an activist organization.

The continuing power of the offline world

While Karpf’s perspective appropriately centers the media, he doesn’t give enough consideration to offline activity. The Internet hasn’t yet repealed real life, and even a book about online politics could benefit from acknowledging the connection between its digital subject matter and offline political activity.

In a narrow sense, that connection can be operational, within an organization and even within an analytics team. For example, a digital analyst can benefit greatly from the ability to connect digital members or donors to offline data about them (which generally requires that the organization have enough offline presence to have invested in such data to begin with).

To give a personal example, I’ve worked on digital analytics for multiple campaigns and Democratic Party committees, and I’ve always found voter-file data useful. Even given a user’s previous donation behavior, matching an email list against the voter file adds predictive power8. As Karpf also notes at one point (ch2, loc 1285), the best predictors of political behavior are usually demographics and public records: age, sex, race, geographic location, and so forth, though proprietary data in the form of support-model scores can also be valuable.

None of that data would be usable if campaigns were online-only affairs. For real-world data to be useful for analytics, it first has to be available.

And indeed, in a broader sense, that offline data and the programs collecting it are being affected by the methods of analytics at the same time, and frequently in the same ways, that digital programs are. The experimentation and technologically assisted listening Karpf identifies (ch1, loc 114) as the hallmarks of analytic activism have close analogues in offline politics. Direct-mail, phone and canvass experiments are the bread and butter of more than one political analytics team; analytics-assisted survey programs, which operate differently from traditional polling and exist to feed models of voter or member opinion, resemble the “passive democratic feedback” (ch1, loc 428) of Karpf’s online organizations more closely than he acknowledges. The similar changes underway in both online and offline programs suggest common causes, and could have enabled a broader and more comprehensive treatment of the analytics phenomenon in politics.

Conclusion

The rise of digital media, from social to search to online news, has had far-reaching effects on politics. Activist organizations have responded to these developments with new modes of activism, in particular with the large-scale, testing- and listening-focused online programs David Karpf identifies as “analytic activism.” When these programs are discussed in the media, they tend to be forced into tropes about either data-science wizardry or invasive big data, but the truth is more mundane.

Digital activism is benefiting from and being shaped by the techniques of analytics, as businesses and offline activism are, but with features unique to the digital context. The low cost of sending email, together with the media’s high propensity to pay attention to things that happen online, makes for a qualitatively different environment than the one confronting, say, a direct-mail program. Negotiating the role of these programs in activist communities, both the “floor” under using them and the “frontiers” of questions they can be applied to, is and will remain an active subject of academic and professional research.

References

Karpf, David. Analytic Activism: Digital Listening and the New Political Strategy. Oxford: Oxford University Press, 2016. Kindle edition.

Leichtman, Dave. “What Happened to Project Ivy?” ePolitics.com. https://www.epolitics.com/2015/09/23/what-happened-to-project-ivy/ (published September 23, 2015; accessed January 30, 2018).

  1. Though they both reserve this specific term for a middle-scale sort of optimization, larger than individual tactics but smaller than organizational goal-setting. 

  2. Speaking here from personal experience; as elaborated on in the bullets, I’ve used each of these methods in electoral or issue campaigns at some point. 

  3. I’ve heard this term used in the community since at least 2014 - if Karpf didn’t coin it, it would be interesting to know who did. 

  4. Or nontrivial. After all, the most accurate simple model of digital donations (99%+ accuracy!) is to predict that no one will donate. 

  5. Roughly, I’ve used “tactics” to refer to actions an organization can take in service of its goals (sending fundraising email, hosting local chapter events, etc) and “methods” to refer to the tools of analytics that organization can use to evaluate its tactics. 

  6. Any organization that evaluates 25 potential titles for each piece of content almost has to be. 

  7. “Getting to 270,” in the ubiquitous phrase around headquarters. 

  8. Or, in an experimental context, precision. Better stratification or pair-matching allows an organization to do more powerful tests with the limited sample size that’s available; a minimal sketch of what that looks like follows below. 
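
To illustrate what that last note means in practice, here is a minimal sketch of pair-matched assignment: sort the available sample by an existing model score, pair off adjacent records, and flip a coin within each pair. The column name and DataFrame layout are hypothetical, and a real program would typically match on several covariates rather than a single score:

```python
# Minimal sketch of pair-matched random assignment. Sorting by a prior
# model score and randomizing within adjacent pairs balances treatment
# and control on that score by construction, improving test precision.
import numpy as np
import pandas as pd

def pair_matched_assignment(df, score_col="support_score", seed=0):
    rng = np.random.default_rng(seed)
    df = df.sort_values(score_col).reset_index(drop=True)
    df["pair_id"] = df.index // 2
    # For each pair, pick which position (first or second) gets treatment.
    treated_position = rng.integers(0, 2, size=df["pair_id"].nunique())
    df["treatment"] = (df.index % 2) == treated_position[df["pair_id"]]
    return df

# Usage with a toy ten-person universe:
toy = pd.DataFrame({"support_score": np.random.default_rng(1).random(10)})
print(pair_matched_assignment(toy)[["support_score", "pair_id", "treatment"]])
```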
