The Ethical Resolve Blog


Cambridge Analytica’s behavioral modeling illustrates new ethical dangers of predictive analytics

The Cambridge Analytica scandal is sprawling and often murky, and will certainly continue to evolve. One thing is now certain, however: predictive analytics is about more than just targeted advertising and media recommendations; it is also about how our society is structured and who has the power to reshape it. Algorithmic prediction, even relatively straightforward A/B testing, is now a vector for activities that most businesses would consider undesirable and tremendously risky in any other context. This applies doubly to platform companies, which now sit in the position of trusted third party between their users and aggressive bad actors seeking to manipulate them.

Ethical Resolve Founder Jake Metcalf has written several articles on Cambridge Analytica that demonstrate how data collection, access, and analytics can carry significant societal and business risks when not carefully controlled.

In MIT Technology Review: “Facebook may stop the data leaks, but it’s too late: Cambridge Analytica’s models live on”

  • “But focusing solely on the purloined data is a mistake. Much more important are the behavioral models Cambridge Analytica built from the data. Even though the company claims to have deleted the data sets in 2015 in response to Facebook’s demands, those models live on, and can still be used to target highly specific groups of voters with messages designed to leverage their psychological traits. Although the stolen data sets represent a massive collection of individual privacy harms, the models are a collective harm, and far more pernicious.

    In what follows, I argue that Cambridge Analytica and its parent and sister companies were among the first to figure out how to turn behavioral algorithms into a portable worldview—a financially valuable, politically potent model of how humans behave and how society should be structured. To understand Cambridge Analytica, the anti-democratic vision it represents, and the potentially illegal behavior that its techniques may make possible, follow the models.”
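
The central technical point in that passage, that a fitted model retains what it learned even after the raw training data is destroyed, is easy to demonstrate. The sketch below is a deliberately toy illustration and assumes nothing about Cambridge Analytica's actual pipeline; the features, labels, and model choice are all hypothetical stand-ins.

```python
# A toy illustration (not Cambridge Analytica's actual methods) of why
# deleting training data does not delete what a model learned from it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-ins for harvested profile features (e.g., page likes)
# and a psychological trait label the modeler wants to predict.
profile_features = rng.integers(0, 2, size=(1000, 20)).astype(float)
trait_labels = (profile_features[:, :5].sum(axis=1) > 2).astype(int)

model = LogisticRegression().fit(profile_features, trait_labels)

# "Delete" the raw data sets, as the company claims it did in 2015...
del profile_features, trait_labels

# ...yet the fitted coefficients persist, and the model can still score
# new, never-before-seen profiles for targeting.
new_profiles = rng.integers(0, 2, size=(5, 20)).astype(float)
print(model.predict_proba(new_profiles))
```

Deleting the data sets removes the individual records, but the model's parameters already encode the population-level patterns those records contained, which is exactly why the models are the more durable harm.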

In Slate: “Are Algorithms the New Campaign Donation?”

  • “However, in the long run, the most consequential claim Wylie made might be that GOP megadonor and Cambridge Analytica’s chief financier Robert Mercer may have funded the development of the behavioral models at the heart of this story at a loss. You might ask: So what? A conservative billionaire sunk a lot of his own money to build a wonkish tool to help Republican candidates find voters more effectively—that doesn’t seem particularly odd.

    Yet this development heralds a new and dangerous age of algorithmic electioneering, one that warrants significant legal scrutiny. Such behavioral models move far more freely and easily between political entities than voter data itself does, enabling organizations that are separated by law to coordinate on core campaign decisions, particularly if they are licensed at prices far below their hard-to-determine fair market value.”
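
The portability claim is worth unpacking: a fitted behavioral model is just a small serializable object, so it can be handed from one organization to another without a single voter record changing hands. The sketch below is a hypothetical illustration of that mechanism, not a depiction of any actual transfer between political entities.

```python
# A hypothetical sketch of model portability: organization A serializes a
# fitted model to a file; organization B loads and uses it without ever
# touching the underlying voter data.
import pickle
from sklearn.linear_model import LogisticRegression

# Organization A fits a model on its own (here, trivial) data...
model = LogisticRegression().fit([[0.0, 1.0], [1.0, 0.0]], [0, 1])

# ...and writes it to a file that can be licensed or shared.
with open("behavioral_model.pkl", "wb") as f:
    pickle.dump(model, f)

# Organization B loads the file and scores new individuals directly.
with open("behavioral_model.pkl", "rb") as f:
    shared_model = pickle.load(f)
print(shared_model.predict_proba([[0.9, 0.1]]))
```

Because the file contains only learned parameters, transferring it looks nothing like a transfer of voter data, which is precisely what makes the coordination it enables so hard for existing election law to see.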

In Slate, with Casey Fiesler: “One Way Facebook Can Stop the Next Cambridge Analytica: Give researchers more access to data, not less.”

  • “This case raises numerous complicated ethical and political issues, but as data ethicists, one issue stands out to us: Both Facebook and its users are exposed to the downstream consequences of unethical research practices precisely because, like other major platforms, the social network does not proactively facilitate ethical research practices in exchange for access to data that users have consented to share.”