BY ASHLEY LEE
Increasingly, we have come to think of digital platforms such as Facebook and Twitter as the public sphere: a place where citizens can freely discuss the issues of the day and engage with a diversity of opinions. However, the core business model of these companies—selling ads by harvesting the attention of targeted cross-sections of the public—is often at odds with these goals. The algorithms driving these commercial platforms are designed to capture, manipulate, and predict attention based on massive amounts of data collected about their users, without necessarily encoding healthy democratic values. Following recent political events such as the 2016 U.S. election and the success of the Leave campaign in the U.K., authorities found evidence that digital platforms can be used to undermine democratic processes by spreading false information, promoting tribalism, and manipulating public opinion through micro-targeting.
Yet our conversations predominantly focus on the specific actors and mechanisms that contribute to these disruptions. It is incumbent on policymakers to drive broader, society-wide conversations about the digital platforms that structure and govern our everyday civic life, and about the ideals (e.g. deliberative, participatory) that should inform the design of such platforms. These leaders have an important role to play in promoting and supporting conversations around such issues at all levels – local, regional, national, and transnational – bearing in mind that those on the social margins have less voice and influence, yet may have to bear the greatest cost.
The Issues at Stake
Algorithms driving digital platforms cater to the human predisposition toward tribalism and our tendency to band with like-minded others. These predispositions, combined with technology, can cause disinformation to spread virally through self-reinforcing circles on social media, as evidenced by the spread of the #Pizzagate conspiracy during the 2016 U.S. election. In short, your Facebook News Feed will serve up more of what you and your friends like to see.
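To make this mechanism concrete, here is a minimal sketch of engagement-based ranking. The scoring weights and data are purely hypothetical, not any platform's actual algorithm; the point is only that ranking by past engagement surfaces more of what a user and their network already respond to, creating the self-reinforcing loop described above.

```python
# Toy illustration of engagement-based feed ranking (hypothetical weights;
# not any real platform's algorithm). Posts on topics you and your network
# already engage with score higher, so similar content keeps surfacing.

def rank_feed(posts, my_likes, friend_likes):
    """Order posts by a simple predicted-engagement score for one user."""
    def score(post):
        topic = post["topic"]
        # Weight the user's own past engagement more than the network's.
        return 2 * my_likes.get(topic, 0) + friend_likes.get(topic, 0)
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "topic": "conspiracy"},
    {"id": 2, "topic": "local news"},
    {"id": 3, "topic": "sports"},
]
my_likes = {"conspiracy": 5, "sports": 1}       # hypothetical counts
friend_likes = {"conspiracy": 8, "local news": 2}

feed = rank_feed(posts, my_likes, friend_likes)
# The topic with the most past engagement rises to the top of the feed,
# which in turn invites still more engagement with that topic.
```

Each round of engagement feeds back into the next ranking, which is why emotionally charged or conspiratorial content that attracts clicks can come to dominate a feed.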
Automated tools, online anonymity, and the disappearance of traditional gatekeepers have paved the way for troll factories and political bots that seek to manipulate attention and vie for influence, even beyond nation-state borders. The result is an increasingly muddy information landscape in which it becomes extremely challenging for citizens to sort fact from fiction.
Muddying the information ecosystem is common practice in authoritarian regimes: Russia, for example, employs disinformation propaganda to create confusion and skepticism about information and engender cynicism and political inaction. China has unleashed state-backed Internet commenters to sway public opinion and divert citizens’ attention in times of political crisis. The ongoing Mueller investigation in the U.S. shows that representative democracies are not insulated from similar influences, especially as political actors across the world develop more sophisticated cyber capabilities aimed at interfering with democratic processes.
Compounding the issue, surveillance technologies are expected to become increasingly sophisticated and affordable, allowing powerful elites to harvest and analyze even more data for social and political control. The rise of data-mining and analysis firms specializing in strategic political communication, such as Cambridge Analytica, has kept pace with the decreasing costs of political micro-targeting. At the same time, advances in machine learning are expected to contribute to predictive governance. China recently announced its plan to roll out a credit scoring system to rate its 1.3 billion citizens, and the Trump administration is interested in machine learning tools to help implement extreme vetting at the U.S. borders.
Balancing Algorithmic Civic Life
Is the future of civic life a foregone conclusion? Are we spiraling towards a dystopian future in which citizens become subjects of technology-powered control? The answer depends on how all citizens are empowered to exercise agency to shape the future of digital life.
Individuals and communities, especially those on the margins, may lack the resources, skills, and knowledge to participate in such conversations. Policymakers must mobilize resources to enable their participation.
Government agencies (e.g. the Department of Education) must allocate funding to support educational efforts that empower citizens as critical consumers and producers of new digital infrastructures. This means going beyond analyzing media content or learning about security and privacy settings, the focus of many media literacy programs that exist today. As digital platforms increasingly come to serve as extensions of public spheres and infrastructures of governance, citizens—and not just political and academic elites—should be equipped to think critically about the infrastructures that structure and govern their own lives: What values and principles are being encoded into our technological infrastructures? What democratic values and principles should inform the design and adoption of these tools? What alternative infrastructures are possible? Ultimately, citizens should be equipped with the knowledge and skills to co-create the alternative digital future(s) they want.
Policymakers must aim for open and inclusive processes in formulating policies around digital platforms. Inclusive policymaking will help recognize and reduce bias and discrimination in policies toward individuals and communities that may be subject to technologies of governance and control—and whose voices are currently silenced. Such conversations should not be left to Facebook, Google, and a handful of political and corporate actors. Interventions might include referenda, public petitions, and town halls and consultations executed online and offline, in arenas that reach these populations.
Policies must democratize algorithms and our digital public spheres at all levels. Stakeholders can work together to create public demand for competing platforms that strive to balance public good and business, deploy new civic curricula in which students learn to think critically about digital governance systems and processes, and allocate public media investments to content that educates all sides. To ensure that the algorithms driving our digital civic life meet transparency and accountability standards, a multi-pronged approach should empower the fourth estate with watchdog powers, build community forums where citizens can regularly voice their ideas and concerns about digital public spheres and digital governance, and incentivize alternative business models that allow people to take back control of their data. Planning for inclusion in the algorithmic society, policymakers can use public diplomacy to counter deterministic narratives about technological futures that leave out human agency, and create incentives to cultivate the more diverse engineering workforce required to design algorithmic systems responsive to the diverse needs of our society. Each solution area represents a set of necessary conversations in which few are currently equipped to participate.
We can, and we must, empower all citizens to participate in reimagining our digital future – using technologies that promote inclusion, agency, and ownership.
Ashley Lee is a doctoral candidate and a Social Sciences and Humanities Research Council of Canada Doctoral Fellow at Harvard Graduate School of Education. Her current research examines how new communication technologies are used for civic engagement and social control in democratic and authoritarian countries. She is the Director of Civic Tech with The Future Society at HKS. Ashley holds a BS in Computer Science from Stanford University.