BY RAINA DAVIS, BO JULIE CROWLEY, AND CASEY CORCORAN
Editor’s Note: Information for this article was obtained primarily from interviews by the authors. Names and identifying information have been withheld in some cases to protect the identity of the interviewees.
In the aftermath of the 2016 US presidential election, reports of Russian interference and accusations of biased news coverage gave rise to a renewed interest in how information influences politics. Efforts to examine this phenomenon center on three major terms with distinct meanings: “misinformation,” false or incomplete information; “disinformation,” false information spread with the intent to deceive; and “fake news,” a political expression used to criticize a news story or media outlet.
Disinformation tactics date as far back as Ancient Rome and have spanned conflicts such as World War I, the Russian Revolution, and the Cold War. Recent technology and media trends, however, have transformed the impact misinformation has on modern international politics. The spread of social media enables individuals, politicians, and foreign actors to connect directly with a global audience. At the same time, the proliferation of news networks and the 24-hour news cycle threatens editorial standards amidst intense competition and sensationalized news coverage.
While misinformation is a global phenomenon, the majority of research has focused on Western democracies. The authors traveled to Taiwan and South Korea to examine how Asian democracies have experienced and combatted misinformation.
Firsthand interviews with government officials, representatives of media organizations, and civil-society groups revealed that Taiwan experiences disinformation largely in the form of foreign influence from the People’s Republic of China. According to government officials in the Mainland Affairs Council of Taiwan, the Communist Party of China has sought to undermine Taiwanese autonomy and democratic governance, influence politics in favor of Beijing, and promote reunification since the Republic of China government retreated to Taiwan in 1949.
In contrast, the conversation about disinformation in South Korea centered largely on how domestic politicians, rather than foreign actors, spread disinformation for political gain and used the phrase “fake news” to delegitimize politically damaging stories and opposition. Current and former government officials and journalists expressed concern about this trend, with one journalist at a major newspaper describing fake news as a tool government uses to control the Korean people.
Democratic governments lack a consolidated strategy to combat misinformation. Additionally, many interviewees articulated concern that government intervention could infringe on civil liberties and free speech. Given the limitations of a government-led response, civil-society avenues have emerged as a powerful alternative. Civil society, comprising non-governmental and not-for-profit organizations that work toward solving public issues, is independent of both government and business. This allows some civil-society organizations to claim neutrality in the fight against misinformation and validate news quality in ways that could be considered censorship under a government agency.
Research conducted by the authors for the Belfer Center for Science and International Affairs at Harvard University’s John F. Kennedy School of Government in January 2019 uncovered several ways in which civil-society groups in Taiwan and South Korea have chosen to confront the challenge of misinformation. A key finding was that civic leaders face two main decisions when forming their organizations: which media avenues to engage with and what background or expertise their organizations bring to the issue. These decisions determine an organization’s effectiveness and the types of misinformation it can confront.
There are three main avenues through which misinformation spreads: closed-network social media, open-network social media, and traditional media. Closed networks, such as WhatsApp, the Japanese messaging application Line, and the Korean messaging application KakaoTalk, present unique challenges for the fight against misinformation. They facilitate person-to-person message exchange, which inhibits third-party monitoring and identification of false stories. Additionally, the civil-society fact-checking platforms CoFacts and SNU Factcheck both stressed that users of these apps are more likely to inherently trust information exchanged on closed networks because the messages are sent by friends and family.
To overcome these obstacles, the Taiwan-based platform CoFacts has created a chatbot that uses crowdsourcing to fact-check messages within the application Line. When Line users encounter a potential piece of misinformation, they can forward the message to the CoFacts chatbot. The chatbot returns a result based on a database of previously forwarded false messages. But if no match exists, a volunteer editor reviews the message content. The editor then assigns a rating and communicates the finding to the Line user. Editors flag false stories with one of four possible ratings: not applicable (not related to fact-checking), personal opinion (not verifiable information), correct message, or false message.
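Mechanically, the chatbot follows a lookup-then-queue flow: check the forwarded message against the database of prior rulings, and if nothing matches, hand it to a volunteer editor. A minimal Python sketch of that flow follows; all names and data structures here are hypothetical simplifications for illustration, not CoFacts’ actual implementation.

```python
from enum import Enum

class Rating(Enum):
    """The four possible editor rulings described above."""
    NOT_APPLICABLE = "not related to fact-checking"
    OPINION = "personal opinion"
    CORRECT = "correct message"
    FALSE = "false message"

# In-memory stand-ins for the database of previously reviewed messages
# and the queue of items awaiting a volunteer editor.
reviewed = {}       # message text -> Rating
editor_queue = []   # messages with no match yet

def handle_forwarded_message(text):
    """Return a known rating immediately, or queue the message for review."""
    if text in reviewed:
        return reviewed[text]      # chatbot replies to the Line user at once
    editor_queue.append(text)      # a volunteer editor will review it later
    return None                    # user is notified once a rating exists

def editor_submit_rating(text, rating):
    """A volunteer editor records a ruling, making it available to all users."""
    reviewed[text] = rating
    if text in editor_queue:
        editor_queue.remove(text)
```

Because every ruling is written back to the shared database, each editor review answers not just the user who forwarded the message but every user who forwards the same message afterward.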
This crowdsourced model offers a flexible and scalable solution. Anyone is eligible to become an editor, and multiple editors can comment on a single story to offer alternative opinions. This enables partisan dialogue while maintaining CoFacts’ credibility as an impartial third-party platform. The approach, however, could be co-opted by malicious actors using biased editors or bots to flag false information as true. The founders of CoFacts acknowledge this danger but have prioritized growing their reach and user base over guarding against future abuse.
CoFacts has had less success in combating misinformation on open networks. Facebook is incompatible with its chatbot, and CoFacts has not developed a model that would align with Facebook’s third-party-application requirements. To avoid working with large social media companies, Taiwan Factcheck Center (TFC) reviews potential misinformation on its own website and publishes its findings on its own social media pages. Unlike CoFacts, TFC employs career journalists to review each story. They publish lengthy reports that adhere to international press norms on fact-checking and citation. However, the resources required to investigate each story limit the amount of misinformation to which TFC can respond. Yisuo Tzeng of Taiwan’s Institute for National Defense and Security Research stated that the government believes countering a false narrative is effective only within the first 48 hours after a story goes viral. TFC’s time-intensive reviews, by contrast, can take days or weeks, severely limiting its effectiveness.
In an effort to combat misinformation in real time, the Taiwan-based group Watchout has pioneered live fact-checking during political debates. During the last election cycle, Watchout obtained the approval of major television networks to overlay a live fact-checking banner on a special debate stream accessible from Watchout’s social media channels. A team of fact-checkers flagged suspicious statements and reported on their validity within one to five minutes. While they view fact-checking as an objective pursuit, the founders of Watchout are open about personally favoring Taiwanese independence and do not present themselves as a neutral organization. Watchout aims to counter false statements by all candidates in order to raise awareness of misinformation as a phenomenon and, specifically, as a threat to Taiwanese politics. In openly embracing partisanship, however, the group invites accusations of bias in its work. Additionally, although the news networks agreed to the livestream, they prohibited Watchout from broadcasting the debate on certain popular social media platforms out of concern that it would draw viewers away from their own channels. This model therefore requires stronger partnerships with media and technology companies to be effective.
Mainstream media outlets wield a level of credibility that can amplify false stories but can also help invalidate misinformation. One former national security aide described how traditional media plays a prominent role in Korean society, reinforcing the notion that “whatever the media delivers, it becomes fact.” Seoul National University Factcheck Center, run by SNU’s Information Institute, provides an online platform on which media outlets rate stories on a scale from true to false. The platform aggregates an overall score and displays the results and explanations on its own website and under the news section of Naver, South Korea’s most popular search engine. SNU’s approach leverages the fact-checking resources and reputations of major media outlets. Liberal, conservative, and neutral sources are represented on the platform and given an equal opportunity to comment on stories. SNU Factcheck, however, must approve each media outlet, rendering academics gatekeepers of what constitutes a legitimate news organization.
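The aggregation step can be pictured with a short Python sketch. The rating labels, the numeric scale behind them, and the unweighted average are all assumptions made for illustration; SNU Factcheck’s actual scale and scoring methodology are not specified here.

```python
# Hypothetical mapping from an outlet's verdict to a numeric score.
RATINGS = {"true": 4, "mostly true": 3, "mixed": 2, "mostly false": 1, "false": 0}

def aggregate(outlet_ratings):
    """Combine the approved outlets' verdicts on one story into an overall score.

    outlet_ratings maps an outlet name to its verdict label; the overall
    score is a simple unweighted average of the numeric equivalents.
    """
    scores = [RATINGS[verdict] for verdict in outlet_ratings.values()]
    return sum(scores) / len(scores)

# Three (fictional) outlets weigh in on the same story.
story = {"Outlet A": "false", "Outlet B": "mostly false", "Outlet C": "mixed"}
# aggregate(story) -> 1.0, i.e., the story leans toward "mostly false"
```

Because every approved outlet's verdict carries equal weight, no single outlet can unilaterally move the overall score, which is what lets the platform present the aggregate as a cross-partisan judgment.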
Misinformation is not a new phenomenon, and democracies should expect to face more frequent, influential, and sophisticated cases of disinformation. Open societies enable the free flow of information regardless of whether it is true, false, or malicious. Therefore, misinformation should not be viewed as a disease to be cured but a multifaceted condition with innumerable methods of treatment. Each civil-society group described above works to address a specific component of misinformation. Their approaches have individual limitations, but in the aggregate, these disparate civil-society groups offer a more comprehensive way of fighting misinformation. Rather than searching for a sweeping top-down solution, an effective strategy may involve identifying and supporting a diverse field of organizations that are committed to addressing this issue.
Raina Davis is a master in public policy student at the John F. Kennedy School of Government at Harvard University and a Belfer International and Global Affairs student fellow. Prior to attending HKS, she worked as a research coordinator at Columbia University in the Office of Global Centers and Global Development, where her research focused on education, democratization, and geopolitics in the Arab world.
Bo Julie Crowley is a master in public policy student at the John F. Kennedy School of Government at Harvard University and a Belfer International and Global Affairs student fellow. She previously worked as a cybersecurity consultant for PricewaterhouseCoopers, where she led cyber strategy projects for Fortune 50 technology and communications companies.
Casey Corcoran is a dual-degree master in public policy and Juris Doctorate student at the John F. Kennedy School of Government at Harvard University and Harvard Law School and a Belfer International and Global Affairs student fellow. Prior to enrolling at Harvard University, he served as a captain in the United States Army.
Edited by: Nusheen Ameenuddin
Photo by: NASA
Julie Posetti and Alice Matthews, A Short Guide to the History of ‘Fake News’ and Disinformation (International Center for Journalists, July 2018) [PDF file].
Author interview with a journalist who asked not to be identified by name.
The World Bank defines civil society as “a wide array of non-governmental and not-for-profit organizations that have a presence in public life.”
Author interview with a national security aide who asked not to be identified by name.