Wikipedia’s Balancing Act: A Tool for Collective Intelligence or Mass Surveillance?

In this paper, Liu explores how Collective Intelligence (CI) might suffer a "chilling effect," and perhaps an untimely death, when co-opted by mass surveillance mechanisms. Contributors to CI systems, like most people, desire privacy. Policies such as Wikipedia's public tracking of edit histories can feed into intelligence analyses conducted by federal agencies like the NSA, intruding on that privacy and thereby inhibiting participation.

WIKI POWER: Linus' Law holds that "having enough contributors will eventually replace existing inaccurate information with verifiable and higher-quality input." This is a powerful mechanism in an increasingly complex world, spanning vast areas of knowledge and events that people need information about. A centralized effort to collate all that information, do so reliably, and make it available at low cost is impossible.

WHY CONTRIBUTE: Wikipedia serves as a great example (emulated with great success as Intellipedia within the US Intelligence Community), where a diverse set of contributors come together to improve the quality and breadth of information available for communal use. Intrinsic motivation is a major factor: people contribute out of a sense of satisfaction in the work itself. Extrinsic motivation also plays a role: people may contribute to advance personal goals or to gain recognition from the community.

WHY IT MATTERS: Wikipedia is used as an academic resource, cited in judicial opinions, and used to combat disinformation online. Due to its high traffic volume, it is highly visible in front-page search engine results, amplifying its significance when people are hunting for information. It played a role in the 2012 anti-SOPA protests and the 2016 US Presidential election, and it continues to shape how participatory journalism develops worldwide. It has become both a source of information and an informal social network.

WHAT'S NEXT: Most research studies have focused on limited periods (e.g., immediately around the NSA PRISM disclosures) and a single language (English) when analyzing the "chilling effect" of surveillance, yet use those narrow findings to proclaim the death of CI on Wikipedia. Similar efforts to enable anonymized edits via Tor suffer from Wikipedia's shifting policies on whether such edits are permitted at all, with vandalism cited as the concern. Ultimately, longer-term, cross-cultural (including collectivist societies in East Asia), and multilingual studies are needed to show whether these effects are lasting.

A big determinant of the success of CI in such settings is whether online communities can adapt rapidly to unpredictable, complex future environments. As the author asks us to ponder, "[i]n a broader context, is the future of complex collective intelligence processes an open-source environment?" Expanding mass surveillance will keep pushing against that future unless we deliberately parry.

GO DEEPER: Some other research papers that dive into areas covered in this article: 

  1. Livingstone, R. M. (2010). Let’s leave the bias to the mainstream media: A Wikipedia community fighting for information neutrality. M/C Journal, 13(6).

  2. Konieczny, P. (2014). The day Wikipedia stood still: Wikipedia’s editors’ participation in the 2012 anti-SOPA protests as a case study of online organization empowering international and national political opportunity structures. Current Sociology, 62(7), 994-1016.

  3. Werbin, K. C. (2011). Spookipedia: intelligence, social media and biopolitics. Media, Culture & Society, 33(8), 1254-1265.

  4. Selwyn, N., & Gorard, S. (2016). Students' use of Wikipedia as an academic resource—Patterns of use and perceptions of usefulness. The Internet and Higher Education, 28, 28-34.

  5. Lih, A. (2004). Wikipedia as participatory journalism: Reliable sources? Metrics for evaluating collaborative media as a news resource. In Proceedings of the 5th International Symposium on Online Journalism, Austin, TX.

Abhishek Gupta

Founder and Principal Researcher, Montreal AI Ethics Institute

Director, Responsible AI, Boston Consulting Group (BCG)

Fellow, Augmented Collective Intelligence, BCG Henderson Institute

Chair, Standards Working Group, Green Software Foundation

Author, AI Ethics Brief and State of AI Ethics Report

https://www.linkedin.com/in/abhishekguptamcgill/