Banning ChatGPT Won’t Work Forever

 
Banning a tool reduces its use but won’t stop it entirely; designing space for safe experimentation is better.
 

CHAT CANCELLED: JP Morgan Chase (JPMC) and Verizon are making waves in the Telegraph and the WSJ for restricting employees’ use of ChatGPT. Company representatives have yet to say whether the ban is permanent or a pause while they develop a more thoughtful strategy. ChatGPT can be a powerful assistant for white-collar workers, but it can also generate misinformation, commonly called the “hallucination problem.” Recently, ChatGPT errors have embarrassed publishers and tech giants alike. Bans generally don’t prevent use, but they do reduce it.

  • Is JPMC being reactionary or prudent? Only time will tell.

  • In the meantime, let’s explore alternatives to a total ban that still treat ChatGPT with care.

  • Also - don’t use ChatGPT to write a consolation note to students after a mass shooting.


NO POLICY = BAD POLICY: When a powerful new technology comes to town, the worst thing an organization can do is ignore it. The second worst is blindly embracing it and being surprised by the consequences. So what does a well-designed policy response to ChatGPT look like? 

  • A good policy has clear GO and NO-GO guidance. 

  • And it’s backed up by a multi-disciplinary committee to review use cases.

  • The committee should also be agile enough to adjust policy as the ecosystem’s technical, political, and legal landscape changes.
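To make the idea concrete, GO/NO-GO guidance can even be made machine-readable so tooling and training materials stay in sync with the committee’s decisions. Here is a minimal, hypothetical sketch; the use-case categories and verdicts are illustrative, not any actual company’s policy:

```python
# Hypothetical machine-readable GO/NO-GO policy. Categories and verdicts
# are illustrative examples, not a real organization's rules.
POLICY = {
    "brainstorming internal ideas": "GO",
    "summarizing public documents": "GO",
    "drafting client communications": "REVIEW",      # route to the committee
    "pasting customer data into prompts": "NO-GO",
    "generating code for production systems": "REVIEW",
}

def check(use_case: str) -> str:
    """Return GO / NO-GO / REVIEW; unknown use cases default to review."""
    return POLICY.get(use_case.strip().lower(), "REVIEW")
```

Defaulting unknown use cases to REVIEW rather than NO-GO keeps the committee in the loop without pushing experimentation underground.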

GRAB THE MEGAPHONE: Building policy is hard. Ensuring everyone learns and follows the policy at all levels and locations is even harder. Leaders must establish a process for communicating the “go/no-go” policy and its updates to the organization. During COVID-19, we saw how poorly communicated updates to guidance from health authorities eroded trust in the guidance mechanism itself.

  • The message must come from the top and be reinforced by department heads and managers.

  • Don’t underestimate the difficulty of communication. Attention is scattered and hard to grab.

SOME FUN AND GAMES: Responsible use policies don’t need to dampen excitement or innovation. Far from it! They encourage bold innovation within well-established guardrails, lending confidence to teams that would otherwise tinker with this technology hesitantly and furtively.

  • Poorly designed and implemented usage policies lead to shadow development and deployment (unaccounted and unreported experimentation), which poses even greater risk.

  • Organizations can design sandboxes for teams to experiment with ChatGPT and its successors. 

  • A sandbox must include infrastructure to monitor what goes on inside it, such as a “front door” that gates access by intermediating API requests and sanitizing sensitive information, for example with Named Entity Recognition (NER).
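The “front door” above can be sketched in a few lines. This is a hypothetical illustration: the regex patterns stand in for a proper NER model (such as spaCy or Microsoft Presidio, which a production gateway would use instead), and `send` stands in for whatever client actually calls the external API:

```python
import re

# Illustrative patterns only -- a production front door would use a real
# NER model rather than regexes to detect sensitive entities.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def sanitize(prompt: str) -> str:
    """Replace detected sensitive entities with typed placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

def gated_request(prompt: str, send) -> str:
    """Sanitize every outbound prompt before forwarding it to the API.

    `send` is a hypothetical callable wrapping the real API client; a full
    gateway would also log (user, timestamp, sanitized prompt) here.
    """
    return send(sanitize(prompt))
```

Because every request passes through one chokepoint, the organization gets both sanitization and an audit trail of sandbox activity for the review committee.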


WHAT’S NEXT: The next hurdle to clear in Generative AI pilot purgatory will be addressing copyright infringement, hallucinations, data confidentiality, and output quality concerns. Banning use in the interim is only a stop-gap; exuberance is too high for a prohibition on experimentation to hold. Allowing meaningful experimentation can benefit organizations and facilitate the massive cooperation needed to manage powerful AI systems.

GO DEEPER: Here are some more resources that can help you in your journey to create a safe environment to explore Generative AI tools: 

  • Ringe, W. G., & Ruof, C. (2020). Regulating Fintech in the EU: The Case for a Guided Sandbox. European Journal of Risk Regulation, 11(3), 604-629.

  • Miltiadou, D., Pitsios, S., Spyropoulos, D., Alexandrou, D., Lampathaki, F., Messina, D., & Perakis, K. (2021, January). A Secure Experimentation Sandbox for the design and execution of trusted and secure analytics in the aviation domain. In Security and Privacy in New Computing Environments: Third EAI International Conference, SPNCE 2020, Lyngby, Denmark, August 6-7, 2020, Proceedings (pp. 120-134). Cham: Springer International Publishing.

  • Engler, A. (2023, February 21). Early thoughts on regulating generative AI like ChatGPT. Brookings. Retrieved February 26, 2023, from https://www.brookings.edu/blog/techtank/2023/02/21/early-thoughts-on-regulating-generative-ai-like-chatgpt/  

  • Karnofsky, H. (2022, December 22). Racing through a minefield: The AI deployment problem. Cold Takes. Retrieved February 26, 2023, from https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/#global-monitoring 

Abhishek Gupta and Emily Dardaman

Fellow, Augmented Collective Intelligence, BCG Henderson Institute

https://www.linkedin.com/in/abhishekguptamcgill/