Instagram effect in ChatGPT

We only see the final, picture-perfect outputs from ChatGPT (and other Generative AI systems), which skews our understanding of their real capabilities and limitations.

Those shareable trophies of taming ChatGPT into producing what you want take a lot of work: tinkering, rejected drafts, invocations of the right spells (we mean prompts!), and lessons gleaned from Twitter threads and Reddit forums. But those early efforts remain hidden, a kind of survivorship bias, and we are lulled into a false sense of confidence that these systems are all-powerful.

SHOW YOUR WORK: In How To Decide, Annie Duke highlights outcome bias as a problematic approach to decision-making: judging a decision by its outcome rather than by the rigor of the process behind it sets us up for future failures, when that same process eventually produces a bad outcome. Translating this to the world of Generative AI, spotlighting the journey towards good (and bad) outcomes can help develop discipline and rigor, rather than casting a spell (read: crafting a prompt) and hoping for a magical outcome.
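To make this concrete, here is a minimal sketch of what showing your work could look like in practice: logging every prompt attempt with its output and a verdict, so the process is auditable rather than just the final result. This is an illustration, not a prescription; the generate function below is a hypothetical stand-in for whichever model API you actually call, and the JSONL log format is an assumption.

import json
import time

# A minimal sketch of "showing your work" when prompting: record every
# attempt (prompt, output, verdict), not just the final winner, so the
# decision process stays visible alongside the outcome.
# NOTE: `generate` is a hypothetical stand-in for a real model API call.

def generate(prompt: str) -> str:
    """Placeholder for a real text-generation call."""
    return f"[model output for: {prompt!r}]"

def run_experiment(prompts, log_path="prompt_log.jsonl"):
    """Run a series of prompt attempts and append each to a JSONL log."""
    with open(log_path, "a") as log:
        for attempt, prompt in enumerate(prompts, start=1):
            record = {
                "timestamp": time.time(),
                "attempt": attempt,
                "prompt": prompt,
                "output": generate(prompt),
                "verdict": None,  # fill in after review: kept, rejected, and why
            }
            log.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    run_experiment([
        "Summarize outcome bias in one sentence.",
        "Summarize outcome bias in one sentence, for a business audience.",
    ])

Reviewing such a log over time reveals which prompting patterns reliably earn their keep, rather than which ones merely got lucky.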

WHY IT MATTERS: Foundation models form the underpinning technology for systems like ChatGPT and Stable Diffusion and are readily accessible to anyone with an internet connection. Building a defensible moat around your work with Generative AI therefore requires doing something others aren't doing now and can't easily replicate. Truly taming these systems to consistently produce valuable outputs with minimal wasted effort (and dollars, for example when querying Midjourney to generate images) means putting in the repetitions and developing muscle for the process, rather than amassing a portfolio of outputs that emerged from lucky interactions with a stochastic system.

EARLY DAYS: Reviewing the outputs from these systems and peering into the opaque process that generated them isn't easy. It resembles copyediting and editorial review of written works: developing a shared mental model with an author to understand what they are trying to convey and how, and helping them articulate why it matters to their audience, takes years of practice to do well. Unpacking and disaggregating the arguments in a submitted piece helps an author think through how to communicate more effectively with their readers. We can start to do the same with Generative AI systems, applying an editorial review process to deepen our understanding of the process that leads to the outputs rather than just being enamored of the outputs themselves.

FALSE DEMOCRATIZATION: In Generative AI, those who generate consistent wins will have a rigorous, disciplined process bolstering their experiments, ultimately creating long-term defensible businesses and approaches that won't be blown to bits when the next upgrade rolls onto the Generative AI landscape. That process will have to keep pace with evolving capabilities, requiring a deeper understanding of where these systems excel in the value chain and which human complements are needed to work well with them. While access to these systems has been democratized, results will be asymmetric, biased towards those who are better at prompting and have the financial resources to experiment for longer.

WHAT COMES NEXT: The world is eagerly holding its breath for the release of GPT-4 (an upgrade from the GPT-3.5 that currently powers ChatGPT), but it will not be magic. Browsing Instagram and being dazzled by the picture-perfect images and lives of the Internet-popular can lead to infeasible aspirations (and mental health issues); in reality, those lives are a series of failed attempts until the perfect picture emerges. A new type of professional will see their star rise: someone who can marry the creative art of coaxing Generative AI systems into producing desired outputs with the disciplined science of getting there in the fewest rounds of iteration.

GO DEEPER: Here are a few resources that expand on the ideas covered in this article:

  1. Jonker, C. M., Van Riemsdijk, M. B., & Vermeulen, B. (2011). Shared mental models: A conceptual analysis. In Coordination, Organizations, Institutions, and Norms in Agent Systems VI: COIN 2010 International Workshops, COIN@AAMAS 2010, Toronto, Canada, May 2010, COIN@MALLOW 2010, Lyon, France, August 2010, Revised Selected Papers (pp. 132-151). Springer Berlin Heidelberg.

  2. Sezer, O., Zhang, T., Gino, F., & Bazerman, M. H. (2016). Overcoming the outcome bias: Making intentions matter. Organizational Behavior and Human Decision Processes, 137, 13-26.

  3. Oppenlaender, J. (2022). A Taxonomy of Prompt Modifiers for Text-to-Image Generation. arXiv preprint arXiv:2204.13988.

Abhishek Gupta and Emily Dardaman

Fellow, Augmented Collective Intelligence, BCG Henderson Institute

https://www.linkedin.com/in/abhishekguptamcgill/