Escaping pilot purgatory in Generative AI

 
Don’t get trapped in the frothy piloting phase of Generative AI without a clear exit that turns experiments into something tangible. The winners from these investments will deliberately escape pilot purgatory through a disciplined approach.
 

PILOT WHAT?: McKinsey describes pilot purgatory as “pilot programs [that travel] at a snail’s pace – if at all.” The firm explored it in the context of the heady 2017-18 days of Internet of Things (IoT) projects, where fewer than 30% of pilots went to scale. That result echoes an MIT SMR-BCG study finding that only 10% of companies gained financial benefits from AI. We can apply those lessons to the explosion of interest and investment in Generative AI so that we don’t waste dollars and effort on things that end up in purgatory with no real-world application or returns.

WHY IT MATTERS: Pilot purgatory drains capital, time, attention, resources, and motivation. And the last element is the most important. Your best and brightest employees are excited about Generative AI, and rightfully so! But, if their efforts languish between an idea and an MVP, with no hope of seeing the light of day, they will disengage. Or worse, they will leave for a competitor that allows them to move quickly from concept to execution at scale. Talent that combines strong technical fundamentals with execution experience is highly valued and is fueling challengers like Anthropic, Cohere, OpenAI, and Character.AI.


CLARIFY GOALS: Early experimentation is about pushing boundaries and exploring the adjacent possible. Yet it doesn’t hurt to have a little discipline. Experimentation has, at its core, assumptions that we’re trying to validate. For example: “Generative AI can produce compelling, hyper-personalized marketing copy in a fraction of the time it takes my design team to do the same.” Several assumptions underpin this hypothesis: you can access informative customer data, metrics on successful campaigns, image assets that respect copyright, and a few others. Making these assumptions explicit and validating them as part of the experimentation cycle charts a firm course toward a goal rather than a haphazard stroll.
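One lightweight way to operationalize this is to encode the assumptions as explicit checks that gate the next phase of the pilot. A minimal Python sketch; the assumption names and pass/fail values below are purely illustrative, not taken from any real pilot:

```python
# Minimal sketch: make a pilot's assumptions explicit and check them
# before investing in the build. All assumptions below are illustrative.

def validate_assumptions(assumptions):
    """Run each named check and report which assumptions hold."""
    results = {name: check() for name, check in assumptions.items()}
    failed = [name for name, ok in results.items() if not ok]
    return results, failed

# Hypothesis: Generative AI can produce hyper-personalized marketing
# copy faster than the design team. Underpinning assumptions:
assumptions = {
    "customer_data_accessible": lambda: True,   # e.g., CRM export exists
    "campaign_metrics_available": lambda: True, # e.g., CTR history in warehouse
    "image_assets_cleared": lambda: False,      # e.g., copyright review pending
}

results, failed = validate_assumptions(assumptions)
if failed:
    print(f"Do not proceed yet; unvalidated assumptions: {failed}")
```

In practice each lambda would query a real system (the CRM, the metrics warehouse, the legal tracker); the point is that the pilot advances only when every assumption passes, not on enthusiasm alone.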

GO FASTER: Connecting internally and externally with partners who have complementary skills and prebuilt capabilities can prevent building everything from scratch. Instead of training your own LLM, use off-the-shelf models such as text-davinci-003 from OpenAI, or fine-tune models from Hugging Face. Or solidify your use case and tinker with enterprise, business-oriented LLM offerings like the ones from Cohere. You don’t need to (and probably should not) go it alone. Bringing others along from within the organization will help accelerate the exit from the pilot phase, particularly when you need sign-off to deploy and scale.
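A thin interface between your use case and the model provider keeps the pilot free to swap between a hosted off-the-shelf model and a fine-tuned one as needs change. A minimal sketch; the class names are hypothetical and the generators are stubs standing in for real SDK calls:

```python
# Sketch: decouple the pilot's use case from any one model provider,
# so an off-the-shelf API can later be swapped for a fine-tuned model.
# The provider classes are stubs; real SDK calls would replace them.

class CopyGenerator:
    """Common interface the pilot codes against."""
    def generate(self, prompt: str) -> str:
        raise NotImplementedError

class HostedAPIGenerator(CopyGenerator):
    """Stub standing in for a hosted model (e.g., OpenAI or Cohere)."""
    def generate(self, prompt: str) -> str:
        return f"[hosted-model completion for: {prompt}]"

class FineTunedGenerator(CopyGenerator):
    """Stub standing in for a fine-tuned open model (e.g., Hugging Face)."""
    def generate(self, prompt: str) -> str:
        return f"[fine-tuned completion for: {prompt}]"

def draft_ad_copy(generator: CopyGenerator, customer_segment: str) -> str:
    """The pilot's actual use case, unaware of which backend runs it."""
    prompt = f"Write a short ad for the {customer_segment} segment."
    return generator.generate(prompt)

print(draft_ad_copy(HostedAPIGenerator(), "frequent travelers"))
```

The design choice matters for exiting purgatory: when sign-off time comes, switching from an experimental backend to a production-approved one is a one-line change, not a rewrite.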

MAKE LARGE IMPACT: Picking the right spot (functional area or business unit) within the organization can significantly affect whether you move out of pilot purgatory. Plug into mature technical infrastructure to leverage existing capabilities, and create products with evident, measurable outcomes. For example, building semantic search on top of your organization’s knowledge repository might seem a laudable use case: it leans into the existing infrastructure of structured knowledge repositories. But it is an internal-facing project with difficult-to-measure outcomes. Instead, choose a core product functionality, e.g., the quality of ads delivered in context, and track whether it improves click-through rates and revenue. You could enhance each ad by generating personalized imagery and text that connect better with the recipient. This approach sets a clear pathway toward scaling the solution, with attention from senior executives who can sign off to get the pilot out into the world.

ITERATE QUICKLY: Starting simple is an essential ingredient of success. Don’t try to boil the ocean. For example, in improving ad quality, begin with one facet of the ads: personalized images. A/B test to see how much more effective they become with the introduction of Generative AI. Once that pipeline is solidified, apply the same approach to personalized text in the ads. Rinse and repeat, building on previous successes until you have a market-ready version.
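For the click-through-rate example, the A/B comparison can be as simple as a two-proportion z-test on the two variants. A minimal sketch with made-up counts, not real campaign data:

```python
# Sketch: A/B test whether Generative-AI-personalized images lift
# click-through rate, via a two-proportion z-test. Counts are made up.
from math import sqrt, erf

def ab_test_ctr(clicks_a, views_a, clicks_b, views_b):
    """Return both CTRs and a one-sided p-value that variant B beats A."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # one-sided normal tail
    return p_a, p_b, p_value

# Control A: stock imagery; Variant B: generated personalized imagery.
p_a, p_b, p = ab_test_ctr(clicks_a=120, views_a=10_000,
                          clicks_b=165, views_b=10_000)
print(f"CTR A={p_a:.3%}, CTR B={p_b:.3%}, p={p:.4f}")
```

Variant B graduates to the next iteration (personalized text) only if the lift clears your chosen significance threshold; otherwise the pipeline gets reworked before anything else is layered on.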

AVOID PITFALLS: The lack of senior executive support is a frequent and sizeable stumbling block. It can be exciting to shroud Generative AI experiments in a skunkworks approach. But thrashing early by inviting in senior executives, taking in their feedback, and setting clear release goals will increase the likelihood that your pilot becomes a reality. If you don’t have that support, go out and secure it before investing too much effort.   

PILOT WITH CONFIDENCE: You need to be careful with Generative AI, especially in the face of active lawsuits and issues with output quality, copyright infringement, and leakage of confidential information. These issues aren’t unresolvable, but we haven’t resolved them yet. Companies are hesitant to let their employees use Generative AI because of them; witness the stern warning from Amazon to its employees. Adopting Responsible AI guidelines tailored for Generative AI can help allay some concerns. In particular, they highlight areas to avoid at all costs and mitigations for issues with tractable solutions. At the very least, they ensure an informed and conscious approach toward piloting and scaling solutions built with Generative AI. Failing to do so can lead to wasted effort when it comes time to exit pilot mode.


WHAT’S NEXT: Don’t linger in ambiguity without a clear strategy to scale. Pilots offer the promise of unfettered experimentation and vistas of possibilities. But they are only meaningful if you have a clear opportunity to transform the tinkering into business value, a strong determinant of whether it makes it into the real world.

GO DEEPER: Innovation is as much an art as science.

  1. Ridley, M. (2020). How innovation works: And why it flourishes in freedom. New York: Harper.

  2. Generative AI - A Creative New World by Sequoia Capital

  3. Who owns the Generative AI platform? by Andreessen Horowitz

Abhishek Gupta

Founder and Principal Researcher, Montreal AI Ethics Institute

Director, Responsible AI, Boston Consulting Group (BCG)

Fellow, Augmented Collective Intelligence, BCG Henderson Institute

Chair, Standards Working Group, Green Software Foundation

Author, AI Ethics Brief and State of AI Ethics Report

https://www.linkedin.com/in/abhishekguptamcgill/