Is Your GenAI Strategy Fit for Purpose?
- Matt McCulloch
- Feb 12
- 3 min read
The use of Generative Artificial Intelligence (GenAI) is exploding across industries; it seems to be embedded into as many companies and products as possible. However, as Baier et al. argue, before greenlighting this new technology, one must carefully consider how existing structures and processes may need to be revised to leverage GenAI optimally. In other words, GenAI should be implemented in a way that accounts for its value and purpose.
For example, consider using GenAI to generate internal reports for a team at a large tech company, similar to the example Baier et al. describe within their own firm. Imagine moving report generation from a human to GenAI without changing responsibilities or expectations, so the human is no longer responsible for adding context or curating the report. The GenAI-generated report may fail to communicate the same key takeaways or visualize the data in the way the human would have found most appropriate.
In contrast, a more intentional approach has several benefits. The “Design for Dialogue” method emphasizes redesigning processes to incorporate GenAI with human-computer interaction at center stage (a concept similar to Friedman’s fundamental theorem from the informatics world). In this method, humans set out to understand the tasks and workflow points where GenAI could be most effective at supporting the human’s needs, and implement it accordingly, rather than strictly replacing the human. This approach reaps efficiency benefits: as GenAI becomes pervasive throughout industries, those who implement it thoughtfully will likely achieve improved organizational efficiency and capabilities, two of the few remaining ways to maintain a competitive advantage. In addition, careful “task analysis” ensures that tasks requiring human expertise remain in human hands, allowing more focus on developing the “expert role” while offloading tasks that do not contribute to that end.

When GenAI is not fit for purpose in this manner, there are several consequences. Without human interaction in report generation, GenAI may omit important data, include irrelevant data, or fail to visualize data in a way that communicates key takeaways. Ultimately, using GenAI this way decreases workplace efficiency and fails to capture the benefits of what is likely an expensive and resource-intensive tool. Moreover, redesigning processes to be handled by either a human or a technology alone, without considering the possibilities of their interaction, leads to a stagnant tool. A human using GenAI to help create a report can gain insights from the interaction to improve the next round, whereas a human handed a report spit out by GenAI cannot iterate. Instilling feedback and continuous-improvement mechanisms to evaluate new organizational designs is critical to ensuring the intended impact is delivered.
As I've said before, AI is now seen as a ubiquitously superior hammer, and consequently every healthcare problem looks like a nail. Shoving an LLM into various healthcare products and workflows is not inherently superior to the human alternative: the optimal fix likely involves careful consideration of how the human and the tool work together.
References
Baier P, DeLallo D & Sviokla JJ. ‘Your organization isn’t designed to work with GenAI.’ Harvard Business Review, 26 Feb 2024.
Frost A & Purdy L. ‘Note on organizational structure and design.’ Ivey Publishing, Richard Ivey School of Business Foundation, Nov 2017.
Beeson J. ‘Five questions every leader should ask about organizational design.’ Harvard Business Review, Jan 2014.
Johnson K, Wei WQ, Weeraratne D, Frisse M, Misulis K, Rhee K, Zhao J & Snowdon J. ‘Precision Medicine, AI, and the Future of Personalized Health Care.’ Clinical and Translational Science, 2020;14. doi:10.1111/cts.12884.