TLDR
- Junior employees are not as good at generative AI as you’d think, so having them coach more senior colleagues is less effective and may even introduce new risks, according to a new study from researchers at MIT, Harvard, Wharton, and the University of Warwick, together with BCG.
- Generative AI is advancing so rapidly, and its capabilities are so broad, that non-technical younger employees don’t have the training or experience to be experts at it.
- The study’s findings debunk the assumption of a junior-to-senior knowledge transfer that has characterized past technological advances for decades.
Junior employees are often tapped to coach their more senior colleagues about new technologies, a knowledge transfer that has been largely the norm in corporate America for decades.
But a recent study from MIT, Harvard, Wharton, and the University of Warwick, in collaboration with the Boston Consulting Group, is debunking this long-held view. It turns out that when it comes to emerging technologies, junior employees may not be the best teachers of their more senior colleagues.
Unlike past technological breakthroughs, generative AI is advancing so rapidly that young employees lack the training and experience to coach seniors well; they do not yet have expert knowledge themselves. Worse, the solutions they offer often don’t work and may even introduce additional risks.
The study’s results came as a surprise to the authors.
“We expected that juniors would be a great source of expertise for senior professionals trying to learn to effectively use generative AI,” said Katherine Kellogg, the study’s lead author and a business administration professor at MIT’s Sloan School of Management, in an interview with The AI Innovator. The other co-authors are Hila Lifshitz, Steven Randazzo, Ethan Mollick, Fabrizio Dell’Acqua, Edward McFowland III, Francois Candelon and Karim Lakhani.
“Senior professionals often learn from junior professionals how to effectively use new technologies, because junior professionals are typically more willing to perform lower level tasks to learn new skills,” she said. Younger employees also don’t risk losing their mandate to lead if they show their lack of expertise – all while being more willing to learn new things outside current systems and practices.
However, generative AI’s broad capabilities and exponential pace of development are limiting the ability of junior employees to coach senior employees, since they can’t keep up themselves. As such, she said, junior professionals are likely to engage in “novice risk work” tactics grounded in a superficial understanding of generative AI’s capabilities, offering only surface-level solutions.
For example, to address the problem of generative AI models like ChatGPT making up responses – also called hallucinations – younger workers in the study recommended that seniors standardize prompts in some way. However, using the same prompt does not guarantee consistency: generative models sample their output, so the same query can yield different answers on different runs, the same randomness that enables them to be wildly creative.
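To see why, here is a minimal sketch that sends an identical prompt to a model several times; it assumes the OpenAI Python SDK and an API key in the environment, and the model name and prompt are illustrative, not taken from the study.

```python
# Minimal sketch: the same prompt, sent several times, can yield different
# answers because the model samples its output tokens. Assumes the OpenAI
# Python SDK (`pip install openai`) with OPENAI_API_KEY set; the model name
# and prompt are illustrative.
from openai import OpenAI

client = OpenAI()
prompt = "Name one risk of using generative AI in consulting."

for i in range(3):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sampling temperature; higher = more varied output
    )
    print(f"Run {i + 1}: {response.choices[0].message.content}")
```

Lowering the temperature makes outputs more repeatable, but it does not stop the model from confidently making things up, which is why standardizing prompts alone falls short.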
In contrast, generative AI experts recommended starkly different solutions, such as finding use cases where error risks are acceptable, according to the study. Experts also suggested independently testing generative AI’s reliability in executing subtasks and creating evaluations for each subtask.
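As a rough illustration of that expert approach, the sketch below scores a model on each subtask separately; the subtasks, test cases, and substring grading are illustrative assumptions, not the study’s actual evaluations.

```python
# Minimal sketch of per-subtask reliability testing: run each subtask's test
# cases through the model and report a pass rate per subtask. The subtasks,
# test cases, and substring-match grading are illustrative; real evaluations
# would use stricter graders and far more cases.
from openai import OpenAI

client = OpenAI()

def ask_model(prompt: str) -> str:
    """Send one prompt to the model; the model name is illustrative."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce randomness for more repeatable scoring
    )
    return response.choices[0].message.content

SUBTASK_EVALS = {
    "extract_deadline": [
        ("What is the payment deadline in: 'Payment is due within 30 days.'? "
         "Answer briefly.", "30 days"),
    ],
    "classify_sentiment": [
        ("Is this review positive or negative? 'Great product, works "
         "perfectly.' Answer in one word.", "positive"),
    ],
}

for subtask, cases in SUBTASK_EVALS.items():
    passed = sum(expected.lower() in ask_model(prompt).lower()
                 for prompt, expected in cases)
    print(f"{subtask}: {passed}/{len(cases)} cases passed")
```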
The younger workers’ solutions reveal a superficial understanding of generative AI and how it works: they recommended changes at the level of human tasks instead of at the systemic level, such as in the layers of the neural networks themselves, which would be more effective.
However, Kellogg said one cannot blame the junior employees for their lack of expertise. She said that since “junior professionals had just gained access to an emerging technology that had a high level of uncertainty in its use, because it had wide-ranging capabilities and was exponentially changing, experts will be better positioned to control novel risks than junior professionals.”
Still, the risk is present because “junior professionals may play an outsized role in identifying and controlling risk around uncertain emerging technologies like GenAI in organizations. This is because senior professionals often look to junior professionals to upskill senior professionals in the use of emerging technologies,” Kellogg said.
The study is timely because senior leaders at organizations will increasingly face situations in which they have to deal with emerging technologies, the authors wrote.
For the study, the authors interviewed 78 junior consultants at the Boston Consulting Group from July to August 2023. They were asked to solve a business problem using GPT-4, the large language model underpinning ChatGPT. These junior consultants were not technical experts.
What companies can do instead
If juniors can’t be relied on to coach more senior people on generative AI, what should companies do? Kellogg recommends the following actions:
- Companies can provide employees with training and awareness programs that teach how the emerging technology works, what its specific risks are, and how end users can help control those risks.
For example, leaders can train professionals how to craft effective prompts, how to interpret the generated outputs, and how to cross-reference outputs using reliable sources, their own domain expertise and knowledge of firm values.
- Leaders can give employees access to experts in emerging technologies who can address their questions, provide guidance, and offer best practices for using these technologies.
- Leaders can form a steering group that makes decisions on managing the risks associated with emerging technologies. For example, the group would be responsible for determining acceptable use cases for generative AI and adjusting them as necessary as the AI learns and as new models and tools emerge.
- On a systemic level, leaders can identify specific harms such as inaccuracies and change the system design to address these harms. For example, to curb hallucinations, they can implement automatic monitoring with a second system that provides links to sources. They can also use techniques like retrieval-augmented generation (RAG) to restrict models to pulling from a smaller, curated dataset (a minimal sketch of the retrieval step appears after this list).
- Leaders can also make changes at the firm and ecosystem levels:
  - Communicate to users the intended conditions under which GenAI can be used reliably.
  - Provide co-audit tools.
  - Create a prompt library.
  - Continually assess the alignment of LLMs against evaluation metrics.
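As referenced in the systemic-level recommendation above, the following is a minimal sketch of the retrieval step behind RAG; the tiny corpus, word-overlap scoring, and prompt format are illustrative assumptions, and production systems typically retrieve with embeddings rather than word overlap.

```python
# Minimal RAG sketch: retrieve the most relevant passage from a small, curated
# corpus and instruct the model to answer only from it, citing the source.
# The corpus, word-overlap scoring, and prompt format are all illustrative.

CORPUS = {
    "policy.md": "Client data may not be uploaded to external AI tools.",
    "faq.md": "GenAI outputs must be reviewed by a senior consultant.",
}

def retrieve(query: str) -> tuple[str, str]:
    """Pick the document sharing the most words with the query (naive)."""
    query_words = set(query.lower().split())
    def overlap(name: str) -> int:
        return len(query_words & set(CORPUS[name].lower().split()))
    source = max(CORPUS, key=overlap)
    return source, CORPUS[source]

def build_prompt(query: str) -> str:
    """Ground the model in the retrieved passage and cite the source."""
    source, passage = retrieve(query)
    return (
        "Answer using ONLY the passage below. If the answer is not in the "
        "passage, say you don't know.\n"
        f"Source: {source}\nPassage: {passage}\nQuestion: {query}"
    )

print(build_prompt("Can I upload client data to ChatGPT?"))
```

Because the model is told to answer only from the retrieved passage and to cite its source, wrong or unsupported answers become easier to spot, which is the systemic-level control the experts in the study pointed toward.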
Company leaders can also require their LLM vendors to “assess the representativeness, robustness, and quality of their data sources, and implement mechanisms that allow LLMs to continually learn from new data, in order to capture recent developments and trends,” Kellogg said.
They can also require LLM vendors to report on the provenance and curation of the training data, the model’s performance metrics, and any incidents and mitigation strategies concerning harmful content.