

April 03, 2024

Gen AI spurs a rethink of necessary workforce skills

Generative AI will boost the importance of critical thinking and meaningful human interaction—demanding a rethink of the workforce skills of the future.


We talk with our clients every day about the ramifications of generative AI. Many conversations focus on leveraging generative AI for productivity gains and—for those more confident in their AI abilities—using it for customer-facing interactions to drive sales and service.

However, there’s also an opportunity to examine its impact on people: the new skills they’ll need and the ways it will shape our interactions with each other. As our research with Oxford Economics shows, nearly all jobs will be affected by generative AI in 10 years’ time, and as many as 46% of businesses will adopt the technology in the next decade. The ramifications for the workforce—and for needed workforce skills—will be enormous.

Consider that in the most mainstream use case, generative AI acts as an assistant—curating and analyzing a vast amount of knowledge to share insights and generate recommendations. An example is a sidekick to a call center agent. Typically, agents have to put clients on a lengthy hold while they search for information to answer a question; with generative AI serving up quick summaries and recommendations, that wait time is reduced.
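To make the "sidekick" pattern concrete, here is a minimal sketch of what an agent-assist call might look like. It assumes the OpenAI Python SDK; the model name, the prompt wording and the idea of passing knowledge-base excerpts as plain text are illustrative choices, not a description of any specific product.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize_for_agent(question: str, kb_excerpts: str) -> str:
    """Return a short, grounded summary the agent can relay while the caller waits."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any capable chat model works
        temperature=0.2,      # low temperature favors consistent, factual summaries
        messages=[
            {"role": "system",
             "content": ("You assist a call-center agent. Answer only from the provided "
                         "excerpts, flag anything you are unsure about, and stay under "
                         "100 words.")},
            {"role": "user",
             "content": f"Customer question: {question}\n\nKnowledge-base excerpts:\n{kb_excerpts}"},
        ],
    )
    return response.choices[0].message.content
```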

But, as we all have learned, generative AI is fallible. It can hallucinate, make up information, respond with bias and be tricked into sharing information it shouldn't. It requires a second set of eyes to elevate its thinking or guide it back on track, much like a new graduate making their first foray into the professional world. Or, as the Board of Innovation puts it, generative AI will turn many of us from creators into editors.

In this way, these AI tools will act as our automated digital interns, requiring humans—even those entry-level new grads—to become effective “editors” or managers.

To manage the shift, enterprises will need to prioritize change management and employee education far more than they have in the past. Here are a few of the human skills the workforce will need to hone as AI solutions become more pervasive across the digital ecosystem.

Critical thinking: a key workforce skill for prompt engineering

The rise of generative AI has caused a land rush of prompt-engineering tips and advice, typically comprising formulas or patterns of questions aimed at getting the best results from large language models. Engineering a prompt essentially means workers are learning to manage their AI interns (such as Microsoft 365 Copilot, Miro’s AI Assist and developer productivity tools). Doing so requires the same type of critical thinking and problem-solving skills managers use to effectively elevate work products.

To engage with AI, users need to break down a question, or prompt, into its component parts: the sources from which information might come, the lens or persona through which it should be interpreted, and the parameters that should shape the output.

Once a response is delivered, the user will need to craft additional prompts to refine or improve it. This kind of critical thinking is now table stakes, even for new graduates, who lack the experience to instinctively know when a response is good enough (i.e., "what good looks like").
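As an illustration only, that decomposition can be made explicit in code. The structure and field names below are hypothetical, not a prescribed formula; the point is that sources, persona and output parameters are deliberate choices, and that the first answer is a draft to be edited through follow-up prompts.

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    """Hypothetical breakdown of a prompt into the parts described above."""
    sources: str   # where the information should come from
    persona: str   # the lens through which it should be interpreted
    output: str    # parameters that shape the response

    def render(self, question: str) -> str:
        return (f"Act as {self.persona}. Use only these sources: {self.sources}. "
                f"{question} Format the answer as {self.output}.")

draft_prompt = Prompt(
    sources="our 2023 customer-satisfaction survey results",
    persona="an experienced service-operations analyst",
    output="three bullet points, each under 20 words",
).render("What are the top drivers of repeat calls?")

# The "editor" role begins once the first answer comes back: follow-up
# prompts narrow the scope, correct errors or ask for supporting evidence.
refinement_prompt = ("The second bullet is too generic. "
                     "Cite the specific survey question it is based on.")
```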

As we think about future skills and how to cultivate them, the following questions come to mind:

  • How will curricula evolve to build these types of AI management skills so that graduates can bring them to the workplace?

  • How can immersive experiences augment the current standard of testing to reinforce the new skills of the future, like critical thinking and applied problem-solving?

  • How can apprenticeship programs be structured to hone critical thinking—typically derived from experience—faster?

Communication: a valued skillset at risk

Repeated exposure to tools that require prompt engineering could have consequences. As we learn how to provide directives to our AI interns, we will build muscle memory for that kind of interaction—and unintentionally limit ourselves to command-and-response exchanges.

Without practice or exposure to other types of communication structures and tones, we risk losing the ability to relate, debate and converse with each other. Why does this matter? Because the brevity and impersonality of AI interactions will make it difficult to build the psychological safety and emotional trust critical to effective team collaboration. As Microsoft CEO Satya Nadella says, empathy, widely considered a "soft skill," is actually "the hardest skill to learn."

This raises questions for business leaders seeking to build generative AI capabilities into their organizations:

  • How can businesses re-establish social norms to avoid negative impacts on rapport and collaboration between team members?

  • Could responsible AI practices extend to how employees interface with AI tools? (This is especially interesting because some say they’ve gotten better results from OpenAI’s ChatGPT simply by being polite.)

  • When building new AI operating models, how can teams incorporate new learning objectives alongside their AI rollouts to proactively address impacts?

Decision-making: the nuances of implementing gen AI solutions

With businesses intent on driving efficiency through digital technology, it is often assumed that any task can be performed faster and better that way. At the same time, many business leaders we speak with are assessing which tasks are lower risk and more easily executed by AI, and which sit outside AI’s current capabilities—what Cognizant CEO Ravi Kumar refers to as the “jagged technological frontier.”

Similarly, the pressure of meeting productivity targets needs to be balanced against both the promise and limits of AI.


For instance, many creative or relationship-oriented roles require people to contextualize information to the specifics of their organization, or to understand cultural or regional nuances. Making those connections can take time. But as AI enablement becomes more pervasive, expectations for throughput will rise. With the push to deliver high-quality output quickly in pursuit of productivity goals, businesses risk underestimating the necessity of human input.

While organizations explore generative AI use cases, questions arise:

  • How can leaders set reasonable expectations for productivity when applying AI to complex tasks requiring human intervention?

  • How can organizations weigh near-term operational benefits (e.g., staff reduction) against the unknowns that come with experimenting with new technologies like generative AI?

  • How can leaders be trained to exercise patience in certain scenarios as part of their decision-making?

Moving forward

Generative AI will boost the importance of critical thinking and communication skills—demanding that we rethink workforce skills of the future.

As we continue to work with and build AI solutions, we encourage our clients to view the effort through a combination of human-centered design and traditional business rationale. We also advise them to ask questions similar to those outlined here.

While there are no simple answers, examining changes in this way will foster a broader appreciation of generative AI’s ripple effect—driving better leadership decisions as we plan for the needs of the future workforce, both AI and human.
 



Stephanie Wan

Head of Experience Strategy


Stephanie Wan is the Head of Experience Strategy at Cognizant and a Partner at Idea Couture. She helps clients craft next-gen experiences for growth by connecting digital strategy to delivery.

Stephanie.Wan@cognizant.com


