If you’ve been paying attention, a familiar story is already starting to settle in.
You’ve probably seen the headline: as generative AI reshapes how work gets done, women are falling behind. Adoption is lower. Engagement is lower. And if that gap holds, it compounds.
The data behind that claim is real. A recent global analysis* of more than 140,000 individuals across 18 studies found that women are roughly 20% less likely than men to use generative AI tools. The pattern shows up across countries, industries, and roles.
It’s a clean headline. It travels well. But like a lot of headlines, it holds up better at a distance.
When we analyzed Forté’s most recent MBA Outcomes data, a different pattern showed up.
In the context of job search, women weren’t using AI less. They were using it more.
When we asked MBA alumni how they were using AI in their job search, women reported engaging across more activities than men, including resume optimization, cover letters, interview preparation, and follow-up communication. Women weren’t just trying it out. They were using AI in moments that carry real stakes.
And the differences didn’t stop at how often AI was used. They showed up in how it was used.
Men were more likely to use AI to automate parts of the process, like streamlining applications, generating responses, and moving more quickly from one step to the next. Women were more likely to use AI to refine and strengthen what they had already written, tightening language, improving clarity, and making sure everything landed the way they intended.
Those broader studies are picking up something real. But they’re mostly measuring general use.
Our data is picking up something else entirely. And if you look more closely, that same pattern is already there in the broader research. Among more educated groups, the gap narrows, and in some cases disappears.
Those broader gaps do matter, particularly in contexts where access and familiarity are not evenly distributed. But they aren’t the whole story.
The pattern changes as soon as you stop reading the data through the headline.
This isn’t a story about who is or isn’t using AI. Both are. The difference shows up in how it’s being used.
One candidate uses AI to refine what they have already created, working through a resume line by line, adjusting language, tightening phrasing, and making sure each point lands the way they intend. The cover letter is tailored, the follow-up is thoughtful, and everything feels consistent from one interaction to the next.
Another uses AI to move more quickly through the process. Applications are generated, responses are drafted and adjusted, and the overall pace starts to pick up as more ground gets covered in less time.
Both are using AI well. They just leave different impressions.
One can feel more polished, deliberate, and highly tailored. The other can feel faster, more scalable, and built for momentum. Neither approach is inherently better. But they reflect different priorities in how people choose to work with the technology.
That difference is not accidental. McKinsey research on AI and work points to the same shift. As tools take on more of the execution, some work becomes faster and more scalable, while other work depends more heavily on judgment, framing, and interpretation. The distinction is no longer just whether people use AI, but what they choose to optimize for when they do.
That shift is easy to miss if the focus stays on whether someone is using the tool at all. Look past adoption, and the real differences surface: in some cases the emphasis is on precision and control, in others on speed and scale.
Over time, those choices start to paint a picture.
Most headlines and conversations talk about AI as if the question is who is using it. That is the easiest part to measure and the least interesting to understand.
The more telling difference is how it gets used, and the assumptions those choices can trigger about how someone is likely to operate at work.
That’s just not the part we usually measure.