Women Test LinkedIn's Algorithm by Changing Gender to Male in #WearthePants Experiment
By admin | Dec 13, 2025 | 6 min read
In November, a product strategist we'll call Michelle (not her real name) logged into her LinkedIn account and changed her displayed gender to male. She was participating in an experiment called #WearthePants, in which women tested the hypothesis that LinkedIn's updated algorithm might be biased against female users. For several months, some frequent LinkedIn users had reported noticeable declines in the engagement and reach of their posts on the career-focused platform. This followed a statement in August from the company's vice president of engineering, Tim Jurka, that the platform had "more recently" implemented large language models (LLMs) to help surface content useful to users.
Michelle noted that she and her husband typically receive a similar number of post impressions, despite her having a larger following. "The only significant variable was gender," she said. Another participant, founder Marilynn Joyner, also changed her profile gender. Having posted consistently on LinkedIn for two years, she observed a drop in her posts' visibility over the previous few months. Similar results were reported by Megan Cornish, Rosie Taylor, Jessica Doyle Mekkes, Abby Nydam, Felicity Menzies, Lucy Ferguson, and others.
LinkedIn has stated that its "algorithm and AI systems do not use demographic information such as age, race, or gender as a signal to determine the visibility of content, profile, or posts in the Feed." The company added that "a side-by-side snapshot of your own feed updates that are not perfectly representative, or equal in reach, do not automatically imply unfair treatment or bias" within the Feed. Experts in social algorithms agree that explicit sexism is likely not the cause, though implicit bias could be a factor.
As one expert noted, "The changing of one’s profile photo and name is just one such lever," explaining that the algorithm is also influenced by factors like how a user has historically interacted with content. "What we don’t know of is all the other levers that make this algorithm prioritize one person’s content over another. This is a more complicated problem than people assume," the expert said.
The #WearthePants experiment was initiated by two entrepreneurs, Cindy Gallop and Jane Evans. They asked two men to create and post identical content to their own, curious to see if gender was the reason many women were experiencing an engagement drop. Gallop and Evans have a combined following of over 150,000, compared to the two men, who had around 9,400 followers at the time. Gallop reported her post reached only 801 people, while the man posting the same content reached 10,408 people—more than 100% of his follower count.
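The reach gap described above can be put in rough quantitative terms. The Python sketch below computes reach as a percentage of follower count from the figures reported in the experiment; since only combined follower counts are given, these ratios are lower bounds on each individual's figure, and the helper name `reach_ratio` is purely illustrative.

```python
def reach_ratio(reach: int, followers: int) -> float:
    """Reach expressed as a percentage of follower count."""
    return 100 * reach / followers

# Gallop's post reached 801 people. Only the combined following of
# over 150,000 is reported, so dividing by it gives a lower bound
# on her individual reach ratio.
gallop = reach_ratio(801, 150_000)   # well under 1%

# The man posting identical content reached 10,408 people against
# the two men's combined ~9,400 followers -- over 100% even before
# accounting for his smaller individual following.
man = reach_ratio(10_408, 9_400)

print(f"Gallop: {gallop:.2f}% | Man: {man:.1f}%")
```

Even granting that combined counts overstate each person's individual following, the gap between the two ratios is two orders of magnitude, which is what made the result striking to participants.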
Other women subsequently joined the experiment. Some, like Joyner, who uses LinkedIn to market her business, grew concerned. "I’d really love to see LinkedIn take accountability for any bias that may exist within its algorithm," Joyner said. However, LinkedIn, similar to other LLM-dependent search and social media platforms, provides very few details on how its content-selection models were trained. An expert pointed out that most such platforms "innately have embedded a white, male, Western-centric viewpoint" due to the demographics of the people who trained the models.
Researchers have found evidence of human biases like sexism and racism in popular LLM models because they are trained on human-generated content, and humans are often directly involved in post-training or reinforcement learning. Still, the specific implementation of AI systems by any individual company remains largely hidden within the "black box" of proprietary algorithms.
LinkedIn has stated that the #WearthePants experiment could not demonstrate gender bias against women. Jurka's August statement—reiterated in a November post by LinkedIn's Head of Responsible AI and Governance, Sakshi Jain—said its systems do not use demographic information as a signal for visibility. LinkedIn has been recognized for researching and adjusting its algorithm to try to provide a less biased user experience.
According to experts, unknown variables likely explain why some women saw increased impressions after changing their profile gender to male. For instance, participating in a viral trend can lead to an engagement boost; some accounts were posting for the first time in a while, and the algorithm may have rewarded that activity. Tone and writing style might also play a role.
Michelle, for example, said that during the week she posted as "Michael," she slightly adjusted her tone to a simpler, more direct style, similar to how she writes for her husband. That week, she reported a 200% jump in impressions and a 27% rise in engagements. She concluded the system was not "explicitly sexist" but seemed to treat communication styles commonly associated with women as "a proxy for lower value."
Stereotypically male writing styles are often perceived as more concise, while styles associated with women are perceived as softer and more emotional. If an LLM is trained to boost writing that aligns with male stereotypes, that represents a subtle, implicit bias. As previously reported, researchers have determined that most LLMs are riddled with such biases.
Sarah Dean, an assistant professor of computer science at Cornell, noted that platforms like LinkedIn often use entire profiles, in addition to user behavior, when determining which content to promote. This includes the jobs listed on a user's profile and the types of content they typically engage with. "Someone’s demographics can affect 'both sides' of the algorithm - what they see and who sees what they post," Dean said.
A LinkedIn spokesperson stated, "We run ongoing tests to understand what helps people find the most relevant, timely content for their careers." The spokesperson added, "Member behavior also shapes the feed; what people click, save, and engage with changes daily, and what formats they like or don’t like. This behavior also naturally shapes what shows up in feeds alongside any updates from us."
Chad Johnson, a sales expert active on LinkedIn, described the algorithmic changes as deprioritizing likes, comments, and reposts. The LLM system "no longer cares how often you post or at what time of day," Johnson wrote in a post. "It cares whether your writing shows understanding, clarity, and value."
All of these factors make it difficult to pinpoint the true cause of any single #WearthePants result. What does seem clear is that many users, across genders, either dislike or do not understand LinkedIn's new algorithm, whatever its exact nature. Michelle noted that she and her husband are now lucky to see a few hundred impressions. "It’s demotivating for content creators with a large loyal following," she said. In contrast, another male user reported seeing his post impressions and reach increase by more than 100% over a similar period.
One expert, who is Black, believes her posts about her professional expertise perform more poorly than her posts related to race. "If Black women only get interactions when they talk about black women but not when they talk about their particular expertise, then that’s a bias," she said. Dean believes the algorithm may simply be amplifying "whatever signals there already are." It could be rewarding certain posts not because of the writer's demographics, but because there is a longer history of engagement with similar content across the platform.
While the expert may have encountered another area of implicit bias, her anecdotal evidence is not enough to determine that with certainty. LinkedIn offered some insights into what content currently performs well. The company said its user base has grown, leading to a 15% year-over-year increase in posting and a 24% year-over-year rise in comments. "This means more competition in the feed," the company noted. Posts about professional insights and career lessons, industry news and analysis, and educational or informative content about work, business, and the economy are all performing well.
If anything, people are primarily confused. "I want transparency," Michelle said. However, given that content-picking algorithms have always been closely guarded secrets by the companies that create them, and because transparency can lead to users gaming the system, that is a significant request—one unlikely ever to be fully satisfied.