Headstart, a platform that leverages data science to help companies reduce unconscious bias in the hiring process, has raised $7 million in a seed round of funding led by AI-focused Silicon Valley VC firm FoundersX, with participation from Founders Factory.
Founded out of London in 2017, Headstart is one of a growing number of startups that promise to help companies increase their diversity during recruitment drives. This is achieved by combining machine learning with myriad data sources to find the best candidates based on specific, objective criteria.
“The machine — the algorithms and models — does this without emotion, fatigue, or overt subjective, conscious or subconscious opinion or feeling,” Headstart cofounder and chairman Nicholas Shekerdemian told VentureBeat. “Unlike a human.”
In terms of the types of data its platform meshes, Headstart taps information from its client companies, including the job description and current employee data (such as CVs, education history, and psychometric results). This internal data is reviewed for built-in bias: if there is a clear leaning toward a specific demographic, that can be addressed in subsequent hiring campaigns. The Headstart platform also gathers and analyzes publicly available data from across the web, including job descriptions and roles, as well as demographic and social data such as school league tables and free school meals data.
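The kind of demographic skew check described above can be sketched in a few lines. This is a purely illustrative toy, not Headstart's actual implementation; the function name, the data shape, and the baseline proportions are all assumptions:

```python
from collections import Counter

def representation_skew(employees, attribute, baseline):
    """Compare each group's share of a workforce against a baseline
    (e.g. applicant-pool or census proportions). A negative value
    means the group is under-represented relative to that baseline."""
    counts = Counter(e[attribute] for e in employees)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in baseline.items()
    }

# Toy data: a team that leans heavily toward one demographic.
team = [{"gender": "male"}] * 8 + [{"gender": "female"}] * 2
print(representation_skew(team, "gender", {"male": 0.5, "female": 0.5}))
```

A real system would of course work with richer attributes and a defensible baseline, but the principle is the same: quantify the leaning before trying to correct for it in later campaigns.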
“We use this data to determine if any individual has had any obvious social disadvantage and has possibly outperformed their social norm group,” Shekerdemian added.
Then, of course, there is the all-important candidate data, which is garnered at the point candidates apply for an advertised position online. Headstart’s technology essentially sits behind the “apply” button on its clients’ digital properties, and it’s at this point that companies are given the best matches based on data gleaned from the applicant’s CV, psychometric assessments, and any other tools used throughout the screening process. “[This] allows us to evaluate each candidate algorithmically, with a 360-degree picture of their suitability, ensuring everyone has a fair experience,” Shekerdemian added.
The startup already claims some big-name clients, including financial services giant Lazard and consulting giant Accenture, the latter of which, Headstart said, saw a 5% increase in female hires and a 2.5% increase in Black and ethnic minority hires after using its platform.
It is worth noting that reducing bias is only part of the selling point. More broadly, the Headstart platform is designed to expedite the candidate screening process, ensure that each application is considered equally, and cut the time-to-hire by up to 70%.
Headstart can also give companies deep insights and analytics into their hiring practices, so they can measure existing biases and how these evolve over time, as well as establish at which stage of the interview process specific applicant types drop off.
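A stage-by-stage drop-off analysis of that sort reduces to a funnel calculation. The sketch below is hypothetical (the data shape and function are invented for illustration, not Headstart's API): for each interview stage and applicant group, it computes the share of candidates who advanced past that stage.

```python
from collections import defaultdict

def stage_pass_rates(applications):
    """For each (stage, group) pair, compute the fraction of candidates
    who advanced past that stage. A group whose pass rate at one stage
    sits far below its peers' flags a drop-off point worth auditing."""
    entered = defaultdict(int)
    advanced = defaultdict(int)
    for app in applications:
        stages = app["stages"]  # stages the candidate reached, in order
        for i, stage in enumerate(stages):
            key = (stage, app["group"])
            entered[key] += 1
            # Advanced if a later stage followed, or they were hired at the end.
            if i < len(stages) - 1 or app["hired"]:
                advanced[key] += 1
    return {k: advanced[k] / entered[k] for k in entered}

apps = [
    {"group": "A", "stages": ["screen", "interview"], "hired": True},
    {"group": "A", "stages": ["screen"], "hired": False},
    {"group": "B", "stages": ["screen"], "hired": False},
    {"group": "B", "stages": ["screen", "interview"], "hired": False},
]
print(stage_pass_rates(apps))
```

In this toy data both groups clear the screen at the same rate, but group B is eliminated at interview every time; that is exactly the kind of pattern such analytics would surface.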
Prior to this round, Headstart had raised $500,000, plus a further $120,000 as a graduate of Y Combinator. With another $7 million in the bank, it said it is now looking to expand internationally, an endeavor that is already underway: Accenture has signed a deal to use the Headstart platform in other markets around the world.
“When we came to market two years ago, we were probably the only technology company talking about fairness and diversity,” said Headstart CEO Gareth Jones. “For me this represents an investment in diversity, not just our company. This latest round will allow us to grow our capability in our core markets, leveling the playing field and breaking the cycle of exclusion that is still chronically prevalent in the world of work.”
Numerous other startups are leveraging AI and automation to streamline the recruitment process. New York-based Fetcher, for example, uses similar data-crunching techniques to proactively headhunt new candidates, while Pymetrics leverages AI as part of a standalone platform that companies use to carry out assessments built on neuroscience games.
Headstart is pitching its technology more as the underlying data architecture that “amalgamates candidate information and interprets it algorithmically,” according to Shekerdemian. “Our USP is the ability to take all of this data and, rather than just returning a pass/fail or yes/no, score each candidate with a percentage suitability as a blend of all of our data inputs.”
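That percentage-blend idea amounts to a weighted average of normalized signals. The following is a minimal sketch under stated assumptions: the signal names, the weights, and the assumption that each input has already been normalized to [0, 1] are all invented for illustration, not Headstart's actual model.

```python
def suitability_score(signals, weights):
    """Blend several normalized signals (each in [0, 1]) into a single
    percentage suitability, rather than a binary pass/fail verdict."""
    assert set(signals) == set(weights), "signals and weights must align"
    total_weight = sum(weights.values())
    blended = sum(signals[k] * weights[k] for k in signals) / total_weight
    return round(blended * 100, 1)

# Hypothetical candidate: CV match, psychometric fit, and a contextual
# uplift for outperforming their "social norm group".
candidate = {"cv_match": 0.82, "psychometric": 0.74, "context_uplift": 0.60}
weights = {"cv_match": 0.5, "psychometric": 0.3, "context_uplift": 0.2}
print(suitability_score(candidate, weights))  # 75.2
```

The appeal of a blended score over a hard threshold is that a weak signal in one dimension can be offset by strength in another, which is harder to express as a sequence of yes/no filters.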
Although algorithms can remove some human biases from many traditional administrative processes, we have seen a growing number of scenarios where the algorithms themselves demonstrate biases; humans, after all, create the algorithms. By way of example, just last week news emerged that Goldman Sachs was to be investigated over alleged gender discrimination in the credit limits issued for the Apple Card.
Ultimately, it’s much harder for an algorithm to explain why it arrived at a certain decision than it is for a human at the helm calling the shots. And that is why much of the argument today centers on which is the better option: biased algorithms that can’t explain themselves, or biased humans who can at least provide some rationale for why a decision was reached.
Elsewhere, Amazon previously scrapped an AI-powered recruitment tool it had been working on, specifically because it was biased against women. The experimental tool was trained to vet applications for technical roles by observing patterns in successful resumes submitted over the preceding decade; however, most of those applications had come from men, so in effect Amazon had been teaching its machine learning system to favor male candidates.
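The mechanism is easy to reproduce on toy data: if historical hires skew male, any resume feature correlated with gender becomes correlated with rejection, and a model fit to that history inherits the skew. The data below is invented purely for illustration.

```python
# Toy historical resumes: hires skew male, so a token correlated with
# gender (here "women's", as in "women's chess club captain") ends up
# correlated with rejection even though it says nothing about skill.
history = (
    [{"tokens": {"python", "lead"}, "hired": True}] * 8
    + [{"tokens": {"python", "women's"}, "hired": True}] * 1
    + [{"tokens": {"python", "women's"}, "hired": False}] * 5
    + [{"tokens": {"python"}, "hired": False}] * 2
)

def hire_rate(resumes, token):
    """Empirical P(hired | resume contains token)."""
    subset = [r for r in resumes if token in r["tokens"]]
    return sum(r["hired"] for r in subset) / len(subset)

print(hire_rate(history, "lead"))     # 1.0 in this toy data
print(hire_rate(history, "women's"))  # roughly 0.17
```

Any model trained to reproduce these historical rates will penalize the “women's” token, which is essentially what happened with Amazon's experimental tool.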
But specific to Headstart, it’s worth stressing that candidates aren’t actually hired by machines — humans are making all the final decisions. It’s merely a vetting tool that helps remove some bias — up to 20%, according to Headstart — while also speeding up the recruitment process.
“There is a lot of concern around technology and its ability to remove bias,” Shekerdemian said. “And rightly so. Yet we talk about it as though the human recruitment selection process is pure, robust, and bias free. It isn’t. It’s chronically biased.”
This human bias is compounded when a particular job receives hundreds — or even thousands — of applications, and it falls on one or two people to sift through them with a fine-tooth comb to find the best candidates. If there is one thing algorithms can’t be accused of, it’s being lazy or easily exhausted.
“The technology, used appropriately, can expose and largely eliminate this bias,” Shekerdemian continued, “simply because it doesn’t get to the 50th CV it’s seen that day and then skip through the next 100 because it is tired and needs to get a shortlist to the hiring manager, and a bunch of the first 50 were ‘good enough.’”
Shekerdemian concedes that meshing machine learning with data crunching isn’t perfect, but it does address many of the inherent problems that dog the exhaustive, resource-intensive hiring process. And it should also improve over time.
“The machine doesn’t consider the candidate’s name and, subconsciously, degrade that applicant’s value because of unconscious bias towards ethnic origin or gender,” Shekerdemian added. “Does that mean the machine is perfect? No. Creating a reliable data model and algorithm is an iterative process. It takes time to train, execute, review, and retrain the models in order to improve accuracy. And to flag things that could lead to bias — such as criteria that might lead the model to favor a particular gender, for example, as happened in the Amazon case.”