How We Built a Recommendation Algorithm That Actually Works
Most workforce platforms blend relevance with popularity into one meaningless score. We rebuilt ours from scratch around three independent axes — and made the whole thing transparent.
Search "EOR" on most workforce platforms and you'll get one of two results: either the company that paid the most for placement, or a confusing ranking that blends relevance with popularity until neither means anything.
We had this exact problem. A user would search "EOR" and a solution that IS an EOR — literally has it as a core capability — would show a 45% match. Why? Because our old algorithm blended relevance signals with quality signals into one score. A perfect-match solution got penalized because it was newer to the platform and didn't have enough badges yet.
That's backwards. If you search for an EOR and a company is literally an EOR, it should show 99% match. Full stop.
So we rebuilt the entire recommendation algorithm around one principle: separate what something IS from how GOOD it is.
The problem with blended scores
Most recommendation systems — in workforce tech and beyond — make the same mistake. They take a dozen signals and mash them into one number. Relevance, popularity, reviews, badges, customer count, semantic similarity — all blended into a single "score."
The result is a number that means nothing. Is a "72% match" telling you the solution is somewhat relevant? Somewhat good? Somewhat popular? All three? You can't tell. And when you can't tell, you don't trust it.
Worse, it creates perverse outcomes. A brand-new EOR startup with zero reviews shows a low match for "EOR" — even though it is, by definition, a perfect match. Meanwhile, a well-reviewed staffing firm that doesn't offer EOR services at all shows a higher score because its review volume inflates the number.
That's not a recommendation. That's noise.
Three independent axes
We rebuilt the algorithm around three completely independent scores. Each one answers a different question, and none of them influence each other.
Match % — "How relevant is this to your search?"
This is the core recommendation score. It tells you whether a solution does the thing you searched for. If you search "EOR" and a solution has EOR as a capability, it's a 99% match. If it doesn't have EOR but is semantically related (say, a payroll company), it might be a 60% match. Match percentage is driven purely by relevance — what the solution IS and what it DOES.
HC Score — "How strong is this solution objectively?"
This is our merit score, calculated from verified customer feedback, business cases, and credibility signals. It answers whether a solution is established, trusted, and well-regarded. A solution's HC Score never changes based on what you search for — it's an absolute measure of earned credibility.
My Score — "How well does this fit my specific needs?"
This is personal. Buyers set their own weighted preferences — maybe compliance matters more than price, or maybe they need healthcare industry experience. My Score reflects their rubric, not ours.
The critical design decision: none of these scores touch each other. A solution can be a 99% match with a low HC Score (perfectly relevant, just new to the platform). Or a 60% match with a high HC Score (great company, just not what you searched for). Both signals are honest.
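That separation can be sketched as a simple data structure. A minimal illustration in Python — the field names and values here are hypothetical, not the platform's actual schema:

```python
from dataclasses import dataclass

@dataclass
class SolutionScores:
    """Three independent axes -- none is derived from the others."""
    match_pct: float  # relevance to this search (query-dependent)
    hc_score: float   # earned credibility (absolute, query-independent)
    my_score: float   # fit against the buyer's own weighted rubric

# Perfectly relevant but new to the platform: high match, low HC Score.
newcomer = SolutionScores(match_pct=0.99, hc_score=12.0, my_score=0.80)

# Established and well-reviewed, but not what you searched for.
veteran = SolutionScores(match_pct=0.60, hc_score=94.0, my_score=0.70)
```

Because no score feeds into another, the newcomer's relevance is never dragged down by its thin credibility record, and the veteran's credibility never inflates its relevance.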
How Match % actually works
When you search, the algorithm processes your query through two tiers.
Tier 1 — Core Relevance
This is the foundation. Does the solution do the thing you asked for?
If a solution has the exact capability you searched for — "EOR" and the solution has EOR listed as a capability — it gets a 99% match at Tier 1. No exceptions, no dilution. Structured data is king. If we know a solution is an EOR because it's tagged as one, we don't need an AI model to guess.
For queries where we don't have an exact match, we fall back to semantic similarity — vector embeddings that understand "employer of record" and "EOR" are the same concept, or that "global hiring" is related to "EOR" even if it's not identical.
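The Tier 1 logic reduces to a short sketch: exact structured match wins outright, and embeddings are only a fallback. The `semantic_sim` callable below is a stand-in for a real embedding-similarity function, and the capability tags are illustrative:

```python
def tier1_relevance(query: str, capabilities: set[str], semantic_sim) -> float:
    """Structured data is king: an exact capability tag beats any model guess."""
    if query.lower() in {c.lower() for c in capabilities}:
        return 0.99  # exact capability match -- no dilution
    # Fallback: semantic_sim stands in for cosine similarity over embeddings.
    return semantic_sim(query, capabilities)

# An EOR searched as "EOR" hits the exact-match path; the fallback never runs.
score = tier1_relevance("EOR", {"EOR", "Payroll"}, lambda q, caps: 0.0)
# score == 0.99
```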
Tier 2 — Contextual Refinement
This is where it gets interesting. After the initial search, users can refine by industry, region, compliance capabilities, governance features, and more. Each refinement adjusts the match percentage based on overlap — how much of what the user wants does the solution actually have?
The formula: match% = tier1 × (0.7 + 0.3 × tier2), where both tier scores run from 0 to 1.
Tier 2 can reduce a match but never inflate it. A solution that perfectly matches the search query but doesn't serve healthcare drops from 99% to about 69% when the user filters by healthcare. But a solution that doesn't match the search query at all can't use healthcare specialization to inflate its score.
Core relevance is always dominant.
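The formula can be checked directly. Plugging in the healthcare example above — a 99% Tier 1 match with zero Tier 2 overlap — lands at roughly 69%, while full overlap leaves the match untouched:

```python
def match_pct(tier1: float, tier2: float) -> float:
    """Both tiers are in [0, 1]; tier2 can shrink the match but never inflate it."""
    return tier1 * (0.7 + 0.3 * tier2)

# Perfect EOR match, no healthcare coverage: ~0.693, shown as ~69%.
no_overlap = match_pct(0.99, 0.0)

# Perfect EOR match with full healthcare overlap stays at 99%.
full_overlap = match_pct(0.99, 1.0)
```

Because the Tier 2 factor is bounded between 0.7 and 1.0, core relevance always dominates: a solution that fails Tier 1 can never buy its way back up through refinements.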
Where the magic happens
The real power isn't in the formula — it's in what happens when you start refining.
Say you search "EOR." You get a dozen results, all at 99% match, because they all genuinely offer EOR services. The market leaders appear first because their HC Scores — earned through verified customer feedback and documented business cases — are highest. That's fair.
Now you click "Industry: Healthcare." Suddenly a specialist you've never heard of jumps to the top. Not because we picked favorites. Because that company serves healthcare and the others don't. The algorithm honestly tells you: this is the most relevant result for YOUR specific need.
That's the moment we designed for. The specialist that would have been invisible in a blended system — buried behind bigger brands with more reviews — surfaces exactly when a buyer needs them.
And this is why the separation matters. The market leader doesn't disappear. Their match percentage drops (honestly — they don't specialize in healthcare), but their HC Score is still visible on the card. If the buyer decides "I'd rather go with the most established option even if it's not a healthcare specialist," they click "Sort by HC Score" and the market leader moves right back to position one.
The buyer decides. Not us.
The taxonomy problem we solved
Traditional search systems force refinement into rigid categories. You search, then you pick from a dropdown: Industry, Region, Company Size. But what if your refinement doesn't fit those buckets?
A buyer told us recently: "I need an EOR that specializes in talent marketplaces." Talent marketplaces isn't an industry. It's a category — a type of business model. A rigid system would have nowhere to put it. The search would return nothing. The buyer would think "this doesn't work" and leave.
Our system checks every refinement against the solution's entire attribute graph — categories, capabilities, features, industries, description text, and embedding similarity. "Talent marketplaces" matches against categories. "Healthcare" matches against industries. "Workers' compensation" matches against compliance attributes. The system finds the right bucket automatically.
You can even type natural language: "looking for an FMS that doesn't have an EOR." The algorithm parses it into structured intent — want FMS, exclude EOR — and handles the negation. Embeddings alone can't do negation. You need structure.
No forced taxonomies. No dead-end searches. The algorithm meets the user where they are.
Why we don't hide the algorithm
Most platforms treat their algorithm as a black box. We think that's wrong — especially in a market where the entire value proposition is trust.
If a buyer can't understand why a solution ranks where it ranks, they won't trust the results. If a solution can't understand how to improve their ranking, they'll assume it's pay-to-play (and in most markets, they'd be right).
So we made the whole thing transparent:
Match % is transparent. You can see exactly why a solution scored what it scored. Exact capability match? 99%. Filtered by an industry they don't serve? Score drops. There's no mystery.
HC Score is transparent. We published the entire methodology — every merit activity, every point value, every credibility multiplier. A verified thumbs up from an enterprise VP is worth more than an anonymous thumbs up. We tell you exactly how much more.
Sorting is user-controlled. We don't decide whether relevance or quality matters more. The user picks. Default sort is relevance with HC Score as a tiebreaker. One click switches to HC Score first, or My Score first.
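Switching lenses is just switching sort keys. A sketch of the default relevance-first ordering with HC Score as tiebreaker, versus HC-Score-first (field names and values are illustrative):

```python
results = [
    {"name": "A", "match": 0.99, "hc": 94},
    {"name": "B", "match": 0.99, "hc": 61},
    {"name": "C", "match": 0.60, "hc": 99},
]

# Default: relevance first, HC Score breaks ties among equally relevant results.
by_relevance = sorted(results, key=lambda r: (-r["match"], -r["hc"]))
# -> A, B, C

# One click: most established first, regardless of this particular search.
by_hc = sorted(results, key=lambda r: -r["hc"])
# -> C, A, B
```

The underlying scores never change between the two views; only the ordering does, and the user chooses it.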
What this means for solutions
If you're a workforce solution — especially a specialist — here's what matters:
Your relevance score is honest. If you're an EOR, you'll show 99% match for "EOR" whether you have zero reviews or a thousand. New solutions aren't penalized on relevance.
Your HC Score is earned. Verified customer feedback, documented business cases, and enterprise endorsements drive your merit score. Not marketing spend. Not conference sponsorships. Not pay-to-play analyst placements.
Specialization is rewarded. If you're the best EOR for healthcare, you'll rank first when someone searches "EOR" and filters by healthcare — even if you're a 20-person firm. The algorithm surfaces the best fit, not the biggest brand. That's how a boutique pharma staffing company can outrank a global brand when the buyer's context calls for it.
What this means for buyers
Search results you can trust. When a solution shows 99% match, it means it does the thing you searched for. When it shows 65%, it means it's related but not a direct match. The number means something.
You control the ranking. Sort by relevance to find the best fit. Sort by HC Score to find the most established option. Sort by My Score to apply your own criteria. Three lenses, not one blended number.
Refinement that actually works. Filter by industry, region, compliance, governance — each one honestly adjusts the match percentage. You can see exactly which solutions fit every dimension of your need, and which ones are close but missing something.
The bottom line
Most recommendation algorithms in our industry are designed to be opaque — because opacity allows monetization. When you can't see why something ranks where it ranks, the platform can sell that position.
We built ours to be transparent — because the only way to build trust in a $626 billion market is to show your work. Every score is earned. Every ranking is explainable. Every user controls their own view.
983 solutions. Three independent scores. Zero pay-to-play.