AI/ML Semantic Matching for Recruitment: A Large-Scale Case Study on Expertini's Global Talent Platform


Research Publication | AI/ML Semantic Matching

A.H. Syed  ·  S.A. Habeebi  ·  Dr. S.M.M. Habibi

Abstract

Expertini's semantic matching engine redefines how candidates and jobs are paired across a globally distributed recruitment platform serving over 1.25 million monthly visits and 845,647 registered users across 150+ countries (Expertini Public Data, 2026). Unlike traditional applicant tracking systems (ATS) that rely on keyword overlap, Expertini deploys Natural Language Processing (NLP), anchored by the semantic Python library, to extract meaning-based representations of skills, experience, and job requirements.

A critical conceptual distinction underlies this work: Semantic Similarity is the objective — enabling the system to understand that "feline" and "cat" mean the same thing, or that "NLP" and "natural language processing" are equivalent. Cosine Similarity is the mathematical instrument used to measure that semantic proximity in vector space, and is addressed in our companion papers [13][14]. This paper details the NLP pipeline, the semantic Python library, and the weighted Candidate Match Score formula, benchmarked against keyword-based ATS baselines and manual CV review.

I. Introduction

The global recruitment industry processes hundreds of millions of applications annually. Traditional applicant tracking systems (ATS) have historically filtered candidates using lexical keyword matching — a paradigm that, while computationally inexpensive, systematically fails to understand the meaning behind words. A candidate proficient in "machine learning" may be excluded by an ATS configured to search for "predictive modelling," despite the functional equivalence of these skills.

Expertini, established in 2008 and operating across 251 country-specific subdomains with over 15 million jobs globally, encountered this limitation at scale. With 845,647 registered users and over 1.25 million monthly visits across 150+ countries (Expertini Public Data, 2026), the platform required a matching solution capable of navigating linguistic synonymy, abbreviation variance, and cross-domain skill transferability — without perpetuating the demographic biases inherent in keyword-frequency approaches.

This paper is the third in Expertini's series of research publications on AI-powered recruitment [13][14]. It focuses on Semantic Similarity as the target capability and the NLP toolchain — including the semantic Python library — used to realise it in production. The paper draws a precise distinction between Semantic Similarity (the what) and Cosine Similarity (the how), introduces the weighted Candidate Match Score formulation, and benchmarks its performance against manual CV review [15] and keyword-based ATS baselines.

II. Semantic Similarity vs. Cosine Similarity: A Precise Distinction

A foundational source of confusion in recruitment AI literature is the conflation of Semantic Similarity with Cosine Similarity. These are distinct concepts operating at different levels of abstraction, and precision in their usage is essential for reproducible research.

Semantic Similarity — The Goal

Semantic Similarity is the objective: the ability of a system to recognise that two textual expressions carry equivalent or closely related meaning, regardless of the specific words used.

For example: "feline" ≈ "cat"  |  "software developer" ≈ "programmer"  |  "NLP" = "natural language processing"

It answers the question: do these two texts mean the same thing? This is what Expertini's system seeks to determine when matching a candidate's resume to a job description. It is achieved through the semantic Python library and NLP pipeline described in Section III.

Cosine Similarity — The Mathematical Method

Cosine Similarity is the mathematical instrument used to measure semantic proximity. Once texts are encoded as vectors in a high-dimensional embedding space, cosine similarity computes the cosine of the angle between those vectors — producing a score in [−1, 1] (in practice [0, 1] for these embeddings) indicating how semantically aligned the two representations are.

It answers: how do we quantify that closeness? The formula, model fine-tuning, and vector retrieval methodology using Cosine Similarity are addressed in detail in our companion papers [13][14]. The present paper focuses on the semantic extraction layer that precedes and enables this computation.

In plain terms: Semantic Similarity is what you want — you want the computer to understand that "feline" and "cat" mean the same thing. Cosine Similarity is how you get there — it is the mathematical calculation used to measure that meaning by computing the angular distance between word or sentence vectors in a high-dimensional space.
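To make the distinction concrete, the angular measurement can be sketched in a few lines of Python. The vectors below are toy three-dimensional stand-ins, not outputs of Expertini's 768-dimensional embedding model:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for illustration only
feline = [0.9, 0.8, 0.1]
cat    = [0.85, 0.82, 0.12]
truck  = [0.05, 0.1, 0.95]

print(round(cosine_similarity(feline, cat), 3))   # high — near-synonyms
print(round(cosine_similarity(feline, truck), 3)) # low — unrelated concepts
```

Semantic similarity is the property we hope the embedding space captures; cosine similarity is merely the ruler applied to it.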

III. NLP Foundation: The semantic Python Library

Expertini's NLP pipeline is anchored by the semantic Python library — a purpose-built toolkit for extracting structured semantic information from unstructured text. Unlike general-purpose NLP frameworks that require extensive configuration for domain-specific extraction, semantic provides out-of-the-box support for the types of entities most relevant to recruitment text: skills, dates, numeric quantities, mathematical expressions, and unit conversions. It is installed via the Python Package Index:

# Install the semantic library
pip install semantic

# Core imports for recruitment NLP
from semantic import parse
from semantic.numbers import NumberService
from semantic.dates import DateService
from semantic.units import ConversionService

Key Capabilities of the semantic Library in Recruitment Context

  • ✔️ Text Parsing & Conceptual Extraction — parse() converts raw resume and job description text into structured semantic components, identifying skills, roles, and contextual relationships. It understands that "5 years of Python development" encodes both a skill ("Python") and an experience duration ("5 years").
  • ✔️ Number Service — NumberService normalises numeric expressions: "ten years," "10 yrs," and "a decade of experience" all resolve to the canonical value 10 — critical for consistent experience-duration matching.
  • ✔️ Date Service — DateService extracts and normalises temporal references, computing actual experience durations from resume date ranges (e.g., "Jan 2018 – Present" → 7.1 years), feeding directly into the Candidate Skill Score calculation.
  • ✔️ Unit Conversion Service — ConversionService handles cross-unit normalisation for global recruitment: salary figures in different currencies, distance for commute filtering, and educational grade normalisation across international systems.
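The duration normalisation described above can be approximated with the standard library alone. This is an illustrative sketch, not the semantic library's implementation; the function name duration_years is hypothetical:

```python
from datetime import date

def duration_years(start, end=None):
    """Approximate an experience duration in years from a resume date range.

    365.25 days/year absorbs leap days; result is rounded to one decimal,
    matching the granularity used by the Candidate Skill Score inputs.
    """
    end = end or date.today()
    return round((end - start).days / 365.25, 1)

# "Jan 2018" through early 2025 resolves to roughly seven years
print(duration_years(date(2018, 1, 1), date(2025, 2, 1)))  # → 7.1
```

In production, the extracted duration feeds directly into the 0–100 Candidate Skill Score mapping shown in the code examples below.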

Code Examples: semantic Library in Action

# 1. Parse a resume fragment for skill and context extraction
result = parse("Led team of 8 engineers building NLP pipelines in Python")
# → extracts: role=lead, team_size=8, skills=[NLP, Python], domain=engineering

# 2. Normalise experience duration with NumberService
ns = NumberService()
ns.parse("ten years")    # → 10
ns.parse("half a decade") # → 5

# 3. Extract date ranges with DateService
ds = DateService()
ds.extractDate("from January 2018 to present")
# → start: 2018-01-01, end: today, duration: computed automatically

# 4. Map to Candidate Skill Score (0–100)
yrs = ds.extractDuration("8 years")  # → 8.0
css_python = min(100, (yrs / 10) * 100) # → 80.0

Compared to spaCy or NLTK, the semantic library offers domain-relevant structured extraction without requiring custom entity recognition model training. Its built-in understanding of numeric, temporal, and unit-based semantics maps directly to the variables required by the Candidate Match Score formula — reducing pipeline complexity and ensuring deterministic, interpretable extraction of the variables that matter most in a recruitment context.

IV. System Architecture

Expertini's semantic matching pipeline operates as a Flask-based microservice integrated with a 9-node Elasticsearch cluster. Each node is configured with 128 GB RAM, Intel® Core™ i5-13500 CPU (14 cores / 20 threads, 2.5 GHz), and the system processes document pairs across 251 regional indices with sub-100ms end-to-end latency. The full pipeline is illustrated below.

[Figure 1 content: Resume/CV (PDF · DOCX · DOC) and Job Description (employer · ATS feed) → NLP pipeline, semantic Python library (parse() · DateService · NumberService · ConversionService) with abbreviation expansion (proprietary self-maintained dictionary), skill ontology mapping, and language detection → two parallel paths: (a) Candidate Match Score (weighted formula, Approach A, explainable 0–100 scale) and (b) 768-dim dense vector embedding → Cosine Similarity → ANN retrieval → Elasticsearch 9-node cluster (128 GB RAM · Intel i5-13500 · 14 cores / 20 threads · 251 regional indices · 15M+ jobs) → ranked candidate results (Match Score · semantic rank · recruiter shortlist) at 0.06 sec / pair (max 2,500 tokens each).]

Fig. 1. Expertini's semantic matching pipeline — from ingestion through the semantic Python NLP library, parallel scoring paths, and 9-node Elasticsearch cluster output.

V. Methodology: Weighted Candidate Match Score

Building on Expertini's prior published research [14], the Candidate Match Score (CMS) is defined as a weighted sum normalised by the total importance of job requirements. This ensures that critical skills contribute proportionally more to the final score than peripheral ones. The core formula (Approach A) is:

CMS = Σ (CSS_i × JRIS_i) / Σ JRIS_i       (Equation 1 — Approach A)

Where CSS_i = Candidate Skill Score for skill i (0–100, representing proficiency); JRIS_i = Job Requirement Importance Score for skill i (0–100, representing criticality to the role); the denominator is the sum of all importance scores, ensuring the result is bounded and meaningful on a 0–100 scale.

Why Not Approach B (÷ N)?

An alternative formulation, Approach B, divides the weighted sum by N (total number of requirements), treating all job requirements as equally important:

CMS_B = Σ (CSS_i × JRIS_i) / N       (Equation 2 — Approach B, not recommended)

This produces unrealistic, unbounded results. Dividing by N assumes each job requirement has equal importance — which is rarely true in practice. If a role requires Python (critical), communication skills (important), and Excel (peripheral), Approach B awards each an equal one-third share of the final score, systematically understating the primacy of the critical skill. Approach A weights each skill's contribution by its declared importance, producing a score that accurately reflects the candidate's alignment with what the role actually demands.

Numerical Example: Approach A vs. Approach B

Skill                               | CSS | JRIS | CSS × JRIS
Skill 1 — Python (critical)         |  90 |  100 | 9,000
Skill 2 — Communication (important) |  70 |   80 | 5,600
Skill 3 — Excel (less important)    |  60 |   30 | 1,800
Approach A Score (÷ Σ JRIS = 210)   | ≈ 78.10 ✔
Approach B Score (÷ N = 3)          | ≈ 5,466.67 ✗

Approach B's result (5,466.67) is numerically incoherent on a 0–100 scale and is not practically useful for candidate ranking. Approach A yields a meaningful, bounded, interpretable score of 78.10.
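The worked example above can be verified in a few lines of Python; both functions are direct translations of Equations 1 and 2:

```python
def cms_approach_a(skills):
    """Approach A: weighted sum normalised by total importance (bounded on 0-100)."""
    return sum(css * jris for css, jris in skills) / sum(jris for _, jris in skills)

def cms_approach_b(skills):
    """Approach B: weighted sum divided by requirement count (unbounded; shown for contrast)."""
    return sum(css * jris for css, jris in skills) / len(skills)

# (CSS, JRIS) rows from the table above
skills = [(90, 100), (70, 80), (60, 30)]
print(round(cms_approach_a(skills), 2))  # → 78.1
print(round(cms_approach_b(skills), 2))  # → 5466.67
```

Because Approach A's denominator equals the total weight, the score is a true weighted average of the CSS values and can never exceed 100, which is what makes it safe to present to recruiters as a percentage-like match score.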

Abbreviation and Synonym Expansion

Prior to scoring, all skill tokens undergo expansion via Expertini's self-maintained proprietary dictionary, which maps abbreviations and colloquial forms to their canonical equivalents — for example, "k8s" → "Kubernetes", "ML" → "machine learning", "CPA" → "Certified Public Accountant". The size and composition of this dictionary is not disclosed publicly, as it represents a core competitive asset developed over 15+ years of platform operation. Expansion occurs before embedding and before Match Score computation, ensuring that surface-form variation does not penalise candidates who express equivalent skills differently.
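The expansion step can be sketched as a simple canonicalisation pass. The entries below are illustrative only — the production dictionary is proprietary and far larger:

```python
# Illustrative entries only; the production mapping is a proprietary asset.
CANONICAL = {
    "k8s": "kubernetes",
    "ml": "machine learning",
    "cpa": "certified public accountant",
    "nlp": "natural language processing",
}

def expand_skills(tokens):
    """Map abbreviations and colloquial forms to canonical skill names before scoring."""
    return [CANONICAL.get(t.lower(), t.lower()) for t in tokens]

print(expand_skills(["k8s", "Python", "ML"]))
# → ['kubernetes', 'python', 'machine learning']
```

Running expansion before embedding ensures that "k8s" and "Kubernetes" produce identical downstream vectors, so surface-form variation never reaches the scoring layer.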

Note on scope: The Cosine Similarity methodology — including the vector embedding model and ANN retrieval framework — is addressed in detail in the companion papers [13][14]. The present paper is intentionally focused on the semantic extraction layer (semantic Python library) and the weighted Candidate Match Score formulation. Two components — model fine-tuning and continuous learning loop — are planned for the next system iteration and will be reported in future publications.

VI. Why Semantic Matching Supersedes Keyword ATS and Reduces Bias

Keyword-based ATS systems operate on lexical identity: a candidate's resume must contain the exact character sequence present in the job description filter to be returned as a match. This creates a cascade of systematic failures in real-world recruitment that semantic matching resolves.

The Fundamental Failure of Keyword Matching

Synonymy failures. A data scientist who writes "predictive modelling" is excluded from a search for "machine learning" — despite near-complete functional overlap. In Expertini's production analysis, an estimated 30–35% of highly qualified candidates were invisible to keyword ATS due to terminological mismatch alone. This is a "hidden talent pool" that semantic matching systematically recovers.

Abbreviation failures. "NLP," "Natural Language Processing," "natural-language processing," and "computational linguistics" are treated as four entirely different skills by a keyword system — but are semantically equivalent in most recruitment contexts.

Context blindness. A keyword system matching "Python" cannot distinguish between a Python programming expert and a document that mentions Python once in a certification list. Semantic matching understands contextual weight and proficiency signals, not just presence or absence of a term.
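The synonymy failure described above can be demonstrated with a toy contrast between lexical matching and synonym-aware matching. The SYNONYMS table is a stand-in for illustration; the production system resolves synonymy through embeddings, not a hand-written map:

```python
# Toy contrast: lexical keyword match vs. synonym-aware match.
SYNONYMS = {"machine learning": {"machine learning", "predictive modelling"}}

def keyword_match(resume_text, required):
    """Lexical ATS behaviour: exact substring presence only."""
    return required in resume_text.lower()

def synonym_match(resume_text, required):
    """Match if any known equivalent form of the required skill appears."""
    text = resume_text.lower()
    return any(form in text for form in SYNONYMS.get(required, {required}))

resume = "Built predictive modelling pipelines for churn forecasting"
print(keyword_match(resume, "machine learning"))  # → False: qualified candidate invisible
print(synonym_match(resume, "machine learning"))  # → True: terminological gap bridged
```

The candidate in this example falls into the "hidden talent pool" for a keyword ATS, yet is recovered by any system that reasons over meaning rather than character sequences.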

Semantic Matching as a Bias Reduction Mechanism

Keyword-based ATS systems encode systemic bias through several mechanisms. Resume formatting norms, vocabulary choices, and keyword density correlate with educational background, socioeconomic status, and cultural writing conventions — not with actual competency. A candidate who writes a concise, experience-dense resume will score lower than one who keyword-stuffs, even if the former is more qualified.

Semantic matching disrupts this pattern by evaluating meaning, not form. Two candidates whose resumes describe the same competency in different words receive equivalent semantic scores. This levels the playing field for candidates from diverse linguistic, educational, and cultural backgrounds — a critical consideration for a platform serving 150+ countries.

Expertini's preprocessing pipeline explicitly strips personal identifiers — name, age, nationality, gender markers — before semantic encoding. The matching operates exclusively on skills, experience descriptions, and qualifications. Bi-annual fairness audits compare match score distributions across synthetic candidate pairs that are identical in professional content but vary in inferred demographic signals, with parity gaps below 2% maintained — compared to gaps of 8–14% observed in keyword-baseline systems on the same test populations.
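A simplified illustration of the identifier-stripping step follows. The production pipeline uses entity-recognition-grade detection; the regex patterns below are toy examples that catch only pattern-matchable fields (names, for instance, require NER and are not handled here):

```python
import re

# Toy redaction pass illustrating identifier stripping before semantic encoding.
PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:age|aged)\s*:?\s*\d{2}\b", re.IGNORECASE), "[AGE]"),
]

def strip_identifiers(text):
    """Replace pattern-matchable personal identifiers with neutral placeholder tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

print(strip_identifiers("Jane Doe, aged 34, jane@example.com — 8 years Python"))
# → Jane Doe, [AGE], [EMAIL] — 8 years Python
```

Redacting before encoding means the embedding and Match Score layers never observe demographic signals, which is what the bi-annual fairness audits then verify downstream.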

Semantic Matching vs. Keyword ATS — Capability Comparison

Capability                  | Keyword ATS      | Expertini Semantic
Synonym recognition         | ✗ None           | ✔ Full
Abbreviation handling       | ✗ Partial / None | ✔ Proprietary dict
Context understanding       | ✗ None           | ✔ NLP pipeline
Cross-language matching     | ✗ None           | ✔ Multilingual
Demographic bias exposure   | High (8–14% gap) | Mitigated (<2% gap)
Resume format sensitivity   | High             | Low
Experience duration parsing | ✗ None           | ✔ DateService
Explainability              | Partial          | ✔ Match Score 0–100

VII. Performance Benchmarking vs. Manual CV Review

Filipov et al. [15] introduced CV3 — a visual analytics system for the manual exploration, assessment, and comparison of CVs (first published 10 July 2019). CV3 provides a structured interface for human reviewers to evaluate candidates side-by-side, representing the state of the art in augmented manual review. While CV3 significantly improves on unassisted screening, it remains fundamentally human-paced and subject to reviewer fatigue, anchoring bias, and inconsistency across sessions.

Expertini's semantic matching infrastructure provides a quantitative alternative operating at machine speed. The benchmarking below was conducted on the production configuration: 9 nodes, each with 128 GB RAM, Intel® Core™ i5-13500 (14 cores / 20 threads, 2.5 GHz base clock), processing documents at a maximum length of 2,500 tokens for both resume and job description. All latency measurements are end-to-end wall-clock times inclusive of NLP preprocessing and scoring.

Throughput & Latency: Semantic Matching vs. Manual Review

Method                      | Time / Pair | Pairs / Hour | Consistency | Bias Risk
Manual Review (unaided)     | 4–8 min     | ~10–15       | Low         | High
CV3 — Visual Analytics [15] | 2–4 min     | ~20–30       | Medium      | Medium
Keyword ATS (BM25)          | < 1 sec     | High         | High        | High (lexical)
Expertini Semantic (Ours)   | 0.06 sec    | > 60,000     | Very High   | Mitigated

Elasticsearch Cluster Latency by Document Length

Operation                    | Doc Length     | p50 Latency | p99 Latency
Semantic parse + score       | ≤ 500 tokens   | 0.02 sec    | 0.04 sec
Semantic parse + score       | ≤ 1,000 tokens | 0.04 sec    | 0.07 sec
Semantic parse + score       | ≤ 2,500 tokens | 0.06 sec    | 0.12 sec
ANN retrieval (HNSW)         | —              | 0.012 sec   | 0.041 sec
Batch re-index (251 indices) | 4.2M documents / hour

VIII. Results and Platform Impact

As of 2026, Expertini's platform records 845,647 registered users and 1.25 million monthly visits (Expertini Public Data, 2026). Approximately 75% of registered users have engaged with the Resume Score feature powered by the semantic matching engine — representing over 634,000 individuals whose candidate profiles have been semantically scored against job requirements. This constitutes one of the largest real-world deployments of NLP-based candidate matching reported in the academic literature.

Match Quality vs. Keyword Baseline

System                    | Precision@5 | MRR   | NDCG@10
BM25 / TF-IDF Keyword     | 0.531       | 0.542 | 0.511
Structured ATS Filter     | 0.488       | 0.503 | 0.476
Expertini Semantic (Ours) | 0.814       | 0.836 | 0.823

Test set: 85,000 held-out pairs across 12 countries. Relevance labels from recruiter shortlisting decisions. Improvements over BM25 are statistically significant (p < 0.001, paired t-test).
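The per-query metrics reported above can be computed as follows (MRR and Precision@5 are then averaged across all queries in the test set). The relevance list below is a toy example, not drawn from the actual held-out data:

```python
def precision_at_k(relevance, k=5):
    """Fraction of the top-k ranked results labelled relevant (1) vs. not (0)."""
    top = relevance[:k]
    return sum(top) / len(top)

def reciprocal_rank(relevance):
    """1 / rank of the first relevant result; 0.0 if no result is relevant."""
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

# Toy ranked list for one query: 1 = recruiter-shortlisted, 0 = not
ranked = [1, 0, 1, 1, 0, 1]
print(precision_at_k(ranked, k=5))  # → 0.6
print(reciprocal_rank(ranked))      # → 1.0
```

NDCG@10 additionally discounts relevant results by log of their rank position, rewarding systems that place shortlisted candidates earlier in the list.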

Platform Engagement Impact

  • ✔️ Job click-through rate (CTR): 3.2% → 5.8%  (+81.3%)
  • ✔️ Application completion rate: 21.4% → 34.7%  (+62.1%)
  • ✔️ Recruiter shortlist acceptance: 28.6% → 51.3%  (+79.4%)
  • ✔️ Time-to-shortlist: 8.3 days → 3.9 days  (−53.0%)
  • ✔️ Candidate alert dropout: 41.2% → 22.7%  (−44.9%)

IX. Conclusion

This paper has presented a comprehensive account of Expertini's production semantic matching system for global recruitment. We have established a precise and important conceptual distinction: Semantic Similarity is what you want — the system understanding that "feline" and "cat" mean the same thing, or that "NLP" and "natural language processing" are identical concepts. Cosine Similarity is how you get there — it is the mathematical calculation used to measure that meaning by computing angular distance between vectors in high-dimensional space, addressed in our companion papers [13][14].

The present paper's contributions include: the architectural integration of the semantic Python library (pip install semantic) for structured information extraction from recruitment text; the weighted Candidate Match Score (Approach A) with formal justification of its superiority over the naïve Approach B; systematic analysis of how semantic matching surpasses keyword ATS in accuracy, inclusivity, and bias reduction; and performance benchmarking showing 0.06-second pair processing on a 9-node Elasticsearch cluster (128 GB RAM, Intel i5-13500, 14 cores) for documents up to 2,500 tokens — over 2,000× faster than structured human review while improving match quality.

With 845,647 registered users, 1.25 million monthly visits, and 75% of users engaging with AI-powered Resume Score, Expertini's platform represents one of the largest real-world deployments of semantic recruitment matching in the academic literature. Future work will implement the model fine-tuning pipeline and continuous learning loop, with results to be published in subsequent contributions.




References

  1. S. E. Robertson and S. Walker, "Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval," SIGIR '94, pp. 232–241, 1994.
  2. C. D. Manning, P. Raghavan, and H. Schütze, Introduction to Information Retrieval. Cambridge University Press, 2008.
  3. T. Mikolov et al., "Distributed representations of words and phrases," NeurIPS, pp. 3111–3119, 2013.
  4. J. Devlin et al., "BERT: Pre-training of deep bidirectional transformers for language understanding," NAACL-HLT, pp. 4171–4186, 2019.
  5. N. Reimers and I. Gurevych, "Sentence-BERT: Sentence embeddings using Siamese BERT-networks," EMNLP, pp. 3982–3992, 2019.
  6. C. Qin et al., "Enhancing person-job fit for talent recruitment," SIGIR '18, pp. 25–34, 2018.
  7. A. Köchling and M. C. Wehner, "Discriminated by an algorithm: A systematic review of bias in AI recruitment," Business Research, vol. 13, no. 3, pp. 795–848, 2020.
  8. M. Raghavan et al., "Mitigating bias in algorithmic hiring," ACM FAccT, 2020.
  9. R. Singh et al., "AI in recruitment: Benefits and pitfalls," Int. J. Human Resource Management, vol. 31, no. 2, pp. 352–370, 2020.
  10. A. Rhea et al., "Post-pandemic trends in AI-powered hiring," HR Tech Review, vol. 9, no. 2, pp. 55–68, 2022.
  11. Expertini (2016). Expertini global job search engine launches new social insight feature. Press Release. Available: https://expertini.com/prnews/article/expertini-global-jobsearch-engine-launches-new-social-insight-feature-2016-11-25/
  12. Expertini Public Data (2026). Platform Statistics. Available: https://expertini.com/api/statistics/ [Accessed: Feb. 2026].
  13. A. H. Syed, "Expertini analyzed how artificial intelligence is impacting the recruitment industry: A revolutionary age in computing catalyst," SSRN, abstract_id=4779081, 2024. Available: http://dx.doi.org/10.2139/ssrn.4779081
  14. A. H. Syed, "Leveraging mathematical and artificial intelligence for automated resume screening: A study by Expertini.com," SSRN, abstract_id=4995903, 2024. Available: http://dx.doi.org/10.2139/ssrn.4995903
  15. V. Filipov et al., "CV3: Visual exploration, assessment, and comparison of CVs," Computer Graphics Forum, first published 10 July 2019.