SCIENCE & METHODOLOGY

How PERSONA works - and why you can trust it

A detailed look at the assessment framework, AI interpretation layer, and validation evidence behind PERSONA. This page is designed for both technical evaluators and decision-makers who need clear, grounded evidence.

1. Structured Assessment

Candidates complete a situational judgment assessment tailored to their career level - from early career to senior leadership. Each question presents a realistic workplace scenario with four response options. There is no wrong answer. Each choice reveals a different behavioral tendency.

2. AI Interpretation

Responses are interpreted by a fine-tuned, self-hosted AI model - not a third-party API. Candidate data never leaves our infrastructure. The model was tuned with input from psychology professors and is designed for interpretive consistency.

3. Decision Support

Unlike traditional assessments that deliver a fixed report, PERSONA lets you ask any question - in your own words, about your specific concerns. Use it before the interview, during the interview in real time, or months later to plan a difficult conversation.

THE ASSESSMENT

Career-leveled situational judgment

PERSONA uses a structured situational judgment test (SJT) in which candidates respond to realistic workplace scenarios. Each question presents four response options. There is no wrong answer. Each option reflects a different behavioral tendency.

The instrument is career-leveled across six tiers, from L1 to L6. L1 and L2 focus on core competencies such as collaboration, handling ambiguity, and task prioritization. L3 and L4 add management dimensions. L5 and L6 introduce bespoke constructs that reflect the realities of the role and organization, including Trust Restoration, Overcoming Regional Silos, Political Acumen, and Influence Without Formal Authority.

For L1 to L4, the standard assessment categories include Situational Judgment, Cognitive Ability, Motivation and Values, Emotional Intelligence, and Adaptability. At L5 and L6, categories are customized based on organizational context gathered through HRBP briefings.

The four-option forced-choice format with no single correct answer is deliberate. It reduces social desirability bias, captures behavioral tendencies rather than self-reported traits, and makes the assessment harder to game.
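
To make the format concrete, here is a minimal sketch of how a single item could be represented. This is an illustrative schema, not PERSONA's actual data model; the field names, the example scenario, and the tendency labels are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ResponseOption:
    text: str       # the response a candidate can choose
    tendency: str   # the behavioral tendency this choice signals

@dataclass
class ScenarioItem:
    level: str                     # career tier, e.g. "L3"
    category: str                  # e.g. "Situational Judgment"
    scenario: str                  # the workplace situation presented
    options: list[ResponseOption]  # exactly four, none scored as "correct"

# Hypothetical L3 item: every option is a legitimate response, and each
# maps to a different tendency rather than being scored right or wrong.
item = ScenarioItem(
    level="L3",
    category="Situational Judgment",
    scenario="A direct report misses a deadline for the second time this month.",
    options=[
        ResponseOption("Address it immediately in a one-on-one.", "direct feedback"),
        ResponseOption("Review their workload before raising it.", "diagnostic first"),
        ResponseOption("Pair them with a more experienced peer.", "delegation"),
        ResponseOption("Flag the pattern to your own manager.", "escalation"),
    ],
)
```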

For senior hires at L5 and L6, PERSONA conducts briefings with HRBPs to understand team dynamics, political landscape, and cultural factors. Custom categories are then built around those realities so the assessment stays relevant to the actual decision context.

The candidate's CV or resume is used for experience confirmation only. It provides context for interpretation, such as years in regulated industries, but it is not used to infer personality traits or behavioral tendencies.

L1-L6 reference model

| Level | Typical scope | Example categories |
| --- | --- | --- |
| L1 | Early career | Collaboration, handling ambiguity, task prioritization |
| L2 | Emerging professional | Collaboration, execution discipline, adaptability |
| L3 | First-time manager | People management, coaching, decision quality |
| L4 | Mid management | Cross-team coordination, strategic prioritization, leadership communication |
| L5 | Senior leadership | Trust Restoration, Overcoming Regional Silos, Political Acumen |
| L6 | Executive / C-suite | Influence Without Formal Authority, enterprise alignment, stakeholder navigation |

AI INTERPRETATION

From responses to structured insight

Assessment responses are interpreted by a fine-tuned, self-hosted AI model. Candidate data never leaves controlled infrastructure. The model was tuned with input from psychology professors and is optimized for interpretive consistency, not memorization.

Every user query is normalized to a fixed intent before interpretation begins. Whether a user asks, "Will he push back?" or "Is this person assertive enough?" the system resolves both to the same underlying question before it answers. This is how consistency is maintained across different phrasing.
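
The matching mechanism itself is not described publicly, so the following is only one plausible shape for intent normalization: a small catalog of canonical intents and a similarity match between the query and each intent's description. The catalog entries, function names, and bag-of-words scoring are all illustrative assumptions.

```python
import math
import re
from collections import Counter

# Hypothetical canonical-intent catalog; PERSONA's real catalog and
# matching method are internal and may differ entirely.
INTENTS = {
    "assertiveness": "will this person push back assert themselves assertive enough",
    "conflict_style": "how does this person handle conflict and disagreement",
    "ambiguity": "how does this person deal with ambiguity and unclear goals",
}

def _vec(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def normalize_intent(query: str) -> str:
    """Resolve a free-form query to the closest canonical intent."""
    qv = _vec(query)
    return max(INTENTS, key=lambda k: _cosine(qv, _vec(INTENTS[k])))

# Different phrasings resolve to the same underlying question.
print(normalize_intent("Will he push back?"))                # assertiveness
print(normalize_intent("Is this person assertive enough?"))  # assertiveness
```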

Outputs are produced from structured internal representations, not unconstrained free-form generation. Each interpretive claim is decomposed into atomic components and checked against defined boundaries before it reaches the user. The model cannot assert claims that are not grounded in assessment evidence.

Guardrails define what the model can and cannot claim. It does not predict job performance, does not diagnose personality disorders, and does not claim more than the evidence supports. If a question is outside scope, the system states that clearly.
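
As a sketch of what such a guardrail pass could look like: assume each interpretive claim has already been decomposed into atomic (construct, claim type) components, and a final check allows only grounded, in-scope claims through. The construct names, rule sets, and function signature below are hypothetical, not PERSONA's actual implementation.

```python
# Constructs with actual assessment evidence behind them (illustrative).
ASSESSED_CONSTRUCTS = {"collaboration", "adaptability", "emotional_intelligence"}

# Claim types the system must never make, per the stated guardrails.
FORBIDDEN_CLAIM_TYPES = {"job_performance_prediction", "clinical_diagnosis"}

def check_claim(construct: str, claim_type: str) -> tuple[bool, str]:
    """Allow a claim only if it is in scope and grounded in assessed evidence."""
    if claim_type in FORBIDDEN_CLAIM_TYPES:
        return False, "Out of scope: the system states this rather than speculating."
    if construct not in ASSESSED_CONSTRUCTS:
        return False, "Not grounded: no assessment evidence for this construct."
    return True, "OK"

# A grounded behavioral claim passes; a performance prediction is declined.
print(check_claim("adaptability", "behavioral_tendency"))
print(check_claim("adaptability", "job_performance_prediction"))
```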

The model is self-hosted with no third-party dependency for inference. No candidate data is sent to OpenAI, Anthropic, Google, or any external API. This is an enterprise requirement, especially in the GCC where data sovereignty is a core procurement criterion.

User Question → Intent Normalization → Structured Interpretation → Boundary Check → Response
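
Putting the stages together, here is a minimal end-to-end sketch of the flow above, with every stage reduced to an illustrative stub; the real internals of each stage are not public.

```python
def normalize(query: str) -> str:
    return "adaptability"  # stub: resolve free-form phrasing to a fixed intent

def interpret(intent: str) -> dict:
    # Stub: the real system builds this from structured assessment evidence.
    return {"in_scope": True,
            "text": "Tends to re-plan quickly when priorities shift."}

def boundary_check(claim: dict) -> bool:
    return claim["in_scope"]  # stub: enforce the permissible-claim rules

def respond(query: str) -> str:
    claim = interpret(normalize(query))
    if boundary_check(claim):
        return claim["text"]
    return "This question is outside what the assessment can support."

print(respond("How does this person deal with unclear goals?"))
```
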
VALIDATION

Three independent studies

PERSONA's evidence base comes from three independent validation studies. Each study tests a different part of the system so reliability, practical utility, and interpretive consistency are evaluated separately.

r = 0.88

Study 1 - Test-Retest Reliability

Sample: n = 400

Design: 400 candidates completed the assessment twice. Response patterns were compared for consistency.

Result: Strong test-retest reliability (r = 0.88), exceeding established industry benchmarks for SJT instruments.

What it means: The assessment produces stable, consistent results. The same person taking the assessment twice produces meaningfully similar profiles.
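
Test-retest reliability of this kind is typically the Pearson correlation between paired scores from the two sittings. Below is a minimal illustration with synthetic data; the scores and noise level are invented for the example and are not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
first = rng.normal(50, 10, size=400)         # hypothetical first-sitting scores
second = first + rng.normal(0, 5, size=400)  # retest scores that track the first

r, p = pearsonr(first, second)
print(f"test-retest r = {r:.2f} (p = {p:.2g})")  # a stable instrument yields high r
```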

4.76 / 5

Study 2 - Criterion Validity (Hiring Manager Accuracy)

Sample: n = 63

Design: 63 hiring managers were surveyed after working with their hired candidate for 7+ months. The observation window was deliberately chosen to exceed twice the standard probation period in the GCC, which is typically 3 months.

Result: Mean accuracy rating of 4.76 out of 5.

What it means: Hiring managers who used PERSONA to inform decisions reported that the assessment reflected what they observed in the first 7+ months of the hire.

r = 0.93

Study 3 - AI Interpretation Consistency

Sample: n = 230

Design: In a three-phase design, 230 candidate profiles were re-interpreted after a full model re-initialization with session context cleared. The study tested identical queries and paraphrased queries with the same intent.

Result: r = 0.93 for identical queries and r = 0.89 for paraphrased queries. Consistency was further confirmed through sentiment analysis of paired responses.

What it means: The interpretation layer stays consistent across wording changes and over time. Re-initializing the model does not meaningfully change the interpretations it produces.
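
For illustration, the core comparison can be sketched as correlating per-profile scores across runs: one re-run with the identical query after re-initialization, and one with a paraphrase of the same intent. The synthetic scores and noise levels below are arbitrary stand-ins, not study parameters.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 230
base = rng.normal(0, 1, n)                  # interpretation scores, original run
rerun = base + rng.normal(0, 0.4, n)        # identical query after re-initialization
paraphrased = base + rng.normal(0, 0.5, n)  # same intent, different wording

print(f"identical queries:   r = {pearsonr(base, rerun)[0]:.2f}")
print(f"paraphrased queries: r = {pearsonr(base, paraphrased)[0]:.2f}")
```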

Full methodology details, including statistical procedures and complete study design documentation, are available on request for enterprise evaluation processes.

DATA GOVERNANCE

Enterprise-grade data handling

PERSONA runs on self-hosted infrastructure. No candidate data is transmitted to third-party AI providers. Processing stays inside controlled infrastructure from assessment ingestion through interpretation output.

Data sovereignty is built into deployment design. This is especially relevant for GCC enterprises where candidate data residency and control are key requirements. Assessment data remains within the deployment environment.

Assessment responses and generated interpretations are stored securely for audit and ongoing use. Raw model weights and training corpora are not exposed, and individual candidate data is never used to retrain the model.

Guardrail architecture enforces explicit boundaries on permissible claims. Outputs are checked against defined limits before display, and the model declines to answer rather than speculate beyond available evidence.

PERSONA is designed with awareness of emerging AI regulation, including EU AI Act principles around interpretability, human oversight, and data minimization. The system supports decisions and documents evidence; it does not make autonomous hiring decisions.

POSITIONING

How PERSONA relates to established approaches

| Category | Traditional Psychometrics (SHL, Hogan) | Generic AI Tools | PERSONA |
| --- | --- | --- | --- |
| Assessment method | Standardized trait inventory | No structured assessment | Career-leveled situational judgment |
| Interpretation | Certified practitioner required | General-purpose LLM | Fine-tuned, domain-specific AI |
| Output format | Fixed report | Unstructured text | Structured insight + open-ended Q&A |
| Customization | Standard for all organizations | Prompt-dependent | Organization-specific at senior levels |
| Ongoing use | One-time report | No candidate context | Return anytime, ask new questions |
| Data handling | Third-party platform | Data sent to AI provider | Self-hosted, no third-party calls |

PERSONA preserves what works in traditional psychometrics - structured measurement, evidence grounding, and psychometric rigor - while advancing the interpretation and accessibility layer through domain-specific AI.

Questions about our methodology?

We welcome technical evaluation. Request our full validation documentation or schedule a methodology walkthrough with our team.