No black box. No AI magic. Full transparency.

You deserve to know how it works

Most HR analytics tools say "trust our AI." We say: here's exactly how we calculate every score.

Why we don't say "AI predicts"

You've probably seen tools that promise:

"Our AI analyzes 1,000+ signals to predict who will leave with 95% accuracy!"

Here's the problem: they can't explain why.

When HR asks "why is this team flagged?", the answer is: "the model says so."

That's not helpful. You can't act on a mystery.

Our philosophy:

If we can't explain why a score is high, we won't show it.
Every number breaks down into factors you can verify.

Typical "AI" tool:

Risk: HIGH

"Our proprietary algorithm detected patterns suggesting elevated risk. Contact us for details."

PeopleSignals:

7.2 Attrition Risk

Because:

  • 45% of team with tenure < 6 months (+3.0)
  • 2 manager changes in 90 days (+2.5)
  • No vacation in 60+ days (+1.7)

Based on:

Work Institute (38% first-year turnover), Gallup (70% manager impact)
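
Curious what that looks like under the hood? Every score card is just structured data. A minimal sketch in Python (field names here are illustrative, not our actual API):

# Illustrative sketch only: field names are hypothetical, not the real schema.
attrition_risk = {
    "score": 7.2,
    "confidence": "HIGH",
    "factors": [
        {"signal": "45% of team with tenure < 6 months", "contribution": 3.0},
        {"signal": "2 manager changes in 90 days", "contribution": 2.5},
        {"signal": "No vacation in 60+ days", "contribution": 1.7},
    ],
    "sources": ["Work Institute 2024", "Gallup 2024"],
}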

Four principles

Evidence-based

Risk factors grounded in published research from Work Institute, Gallup, Deloitte, Grant Thornton, and Culture Amp. Not hunches.

Explainable

Every risk score shows which factors influenced it, how it changed over time, and what we recommend doing.

Conservative

If data is insufficient, we honestly show N/A. Confidence levels: HIGH / MEDIUM / LOW / N/A. No fake precision.

Team-level

We work at the team level by default to protect individuals from personal labeling. Minimum group size: 5 people.

The research behind every factor

Here's how each piece of research informs our model.

Work Institute 2024

250,000+ exit interviews

Key Finding

"38% of resignations happen in the first year of employment"

How We Use It

  • Informs the weight of the short-tenure factor
  • Validates the < 6 month tenure threshold
  • Backs onboarding and first-year retention recommendations

Gallup 2024

122,000 respondents across 160 countries

Key Finding

"Managers account for 70% of variance in employee engagement"

How We Use It

  • Informs the weight of the manager-change factor
  • Validates manager-stability thresholds
  • Backs manager-focused recommendations

Grant Thornton 2024

5,000+ professionals

Key Finding

"54% of employees cite overwork as the primary cause of burnout"

How We Use It

  • Informs the weight of workload factors in the burnout model
  • Validates overwork thresholds
  • Backs workload-balancing recommendations

Deloitte Burnout Survey

Global workforce study

Key Finding

"51% say they need a vacation just to recover from work"

How We Use It

  • Informs the weight of the vacation factor
  • Validates time-off thresholds
  • Backs rest and recovery recommendations

Culture Amp Q3 2024

Industry benchmark data

Key Finding

"Average industry eNPS = 27"

How We Use It

  • Benchmarks team eNPS against the industry average
  • Validates engagement thresholds
  • Backs engagement recommendations

How scores work

From raw data to actionable insight

HRIS Data

Tenure, time-off, org changes, role data

↓

Signal Extraction

Research-backed factors identified

↓

Factor Calculation

Weighted by evidence strength

↓

Normalize to 0-10

Team score with confidence level

↓

Explanation

What contributed + recommendations
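
In code terms, those five stages compose roughly like this. A simplified sketch: the signal names, values, and weights below are made up for illustration, not the production model.

# Simplified sketch of the pipeline. Signal values (0..1) and weights
# are illustrative; the real extraction logic lives in the product.
def extract_signals(hris_rows):
    # Stage 2: derive research-backed factors from tenure, time-off,
    # org-change, and role data. Stubbed here with fixed values.
    return {
        "short_tenure":    (0.9, 1.0),   # (value 0..1, evidence weight)
        "manager_changes": (0.6, 0.8),
        "no_vacation":     (0.7, 0.7),
        "recent_churn":    (0.2, 0.9),
    }

def run_pipeline(hris_rows):
    signals = extract_signals(hris_rows)                           # Stage 2
    weighted = {n: v * w for n, (v, w) in signals.items()}         # Stage 3: weight by evidence
    max_possible = sum(w for _, w in signals.values())
    score = round(sum(weighted.values()) / max_possible * 10, 1)  # Stage 4: normalize to 0-10
    explanation = sorted(weighted.items(), key=lambda kv: -kv[1]) # Stage 5: biggest factor first
    return score, explanation

print(run_pipeline(hris_rows=[]))  # -> (6.0, [('short_tenure', 0.9), ...])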

Example breakdown

Team Score: 7.2

Contributing factors:

  • 45% of team with tenure < 6 months (+3.0)
  • 2 manager changes in last 90 days (+2.5)
  • No vacation in 60+ days for 70% of team (+1.7)

Total: 3.0 + 2.5 + 1.7 = 7.2

Scale interpretation:

  • 0-3: Low
  • 4-6: Medium
  • 7-10: High

Try the calculation yourself

Adjust the sliders to see how different factors affect the attrition risk score. This is a simplified demo — the full model includes more factors for attrition, burnout, and engagement.

Team Factors

Work Institute: 38% leave in year 1

Gallup: 70% of engagement from manager

Deloitte: 51% cite vacation as critical

Social contagion: departures trigger departures

Calculated Score

0.0

Attrition Risk (0-10)

Low Risk
Tenure factor (×1) +0.0
Manager factor (×0.8) +0.0
Vacation factor (×0.7) +0.0
Churn factor (×0.9) +0.0
Total (normalized) 0.0 / 10
The exact formula:
score = (
  tenure_factor × 1.0 +
  manager_factor × 0.8 +
  vacation_factor × 0.7 +
  churn_factor × 0.9
) / max_possible × 10
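
As runnable code, assuming factor values are normalized to 0..1 before weighting (an assumption of this sketch):

# Sketch of the demo formula above; inputs are assumed to be 0..1.
def attrition_score(tenure, manager, vacation, churn):
    raw = tenure * 1.0 + manager * 0.8 + vacation * 0.7 + churn * 0.9
    max_possible = 1.0 + 0.8 + 0.7 + 0.9   # every factor at its maximum
    return round(raw / max_possible * 10, 1)

print(attrition_score(0.8, 0.5, 0.7, 0.2))  # -> 5.5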

This is a simplified demo. The full model includes 6 factors for attrition, 6 for burnout, and 3 for engagement.

Want to see how this applies to your actual team data?

Calculate your turnover cost

Confidence levels

We're honest about what we know and what we don't

Confidence depends on:

Coverage: How many data sources are connected
Freshness: How recent the data is
Sample size: Team size and data volume
History: How long we've been tracking

HIGH

All sources, 90+ days history, complete data

MEDIUM

Some sources, 30-90 days history, partial data

LOW / N/A

Limited sources, < 30 days, insufficient data
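
To make that concrete, here's roughly how the tiers could be derived. The thresholds mirror the list above; the exact production rules are more nuanced than this sketch.

# Sketch of the confidence tiers; thresholds mirror the tiers above.
def confidence(coverage, history_days, data_complete):
    # coverage: fraction of supported data sources connected (0..1)
    if coverage == 0 or history_days == 0:
        return "N/A"
    if history_days < 30 or coverage < 0.5:
        return "LOW"
    if coverage == 1.0 and history_days >= 90 and data_complete:
        return "HIGH"
    return "MEDIUM"

print(confidence(coverage=1.0, history_days=120, data_complete=True))  # -> HIGH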

How we know it works

We don't just build — we measure. Here's how we validate our methodology.

> 75%

Attrition Precision

True positives among predictions

r > 0.5

Burnout Correlation

Correlation with actual burnout incidents

< 20%

False Positive Rate

Incorrect high-risk alerts
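
The retrospective math behind these targets is standard. A minimal sketch, given (predicted_high_risk, actually_left) pairs collected over the validation window:

# Minimal sketch: precision and false-positive rate from boolean pairs.
def validate(pairs):
    tp = sum(1 for pred, actual in pairs if pred and actual)
    fp = sum(1 for pred, actual in pairs if pred and not actual)
    tn = sum(1 for pred, actual in pairs if not pred and not actual)
    precision = tp / (tp + fp) if (tp + fp) else None   # target: > 0.75
    fpr = fp / (fp + tn) if (fp + tn) else None         # target: < 0.20
    return precision, fpr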

Validation Timeline

Days 1-30: Baseline

Initial scores with LOW confidence. Collecting baseline data.

Days 30-90: Building

Confidence improves to MEDIUM. Trends become visible.

After 90 Days: Validation

Retrospective analysis comparing predictions to actual outcomes. Calibration to your company's specifics.

We'll show you the retrospective analysis after 90 days — so you can see exactly how accurate the predictions were.

Privacy-first approach

What we DON'T do

  • Read message content
  • Monitor screens or keystrokes
  • Build individual 'loyalty scores'
  • Give 'who will leave' lists
  • Sell your data to third parties

What we DO

  • Analyze metadata only (aggregated)
  • Work at team level by default
  • Explain every metric transparently
  • Give you full control over data
  • GDPR-ready data practices

What we can't do (yet)

We believe honesty builds trust. Here are our limitations.

Incomplete data

Impact: Low confidence scores or N/A

Mitigation: Connect more sources, improve data quality

Incorrect org structure

Impact: Inaccurate team aggregation

Mitigation: Regular org chart sync from HRIS

Not ready to act

Impact: Predictive insights won't turn into preventive action

Mitigation: Ensure management commitment before starting

Team < 5 people

Impact: Privacy threshold not met

Mitigation: Aggregate at department level or accept N/A
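
That last rule is enforced in code, not just policy. Roughly, as a sketch with illustrative names:

# Sketch of the privacy gate: groups below the threshold never get a score.
MIN_GROUP_SIZE = 5

def safe_score(team_members, department_members, score_fn):
    if len(team_members) >= MIN_GROUP_SIZE:
        return score_fn(team_members)
    if len(department_members) >= MIN_GROUP_SIZE:
        return score_fn(department_members)   # aggregate one level up
    return "N/A"                              # privacy threshold not met

Falling back to the department keeps small teams visible in aggregate without exposing individuals.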

PeopleSignals helps you make better decisions — but doesn't replace good management.

Methodology questions

Is this surveillance?

No. PeopleSignals analyzes only metadata (when meetings happen, not what was discussed). We don't read messages, track screens, or monitor keystrokes. Everything is team-level aggregated.

Do you give a "who will leave" list?

No. We provide team-level risk scores by default. Individual scores are available only if explicitly configured by HR and meet privacy thresholds (team size, consent, etc.).

Why is something marked N/A?

N/A means insufficient data for a reliable score. This can happen with: new teams, incomplete data sources, teams below 5 people, or recent organizational changes.

How accurate are your predictions?

We aim for >75% precision on attrition risk and r>0.5 correlation on burnout. Actual accuracy improves over time as the system learns your company's patterns. We validate retrospectively.

Do you use AI or machine learning?

Not in the MVP. We use rule-based logic with research-backed weights. This makes the system transparent and explainable. ML may come later for calibration, but explainability remains the priority.

Ready to see your risks?

Connect your HRIS in 15 minutes. See first insights within 48 hours.

Questions about methodology? Email our team directly