Architecture Overview

110 Global AI is an AI-native workforce management platform built with a modern, scalable architecture.

Tech Stack

Backend: Express.js (Node.js)
Database: Neon PostgreSQL (Serverless)
Hosting: Render Web Service
AI Services: Anthropic Claude / OpenAI
File Storage: Cloudflare R2 (via Polsia Proxy)
Email: Postmark (via Polsia Proxy)

System Architecture

┌─────────────────────────────────────────────────────────────────┐
│                        INTERNET                                  │
└────────────────────────────┬────────────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────────────┐
│                   Render Web Service                             │
│               https://one10-global-ai.polsia.app                │
│                                                                  │
│  ┌─────────────────────────────────────────────────────────────┐│
│  │                    Express.js Server                        ││
│  │                     (server.js)                             ││
│  │                                                              ││
│  │  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐    ││
│  │  │  Auth    │  │ Forecast │  │ Schedule │  │   WFM    │    ││
│  │  │  Routes  │  │   API    │  │   API    │  │  Erlang  │    ││
│  │  └──────────┘  └──────────┘  └──────────┘  └──────────┘    ││
│  └─────────────────────────────────────────────────────────────┘│
└────────────────────────────┬────────────────────────────────────┘
                             │
          ┌──────────────────┼──────────────────┐
          ▼                  ▼                  ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│  Neon Postgres  │ │  Polsia AI API  │ │   Polsia R2     │
│   (Database)    │ │  (Claude/GPT)   │ │  (File Store)   │
└─────────────────┘ └─────────────────┘ └─────────────────┘
          

Environment Variables

All environment variables are managed through Render's dashboard. These are automatically injected at runtime.

Variable | Purpose | Required
DATABASE_URL | Neon PostgreSQL connection string | Yes
JWT_SECRET | Secret key for signing JWT auth tokens | Yes
NODE_ENV | Environment mode (production/development) | Yes
POLSIA_API_KEY | API key for Polsia services (AI, email, storage) | Yes
ANTHROPIC_API_KEY | Anthropic Claude API key for AI features | Yes
OPENAI_API_KEY | OpenAI API key (proxied through Polsia) | Optional
OPENAI_BASE_URL | OpenAI proxy URL (Polsia endpoint) | Optional
POLSIA_R2_BASE_URL | Base URL for R2 file storage proxy | Optional
POLSIA_EMAIL_API | Postmark email API proxy URL | Optional

Security Note

Never commit secrets to the repository. All sensitive values should be managed through Render's environment variables dashboard.

Deployment Process

The app uses Render's auto-deploy feature connected to GitHub. Every push to main triggers a new deployment.

Automatic Deploy Flow

  1. Push to GitHub: code is pushed to the main branch
  2. Build Phase: Render runs npm install to install dependencies
  3. Database Migrations: Render runs npm run build, which executes node migrate.js
  4. Start Server: Render executes npm start (node server.js)
  5. Health Check: Render verifies /health returns 200 before routing traffic

Manual Deploy / Rollback

To manually trigger a deploy or rollback:

  1. Go to the Render Dashboard
  2. Select the 110-global-ai web service
  3. Click Manual Deploy and choose a commit, or select a previous deploy to roll back

Build Command

npm install

Start Command

npm start

Database Access

The app uses Neon PostgreSQL, a serverless Postgres database with auto-scaling.

Connecting to the Database

Option 1: Neon Console (Recommended)

  1. Go to console.neon.tech
  2. Select your project and use the SQL Editor

Option 2: psql CLI

psql "postgresql://neondb_owner:****@ep-late-sky-ae17of8y.c-2.us-east-2.aws.neon.tech/neondb?sslmode=require"

Replace **** with the actual password from your DATABASE_URL environment variable.

Database Schema (Core Tables)

Table | Purpose
call_data | Historical call volume data by hour
forecasts | AI-generated call volume predictions
processed_data | ML-ready processed data with features
business_units | Organizational units (channels, countries)
demo_requests | Demo request form submissions
analytics_events | Website analytics and event tracking
activity_logs | Audit log of user actions
company_settings | App configuration and branding
_migrations | Tracks applied database migrations

Running Migrations

Migrations run automatically during each deploy. To add a new migration:

  1. Create a file in migrations/ with the format YYYYMMDD_description.js
  2. Export a name and an up(client) function
  3. Push to main; the migration runs on the next deploy

// migrations/20260113_add_feature.js
module.exports = {
  name: 'add_feature',
  up: async (client) => {
    await client.query(`
      CREATE TABLE IF NOT EXISTS new_feature (
        id SERIAL PRIMARY KEY,
        name VARCHAR(255) NOT NULL,
        created_at TIMESTAMP DEFAULT NOW()
      )
    `);
  }
};

Local Development Setup

Follow these steps to run the app locally for development.

Prerequisites

  • Node.js 18+ (LTS recommended)
  • npm 9+ or yarn
  • Git
  • Access to GitHub repository

Setup Steps

1. Clone the Repository

git clone https://github.com/Polsia-Inc/110-global-ai.git
cd 110-global-ai

2. Install Dependencies

npm install

3. Create Environment File

Create a .env file in the project root:

# Database - get from Render env vars or Neon console
DATABASE_URL=postgresql://neondb_owner:****@ep-late-sky-ae17of8y.c-2.us-east-2.aws.neon.tech/neondb?sslmode=require

# Authentication
JWT_SECRET=your-dev-secret-key

# Environment
NODE_ENV=development

# Polsia API (get from Render env vars)
POLSIA_API_KEY=company_299_****
ANTHROPIC_API_KEY=sk-ant-****

# Optional
OPENAI_BASE_URL=https://polsia.com/ai/openai/v1
POLSIA_R2_BASE_URL=https://polsia.com
POLSIA_EMAIL_API=https://polsia.com/api/proxy/email

4. Run Migrations

npm run migrate

5. Start the Server

npm run dev

App will be available at http://localhost:3000

Development Tip

You can use the same production database for local development (be careful with destructive operations), or create a separate Neon branch for dev work.

Domain Configuration

The app is currently served on a Polsia subdomain with SSL automatically configured.

Current Setup

Primary Domain: one10-global-ai.polsia.app
SSL Certificate: auto-managed (Let's Encrypt)
CDN: Render Edge Network

Adding a Custom Domain

To use a custom domain (e.g., app.110global.ai):

  1. Go to your Render Dashboard > 110-global-ai service > Settings > Custom Domains
  2. Click "Add Custom Domain" and enter your domain
  3. Add the provided CNAME record at your DNS provider
  4. Wait for DNS propagation and SSL certificate issuance (usually 5-30 minutes)

Example DNS Record

Type: CNAME
Name: app
Value: one10-global-ai.onrender.com

Troubleshooting

Common issues and how to resolve them.

Deploy failed: Build error

Check the Render deploy logs for specific errors. Common causes:

  • Missing dependencies in package.json
  • Syntax errors in server.js or migrations
  • Invalid environment variable format

Database connection failed

Verify:

  • DATABASE_URL environment variable is set correctly
  • SSL mode is enabled (sslmode=require)
  • Neon project is not suspended (free tier auto-suspends)
  • IP is not blocked (if using connection pooling)

Health check failing

The /health endpoint must return 200. Check:

  • Server is binding to correct PORT (process.env.PORT)
  • No startup errors blocking the Express app
  • Health check endpoint exists in server.js
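If the endpoint is missing, it only needs to return a 200. This is a hypothetical minimal sketch; the handler in the actual server.js may look different:

```javascript
// Hypothetical minimal health-check handler; Render only needs a 200 response.
function healthHandler(req, res) {
  res.status(200).json({ status: 'ok' });
}

// Wired up in Express:
// app.get('/health', healthHandler);
```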

Migration failed

If a migration fails:

  • Check the _migrations table to see which migrations have run
  • Fix the SQL in the failed migration file
  • Keep migrations idempotent: use CREATE TABLE IF NOT EXISTS
  • Never edit already-applied migrations

AI features not working

Verify API keys are configured:

  • ANTHROPIC_API_KEY for Claude features
  • POLSIA_API_KEY for proxied services
  • Check browser console for 401/403 errors

Forecast Lifecycle Specification

Complete documentation for the PlanXpress forecasting system covering Draft/Published workflows, accuracy tracking, and real-time operations.

Guiding Principle

"AI owns baseline, humans own exceptions." — The forecasting engine generates statistically optimal baselines. Humans layer business context on top — events, promotions, operational constraints. Neither replaces the other.

1. Core Concepts

Draft vs Published Forecasts

State | Description | Visibility | Editable
Draft | Working forecast, regenerated nightly | Planners only | Yes
Published | Locked plan-of-record | Everyone | No

Key Behaviors

  • Draft forecasts are always the latest ML output
  • Publishing creates an immutable snapshot
  • Published forecast = plan-of-record for scheduling

Version Format

v{YYYYMMDD}.{sequence}

  • v20260116.1 — first publish on Jan 16
  • v20260116.2 — republished the same day
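A helper for this version scheme could look like the following sketch (illustrative only, not the platform's actual code; `publishesToday` is the count of versions already published that day):

```javascript
// Illustrative: build the next v{YYYYMMDD}.{sequence} version string.
function nextVersion(date, publishesToday) {
  const ymd = date.toISOString().slice(0, 10).replace(/-/g, ''); // "20260116"
  return `v${ymd}.${publishesToday + 1}`;
}
```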

Scheduling Binding Rules

  • Binding: Schedules reference the published forecast version
  • Immutability: Republishing does NOT auto-update existing schedules
  • Reconciliation: System flags schedule-forecast drift when versions differ
  • Override: Users can manually rebind schedules to new forecast version

2. Daily Cadence

Auto-Regeneration (Nightly)

Every night at 02:00 local time:

  1. Pulls latest actuals (yesterday's data)
  2. Regenerates draft for next 14 days
  3. Compares to currently published forecast
  4. Flags significant changes

Republish Triggers

Recommendation triggered when ANY threshold exceeded:

  • Volume change: ≥ 5% daily total vs published
  • Peak change: ≥ 7% peak interval vs published
  • Persistent bias: 3+ consecutive days off
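The three thresholds above combine with OR semantics. A sketch of the check, with illustrative field names (the real system's data shapes are not documented here):

```javascript
// Recommend a republish when ANY threshold is exceeded.
// draft/published carry aggregate daily totals and peak-interval volume;
// persistentBiasDays is the count of consecutive days of one-sided error.
function shouldRecommendRepublish(draft, published, persistentBiasDays) {
  const volumeChange =
    Math.abs(draft.dailyTotal - published.dailyTotal) / published.dailyTotal;
  const peakChange = Math.abs(draft.peak - published.peak) / published.peak;
  return volumeChange >= 0.05 || peakChange >= 0.07 || persistentBiasDays >= 3;
}
```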

Example Republish Notification

⚠️ Republish Recommended for Acme Support

Draft forecast shows +6.2% daily volume vs published v20260114.1

Changes detected:
• Mon-Wed peak intervals shifted from 10:00-11:00 to 11:00-12:00
• Thursday volume increased 12% (marketing campaign effect?)

Actions: [Review Draft] [Publish Now] [Dismiss]

3. Locked Forecast Stability

Plan-of-Record (POR)

The published forecast is the plan-of-record used for:

  • Schedule generation
  • Capacity planning
  • Budget/headcount decisions

Does NOT change unless explicitly republished.

Now View (Real-Time Operations)

Aspect | Plan-of-Record | Now View
Purpose | Planning & scheduling | Real-time operations
Updates | Only on republish | Every 15 minutes
Used by | Workforce planners | Supervisors, RTA
Stability | Locked | Dynamic

Now View Does NOT:

  • Change the published forecast
  • Auto-adjust scheduled headcount
  • Trigger automatic staffing changes

Now View DOES:

  • Provide situational awareness
  • Recommend manual interventions
  • Track intraday accuracy for learning

4. User Adjustments

Event Windows

Time periods where normal patterns don't apply — events override ML baseline.

{
  event_name: "Black Friday Sale",
  event_type: "promotional",           // promotional, operational, external
  start_date: "2026-11-27",
  end_date: "2026-11-30",
  impact_multiplier: 1.35,             // +35% expected volume
  intervals_affected: "all",           // or specific intervals
  notes: "Annual sale, expect 30-40% volume increase"
}

HOOP Changes (Hours of Operation)

  1. User updates HOOP in business unit settings
  2. System recalculates affected intervals
  3. Draft forecast regenerates with new HOOP
  4. User must republish to update POR

Scenario Sliders with Guardrails

Slider | Range | Guardrails
Overall volume | -20% to +30% | Warning at extremes
Peak multiplier | 0.8x to 1.5x | Capped to prevent unrealistic shapes
Trend strength | 0.5x to 1.5x | Cannot invert trend direction

5. Trend Damping

Raw ML models can overreact to recent changes. Trend damping smooths predictions to reduce volatility while preserving genuine shifts.

Short-Term (1-13 Weeks)

  • Rapid growth/decline (>15% week-over-week): 70% trend, 30% historical mean
  • Stable pattern: 90% trend, 10% historical mean
  • New pattern: gradual increase over 4 weeks

Long-Term (3-18 Months)

  • 0-3 months: 85% trend, 15% mean reversion
  • 3-6 months: 70% trend, 30% mean reversion
  • 6-12 months: 50% trend, 50% mean reversion
  • 12-18 months: 30% trend, 70% mean reversion

Damping Decay Curve

trend_influence = 0.85 × exp(-0.12 × months_ahead)
mean_reversion = 1 - trend_influence
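The decay curve translates directly to code. This sketch transcribes the two formulas above; the bucketed long-term weights are approximate checkpoints rather than exact values of this curve:

```javascript
// Trend influence decays exponentially with forecast horizon.
function trendInfluence(monthsAhead) {
  return 0.85 * Math.exp(-0.12 * monthsAhead);
}

// The remainder is pulled back toward the historical mean.
function meanReversion(monthsAhead) {
  return 1 - trendInfluence(monthsAhead);
}
```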

6. Accuracy Tracking (WAPE)

WAPE Formula

WAPE = Σ|Actual - Forecast| / Σ|Actual| × 100

Why WAPE over MAPE: Handles zero-volume intervals, weights by volume, industry standard for WFM.
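The formula is straightforward to compute over any set of intervals. A minimal sketch:

```javascript
// WAPE = sum(|actual - forecast|) / sum(|actual|) * 100
// Returns null when total actual volume is zero (the metric is undefined).
function wape(actuals, forecasts) {
  let num = 0;
  let den = 0;
  for (let i = 0; i < actuals.length; i++) {
    num += Math.abs(actuals[i] - forecasts[i]);
    den += Math.abs(actuals[i]);
  }
  return den === 0 ? null : (num / den) * 100;
}
```

Note that unlike MAPE, zero-volume intervals contribute to the numerator without causing a division by zero.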

Health Labels

WAPE Range | Label | Interpretation
0-5% | Excellent | No action needed
5-10% | Good | Normal operating range
10-15% | Fair | Monitor for patterns
15%+ | Needs Review | Investigate root cause

Baseline vs Published Comparison

Scenario | Interpretation
Published WAPE < Baseline WAPE | Human adjustments improved accuracy
Published WAPE > Baseline WAPE | Human adjustments degraded accuracy
Both high | Model needs retraining or data quality issue
Both low | System performing well

7. Holiday Type Classification (PlanXpress™)

The holiday library uses a Type A/B/C classification system to categorize 27 US holidays by demand impact.

Holiday Types

Type | Name | Description | Holiday Day Factor
Type A | Closure/Low | Most offices closed, minimal demand | 0.15 (85% reduction)
Type B | High Demand | Shopping/transaction holidays, high volume | 1.45 (45% increase)
Type C | Observed/Partial | Some closures, mixed impact | 0.60 (40% reduction)

D-3…D+7 Impact Envelope

Every holiday uses an 11-day impact envelope: 3 days before (D-3 to D-1), holiday day (D), and 7 days after (D+1 to D+7).

Pre-Lag Decay Weights

  • D-3: 0.30 (earliest impact)
  • D-2: 0.50 (building impact)
  • D-1: 0.80 (nearly full impact)

Post-Lag Decay Weights

  • D+1: 1.00 → D+2: 0.60 → D+3: 0.35
  • D+4: 0.20 → D+5: 0.10
  • D+6: 0.05 → D+7: 0.02

US Holiday Library (27 Pre-Configured)

Type A - Closures

  • New Year's Day, Christmas Day
  • Christmas Eve, Thanksgiving
  • Independence Day, New Year's Eve

Type B - High Demand

  • Black Friday, Cyber Monday
  • Mother's/Father's Day, Valentine's
  • Amazon Prime Day, Tax Day

Type C - Partial

  • MLK Day, Presidents' Day
  • Memorial Day, Labor Day
  • Easter, Juneteenth

Confidence-Based Blending

Occurrences | Confidence | Default Weight | Learned Weight
0-1 | No Data / Very Low | 100% | 0%
2 | Low | 70% | 30%
3-4 | Medium | 30% | 70%
5+ | High | 10% | 90%

Auto-Trim Rule

Post-lag impact automatically stops when |Actual - Baseline| / Baseline ≤ 5% for 2 consecutive days.

8. Forecasting Sub-Navigation UI

The forecasting module uses a consistent 4-tab sub-navigation across all pages.

Tab | Page | Purpose
Overview | /dashboard.html | BU selector, version history, publish workflow
Accuracy | /accuracy.html | WAPE metrics, model tracking, bias visualization
Holidays | /forecasting-holidays.html | Per-BU holiday impact, Type A/B/C, overrides
Events & Adjustments | /forecasting-events.html | Event windows, campaign adjustments, recurring events

9. Bias Detection & Calibration

Bias Classification

Type | Definition | Action
Temporary | 1-2 days of consistent over/under | No automatic action
Persistent | 3+ consecutive days of bias | Triggers republish recommendation

Calibration Layers

Auto-calibration (Draft)

  • Automatically incorporates recent bias
  • Adjustment = rolling 7-day bias average
  • Maximum auto-adjustment: ±10%

Manual calibration (Published)

  • User reviews bias analysis
  • Can accept auto-calibration or override
  • Calibration persists until cleared

Calibration Waterfall

Final Forecast = ML Baseline
                 × Trend Adjustment
                 × Bias Calibration
                 × User Scenario Adjustments
                 × Event Window Multipliers
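The waterfall is a simple chain of multipliers. A sketch, with illustrative field names (the real pipeline's data shapes may differ):

```javascript
// Apply the calibration waterfall: each layer multiplies the ML baseline.
// Neutral layers have a value of 1.0.
function finalForecast(f) {
  return (
    f.mlBaseline *
    f.trendAdjustment *
    f.biasCalibration *
    f.scenarioAdjustment *
    f.eventMultiplier
  );
}
```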

10. Intraday Recommendations

Trigger Thresholds

Trigger | Threshold | Lookahead
Volume surge | Actual > Forecast + 15% | Next 2 hours
Volume drop | Actual < Forecast - 15% | Next 2 hours
Trend shift | Rolling 1hr trend diverges > 10% | Next 4 hours
Abandonment spike | > 2× normal rate | Immediate

Allowed Moves

Move Type | Allowed | Constraints
Extend shifts | +30 min to +2 hours | Requires agent consent, max OT budget
Early release | -30 min to -2 hours | Minimum coverage maintained
Skill reassignment | Yes | Agent must be qualified
Break rescheduling | ±30 minutes | Compliance rules respected
VTO offer | Yes | Based on coverage threshold

Moves NOT Recommended

  • Adding unscheduled agents (requires manual intervention)
  • Cross-site moves (different system)
  • Changes violating labor law

11. Deterministic Granularity Rules

Forecast granularity is determined by data quality, not user preference.

Granularity | Selection Criteria
30-minute | High data quality AND sufficient volume
60-minute | Low data quality OR insufficient volume

Selection Logic

IF data_completeness < 0.7:
  → Use 60-minute (insufficient data for finer grain)

ELSE IF avg_volume_per_30min < 10:
  → Use 60-minute (small sample noise dominates)

ELSE IF variance_stability = "unstable" AND history_depth < 60:
  → Use 60-minute (need more history to trust 30-min patterns)

ELSE:
  → Use 30-minute
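The selection logic above can be sketched directly in code; property names here are illustrative, mirroring the pseudocode:

```javascript
// Deterministic granularity selection (returns minutes per interval).
function selectGranularity(d) {
  if (d.dataCompleteness < 0.7) return 60;    // insufficient data for finer grain
  if (d.avgVolumePer30Min < 10) return 60;    // small-sample noise dominates
  if (d.varianceStability === 'unstable' && d.historyDepth < 60) return 60;
  return 30;
}
```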

Granularity Lock

  • Cannot be manually changed (deterministic)
  • Rechecked monthly during accuracy review
  • Change triggers a notification if granularity shifts
  • Historical data preserved at finest available grain

12. Output Formats

Core Prediction (per interval)

{
  date: "2026-01-16",
  interval_start: "09:00",
  interval_end: "09:30",               // or 10:00 for 60-min
  interval_index: 18,                  // 0-47 for 30-min, 0-23 for 60-min

  predicted_volume: 145,
  predicted_aht: 320,                  // Average handle time (seconds)
  predicted_occupancy: 0.82,           // Target occupancy

  confidence: {
    p50: { lower: 132, upper: 158 },
    p80: { lower: 118, upper: 172 },
    p95: { lower: 98, upper: 192 }
  }
}

Now View Output (intraday)

{
  timestamp: "2026-01-16T09:15:00Z",
  interval: "09:00-09:30",

  planned: {
    volume: 145,
    source: "v20260115.1"              // Published version
  },

  actual: {
    volume: 163,
    calls_in_queue: 8,
    current_asa: 24
  },

  projection: {
    rest_of_interval: 42,              // Expected additional calls
    rest_of_day_volume: 1240,
    rest_of_day_vs_plan: "+8.2%"
  },

  recommendation: {
    action: "extend_shifts",
    urgency: "moderate",
    details: "Consider extending 3 agents by 1 hour"
  }
}

Best Practices

Publish Workflow

  • Review draft comparison before publishing
  • Always add notes explaining the change
  • Consider impact on existing schedules
  • Notify stakeholders after significant changes

Accuracy Management

  • Track WAPE weekly, review monthly
  • Compare baseline vs published WAPE
  • Investigate Fridays with >10% WAPE
  • Document event impacts for future learning

Forecasting Pre-Processing Pipeline

Technical documentation for the 110 Global AI data pre-processing and BU classification logic.

Source Files

  • preprocess.js - data validation, cleaning, and imputation
  • forecasting.js - model selection and BU classification

1. Anomaly Detection

Anomalies are identified using hour-specific statistical analysis, accounting for natural variations in call volume throughout the day.

Methods Available

Method | Formula | Default Threshold
IQR (default) | value < Q1 - (threshold × IQR) or value > Q3 + (threshold × IQR) | 1.5
Z-Score | abs(z-score) > threshold | 1.5

Additional Detection Rules

Condition | Severity | Reason
call_count > mean × 10 | Severe | 10× spike, likely system event or campaign
call_count < 5 && mean > 50 | Severe | Near-zero when average is high, possible outage

IQR Detection Example

For hour 9am with stats { Q1: 50, Q3: 150, IQR: 100 } and threshold 1.5:

  • Lower bound: 50 - (1.5 × 100) = -100 → capped at 0
  • Upper bound: 150 + (1.5 × 100) = 300
  • Values outside [0, 300] are flagged as outliers
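The bound calculation in this example can be sketched as follows (capping the lower bound at zero, since call counts cannot be negative):

```javascript
// IQR outlier bounds, as in the 9am example above.
function iqrBounds(q1, q3, threshold = 1.5) {
  const iqr = q3 - q1;
  return {
    lower: Math.max(0, q1 - threshold * iqr), // call counts cannot be negative
    upper: q3 + threshold * iqr,
  };
}
```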

2. Anomaly Handling

Key behavior: Outliers are flagged, NOT removed. They remain in training data.

Severity Classification

Severity | IQR Trigger | Z-Score Trigger
Moderate | Value within 1 IQR beyond bounds | Z-score between threshold and 1.5× threshold
Severe | Value more than 1 IQR beyond bounds | Z-score > 1.5× threshold

Impact on Data Quality Score

accuracy_score = 100 × (1 - outlier_ratio × 0.5 - severe_outlier_ratio × 0.3)

3. Missing Data Imputation

Method | Description | Use Case
interpolate (default) | Linear interpolation between nearest values | Best for smooth data
hourly_mean | Historical mean for hour × day-type | Contact centers with strong intraday patterns
forward_fill | Use most recent non-missing value | Event-driven patterns
flag_only | No filling, just mark gaps | Debugging/analysis

Linear Interpolation Algorithm

// For each missing hour, fill with the midpoint of the nearest valid
// values before and after it (or the one that exists, else 0):
function interpolateMissing(series) {
  return series.map((value, i) => {
    if (value != null) return value;
    const prev = series.slice(0, i).reverse().find((v) => v != null);
    const next = series.slice(i + 1).find((v) => v != null);
    if (prev != null && next != null) return Math.round((prev + next) / 2);
    return prev ?? next ?? 0;
  });
}

4. BU Classification (Volatility)

Business Units are classified based on Coefficient of Variation (CV) calculated from daily call volumes.

Formula: CV = standard_deviation / mean

Classification Thresholds

Classification | CV Range | Volatility | Model Recommendation
Stable | CV ≤ 0.25 | Low | SARIMA, Holt-Winters
Balanced | 0.25 < CV ≤ 0.5 | Medium | Prophet, SARIMA
Volatile | CV > 0.5 | High | Gradient Boosting, Ensemble
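The CV computation and threshold check can be sketched as (a minimal version; the production classifier also weighs the additional factors below):

```javascript
// Classify a BU from its daily volumes using CV = standard deviation / mean.
function classifyBU(dailyVolumes) {
  const n = dailyVolumes.length;
  const mean = dailyVolumes.reduce((a, b) => a + b, 0) / n;
  const variance = dailyVolumes.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
  const cv = Math.sqrt(variance) / mean;
  if (cv <= 0.25) return 'stable';
  if (cv <= 0.5) return 'balanced';
  return 'volatile';
}
```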

Additional Classification Factors

Factor | Thresholds | Impact on Model Selection
Volume Category | tiny (<10/day), small (10-50), medium (50-500), high (500+) | Determines model complexity
Data Density | non_zero_intervals / total_intervals | < 30% triggers intermittent detection
Active Day Ratio | days_with_calls / total_days | < 70% triggers intermittent detection
Weekly Seasonality | Between-group variance / total variance | > 0.3 indicates strong seasonality

5. Model Selection Logic

The IntelligentAutoSelector uses a decision tree for automatic model selection.

IF not enough data (< 7 days):
  → Holt-Winters (most robust)
ELSE IF intermittent AND tiny volume:
  → Croston's Method
ELSE IF tiny OR small volume:
  IF poor history: → Holt-Winters
  ELSE: → SARIMA
ELSE IF medium volume:
  IF strong weekly seasonality: → SARIMA
  ELSE IF good history: → Prophet
  ELSE: → Holt-Winters
ELSE (high volume):
  IF high volatility: → Gradient Boosting
  ELSE IF long history: → Prophet
  ELSE: → Gradient Boosting
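The decision tree maps directly to code. A sketch with illustrative property names (the real IntelligentAutoSelector's inputs are not documented here):

```javascript
// Pick a model from data-profile flags, following the decision tree above.
function selectModel(p) {
  if (p.days < 7) return 'holt_winters';                  // too little data
  if (p.intermittent && p.volume === 'tiny') return 'crostons';
  if (p.volume === 'tiny' || p.volume === 'small') {
    return p.poorHistory ? 'holt_winters' : 'sarima';
  }
  if (p.volume === 'medium') {
    if (p.strongWeeklySeasonality) return 'sarima';
    return p.goodHistory ? 'prophet' : 'holt_winters';
  }
  // high volume
  if (p.highVolatility) return 'gradient_boosting';
  return p.longHistory ? 'prophet' : 'gradient_boosting';
}
```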

Simplified Model Names (Enterprise Feature)

Technical Name | Simple Display | Detailed Display
crostons | Low Volume | Croston's Method (Intermittent Demand)
holt_winters | Standard | Holt-Winters (Exponential Smoothing)
sarima | Seasonal | SARIMA (Seasonal Auto-Regression)
prophet | Advanced | Prophet-style (Trend + Seasonality)
gradient_boosting | ML-Based | Gradient Boosting (Machine Learning)
ensemble | Combined | Ensemble (Multi-Model Blend)

6. Data Quality Score

The preprocessing pipeline outputs a composite quality score.

Component | Weight | Calculation
Completeness | 35% | Valid row ratio (70%) + non-missing ratio (30%)
Accuracy | 30% | 100 - (outlier_ratio × 50) - (severe_outliers × 30)
Consistency | 20% | 100 - duplicate_ratio × 100
Timeliness | 15% | 100 - (days_since_latest_data × 5)
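The composite is a weighted sum of the four component scores. A sketch (components on a 0-100 scale):

```javascript
// Weighted composite quality score using the table's weights.
function qualityScore(c) {
  return (
    0.35 * c.completeness +
    0.30 * c.accuracy +
    0.20 * c.consistency +
    0.15 * c.timeliness
  );
}
```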

7. Holiday Impact

Multi-country holiday calendars that affect forecasting.

Supported Countries (17 Total)

Region | Countries
North America | US (United States), CA (Canada), MX (Mexico)
Central America | GT, HN, SV, CR, NI, PA
South America | CO, PE, CL, AR, EC, VE
Europe | GB (United Kingdom), ES (Spain)

Static Impact Factors (Default)

Distance from Holiday | Factor Applied
On holiday | 0.3 (70% reduction)
1 day before | 0.6-0.9 (gradual)
1 day after | 0.5-0.9 (gradual)

Quick Reference - Configuration Options

preprocessData(rawData, {
  missingValueMethod: 'interpolate',  // 'interpolate' | 'hourly_mean' | 'forward_fill' | 'flag_only'
  outlierMethod: 'iqr',               // 'iqr' | 'zscore'
  outlierThreshold: 1.5,              // Sensitivity (higher = fewer outliers)
  normalizeIntervals: true,           // Convert to 30-min intervals
  includeFeatures: true,              // Add ML features (lag, rolling avg, etc.)
  timezone: 'UTC'                     // Source timezone
});

Professional Scheduling Engine

WFM-grade scheduling engine with shift start optimization, wave-based scheduling, and intelligent relief placement.

Key Capabilities

Erlang A/C Models, 30/15-min Intervals, Wave-Based Starts, Smart Relief Scheduling, Constraint Relaxation, and Exception Flagging.

Shift Type Configurations (15-min Intervals)

5x8 Shifts (Standard 8-Hour)

Length: 8 hours (32 intervals) | Total relief: 60 min

Relief Duration Target Range Hard Bounds
Meal 30 min (2 intervals) Intervals 14-18 (3.5-4.5 hrs) Start ≥8, End ≤20
Break 1 15 min (1 interval) Intervals 6-9 (1.5-2.25 hrs) beforeInterval: 10
Break 2 15 min (1 interval) Intervals 23-27 (5.75-6.75 hrs) afterInterval: 20

4x10 Shifts (Extended 10-Hour)

Length: 10 hours (40 intervals) | Total relief: 70 min | Requires strict continuous work compliance

Relief Duration Target Range Hard Bounds
Meal 30 min (2 intervals) Intervals 18-22 (4.5-5.5 hrs) Start ≥12, End ≤28
Break 1 20 min (2 intervals) Intervals 8-10 (2-2.5 hrs) maxInterval: 10
Break 2 20 min (2 intervals) Intervals 29-34 (7.25-8.5 hrs) minInterval: 29, beforeInterval: 34

4x10 Continuous Work Compliance Formulas

  • Break1→Meal gap: ≤10 intervals (150 min) — meal.start ≤ break1.maxEnd + 10
  • Meal→Break2 gap: ≤10 intervals — break2.start ≤ meal.end + 10
  • Break2→End gap: ≤10 intervals — break2.end ≥ shiftLength - 10

1. Wave-Based Scheduling

Agents are grouped into waves (A, B, C, D): cohorts that start within a defined time window for easier supervision.

Max Waves: 4 (Wave A through D)
Wave Spread: 60 minutes max within each wave

Output Format

{
  waves: [
    { label: "Wave A", startTime: "08:00", agentCount: 12 },
    { label: "Wave B", startTime: "11:30", agentCount: 8 }
  ]
}

2. Team Cohesion Rules

Wide spread in start times creates supervision challenges. The engine enforces cohesion limits.

Metric | Default Limit | Strict Mode
Max Spread | 30 minutes (acceptable) | 15 minutes (preferred)
Max Distinct Starts | 6 per team | -
Preferred Distinct Starts | 3-5 per team | -

Auto Team Splitting: When cohesion limits are exceeded, teams are automatically split into "Early Team" and "Late Team" sub-groups.

3. No-Start Windows

By default, no shifts can start during overnight hours to protect worker wellbeing.

Window Start: 00:00 (midnight)
Window End: 05:00 (5 AM)
24/7 Mode: available (disabled by default)

4. Relief & Break Scheduling

Intelligent break and meal placement using 15-minute interval granularity.

Universal Rules

Rule | Value | Purpose
No relief first/last | 60 minutes | Settle-in and wrap-up time
Max continuous work | 150 minutes (2.5 hrs) | Worker wellbeing
Min relief spacing | 60 minutes | Adequate recovery

Slot Scoring Weights

  • Demand Avoidance: 40%
  • Target Proximity: 35%
  • Staffing Impact: 25%

5. Automatic Constraint Relaxation

When optimal placement isn't possible, constraints are intelligently relaxed in order.

Relief Scheduling Relaxation Levels

Level | Constant | Action
0 | NONE | Standard rules apply
1 | TARGET_EXPAND | Allow ±2 intervals from target window
2 | MODERATE_DEMAND | Allow moderate-demand intervals for relief
3 | CONCURRENCY | Temporary concurrency override (last resort)

Shift Start Time Relaxation Levels

Level | Constant | Action
0 | NONE | Standard 30-min increments
1 | ALLOW_15_MIN | Use 15-min increments
2 | EXPAND_DISTINCT | Allow up to 8 distinct starts
3 | EXPAND_SPREAD | Allow up to 60-min spread (absolute max)
4 | TEAM_SPLIT | Split into sub-teams

Immutable Constraints (NEVER Relaxed)

These constraints are enforced for labor law compliance and cannot be relaxed:

Constraint | Constant | Enforcement
No relief first 60 min | noReliefFirstIntervals | isInWorkingZone()
No relief last 60 min | noReliefLastIntervals | isInWorkingZone()
Relief duration | reliefDuration | SHIFT_CONFIGS
Max continuous work (150 min) | maxContinuousWork | validateImmutableConstraints()

Immutable Validation Function

validateImmutableConstraints(schedule)
// Returns: { valid: boolean, errors: string[] }

// If validation fails:
// - schedule.isValid forced to false
// - Violations added to schedule.violations[]
// - Logged with severity: 'critical'

6. Erlang A/C Models

Industry-standard formulas for calculating agent requirements.

Erlang C (No Abandonment)

Assumes customers wait indefinitely. Standard formula for most contact centers.

Erlang A (With Abandonment)

Accounts for customer abandonment. More realistic for high-volume centers.

Traffic Intensity Calculation

trafficIntensity = (callsPerInterval * ahtSeconds) / intervalSeconds
// Example: (50 calls * 300 sec AHT) / 1800 sec = 8.33 Erlangs
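As an illustration, a textbook Erlang C implementation built on that traffic intensity might look like this. It is a sketch, not the engine's actual code, and it assumes agents > traffic (occupancy below 100%); the Erlang A variant additionally models abandonment:

```javascript
// Probability that a call has to wait (Erlang C), via the standard
// Erlang B recursion computed in inverse form for numerical stability.
function erlangC(traffic, agents) {
  let invB = 1; // 1 / ErlangB with 0 agents
  for (let k = 1; k <= agents; k++) {
    invB = 1 + (invB * k) / traffic;
  }
  const erlangB = 1 / invB;
  const rho = traffic / agents; // occupancy
  return erlangB / (1 - rho + rho * erlangB);
}

// Smallest agent count meeting a service-level target, e.g. 80% of calls
// answered within 20 seconds at 300s AHT.
function requiredAgents(traffic, ahtSec, targetSL, answerTimeSec) {
  for (let n = Math.floor(traffic) + 1; ; n++) {
    const pw = erlangC(traffic, n);
    const sl = 1 - pw * Math.exp(-((n - traffic) * answerTimeSec) / ahtSec);
    if (sl >= targetSL) return n;
  }
}
```

Shrinkage is typically applied afterwards, e.g. dividing the result by (1 - shrinkage) to get scheduled headcount.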

Schedule Generation API

POST /api/schedule/generate

{
  "business_unit_id": 42,
  "date": "2026-01-20",
  "erlang_model": "erlangA",
  "service_level_target": 0.80,
  "target_answer_time": 20,
  "shrinkage": 0.30,
  "shift_type": "5x8"
}

Best Practices

Team Cohesion Guidelines

  • Small team (<10): 2-3 distinct starts
  • Medium team (10-30): 3-4 distinct starts
  • Large team (30+): Accept splits if needed

Relief Priority Order

  1. Never violate the continuous work rule
  2. Place meals in the target window
  3. Space breaks evenly
  4. Avoid peak demand intervals

Full documentation: SCHEDULING.md

Payroll Export

Generate ADP-compatible CSV exports for biweekly payroll processing. Converts approved time exceptions into standardized files compatible with major payroll systems.

Supported Payroll Systems

ADP, Paychex, UKG (Kronos), Workday, and Gusto — all supported through a standardized CSV format with configurable mappings.

Key Features

Pay Period Management

  • Weekly, biweekly, semi-monthly, monthly
  • Bulk period generation (up to 26 at once)
  • Status tracking: Open → Processing → Exported → Closed

Time Rounding

  • 1, 5, 6, 10, or 15-minute increments
  • Direction: nearest, up, or down
  • ADP default: 6-minute (tenth-hour)
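As a sketch of how such rounding behaves (illustrative only, not the exporter's actual code):

```javascript
// Round a punch duration in minutes to an increment; 6-minute increments
// correspond to ADP's tenth-hour convention.
function roundMinutes(minutes, increment = 6, direction = 'nearest') {
  const ratio = minutes / increment;
  const units =
    direction === 'up' ? Math.ceil(ratio)
    : direction === 'down' ? Math.floor(ratio)
    : Math.round(ratio);
  return units * increment;
}
```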

Employee Mapping

  • Map to payroll system employee IDs
  • ADP File Number support
  • Cost center and location codes

Validation & Audit

  • Pre-export validation catches errors
  • Complete export history with re-export
  • Audit trail for compliance

Earnings Codes

Code | Description | Maps From | Type
VAC | Vacation/PTO | PTO exception | Paid Leave
SICK | Sick Leave | SICK exception | Paid Leave
HOL | Holiday Pay | Holiday calendar | Paid Leave
TRN | Training | TRAINING exception | Regular
MTG | Meeting | MEETING exception | Regular
UNP | Unpaid Leave | UNPAID_LEAVE | Unpaid

Export Workflow

  1. Configure Pay Periods: set start/end dates, pay date, and period type (biweekly recommended)
  2. Map Employee IDs: link each employee to their payroll system identifier
  3. Validate Export: a pre-flight check catches missing IDs and unmapped codes
  4. Generate & Download: export the CSV and import it into your payroll system

CSV Output Format

EmployeeID,FileNumber,FirstName,LastName,PayPeriodStartDate,PayPeriodEndDate,EarningsCode,Hours,Department,CostCenter,Location,Memo
10045,ABC123,"John","Smith",2026-01-06,2026-01-19,VAC,16.00,"Customer Service",CC-001,NYC-01,"Annual leave"
10045,ABC123,"John","Smith",2026-01-06,2026-01-19,SICK,8.00,"Customer Service",CC-001,NYC-01,"Doctor appointment"
10046,ABC124,"Jane","Doe",2026-01-06,2026-01-19,VAC,24.00,"Sales",CC-002,NYC-02,"Holiday travel"

Full documentation: PAYROLL_EXPORT.md — Complete API reference, database schema, and troubleshooting guide