Task-Boundary Mode: How task-instance boundaries are drawn from the event stream. Applies to every Task SoP, Step SoP, and Variants view.
By Case Type: SAE / SUSAR / Initial / Follow-up comparison
Benchmark Measurement: This finding is an observation that informs strategy, not a direct savings opportunity. It measures workforce behavior and AI readiness to guide where the savings-producing findings will have the greatest impact.

Description & Data Evidence

Case cycle times (wall-clock) vary significantly by type: 'Initial AE' averages 39h vs 'Unknown' at roughly 0h (a 6744.4x ratio, which puts the 'Unknown' average at well under a minute). Overall: avg 25h (median 1h; the gap between mean and median indicates a long tail of slow cases). Focused interaction (dwell) accounts for 0.0% of wall-clock (avg 0.0h dwell in 25h elapsed per case). Note: dwell is captured only for click/browser events (~19% of events); task-mining events contribute to event counts but not dwell.
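The per-type figures above can be sketched as follows. This is a minimal, stdlib-only illustration of how wall-clock cycle time (last event minus first event per case) aggregates by case type; the event tuples and field layout are hypothetical, not the platform's actual schema.

```python
from statistics import mean, median

# Hypothetical event records: (case_id, case_type, timestamp_hours).
# Cycle time is wall-clock: last event minus first event per case.
events = [
    ("c1", "Initial AE", 0.0), ("c1", "Initial AE", 39.0),
    ("c2", "Unknown",    0.0), ("c2", "Unknown",     0.005),
]

def cycle_times_by_type(events):
    # Track (first_ts, last_ts, case_type) per case.
    spans = {}
    for case_id, case_type, ts in events:
        lo, hi, ct = spans.get(case_id, (ts, ts, case_type))
        spans[case_id] = (min(lo, ts), max(hi, ts), ct)
    # Collect elapsed spans per case type, then summarize.
    by_type = {}
    for lo, hi, ct in spans.values():
        by_type.setdefault(ct, []).append(hi - lo)
    return {ct: (mean(v), median(v)) for ct, v in by_type.items()}

stats = cycle_times_by_type(events)
print(stats["Initial AE"])  # (39.0, 39.0)
```

A ratio like the reported 6744.4x then falls out as the largest average divided by the smallest non-zero average.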

Self-Evaluation Scores

The platform grades each finding on four dimensions (1–5 scale). Low scores flag findings that need more data or clearer remediation before acceptance.

Overall 4/5
Actionability 3/5
Specificity 4/5
Remediation Alignment 3/5

Key Findings

  • 'Initial AE': 39h avg vs 'Unknown': ~0h avg (a 6744.4x ratio)
  • Overall: avg 25h cycle time (wall-clock), median 1h
  • Focused interaction: 0.0% of wall-clock (avg 0.0h dwell per case)
  • 960 cases analyzed across 9 case types
  • Cycle time = wall-clock elapsed; active = focused dwell (click/browser events only)
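The cycle-time vs dwell distinction in the last bullet can be made concrete with a small sketch. Field names and the sample numbers are illustrative only; the key point is that dwell sums over click/browser events alone, which is why the dwell fraction can be near zero even for long-elapsed cases.

```python
# Hypothetical events: (case_id, source, dwell_hours). Dwell is only
# recorded for click/browser events; task-mining rows carry no dwell.
events = [
    ("c1", "browser",     0.2),
    ("c1", "task_mining", None),
    ("c1", "click",       0.3),
]

WALL_CLOCK_HOURS = 25.0  # elapsed time for the case (from timestamps)

def dwell_fraction(events, wall_clock):
    # Sum focused dwell from click/browser events only.
    dwell = sum(d for _, src, d in events
                if src in ("click", "browser") and d is not None)
    return dwell / wall_clock

print(round(dwell_fraction(events, WALL_CLOCK_HOURS), 3))  # 0.02
```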

Remediation Ideas

  • Case-type-specific SLA targets based on complexity
  • Fast-track workflow for simpler case types
  • Predictive analytics to flag cases likely to exceed SLA
  • Standardized templates and checklists per case type to reduce variability
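The first and third ideas above (per-type SLA targets and flagging cases likely to breach them) could start as a simple rule before any predictive model. The thresholds and the warn-at fraction below are illustrative assumptions, not platform values.

```python
# Hypothetical per-case-type SLA budgets in hours; numbers are illustrative.
SLA_HOURS = {"Initial AE": 48, "Follow-up": 24}
DEFAULT_SLA = 36

def sla_risk(case_type, elapsed_hours, warn_at=0.8):
    """Flag a case once elapsed wall-clock time nears or exceeds its SLA budget."""
    budget = SLA_HOURS.get(case_type, DEFAULT_SLA)
    if elapsed_hours >= budget:
        return "breached"
    if elapsed_hours >= warn_at * budget:
        return "at_risk"
    return "ok"

print(sla_risk("Initial AE", 39.0))  # at_risk (39 >= 0.8 * 48)
```

A learned model could later replace the fixed `warn_at` cutoff with a per-type breach probability.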