Effective January 1, 2026, CARF accreditation standards require behavioral health organizations to document a Measurement-Informed Care procedure — including how outcome data is collected, analyzed, and used in treatment decisions. For most accredited organizations, this requirement immediately exposes a data problem that has been accumulating for years: the systems that hold the relevant data were never built to work together.
This is not a CARF-specific problem. The same gap shows up across every major reporting obligation in behavioral health — CCBHC quality measure submissions, SAMHSA block grant tables, Medicaid managed care outcomes reporting, and 42 CFR Part 2 compliance tracking. The organizations carrying the heaviest compliance reporting burden in this sector typically have the least data infrastructure to meet it. The reason is architectural, not organizational.
A behavioral health organization with 100 to 500 staff typically operates five or six disconnected systems: a clinical EHR, a billing platform, an HR and payroll system, a separate outcome measurement tool, a finance and GL system, and a collection of spreadsheets for grant and program tracking. Each was built to do its specific job. None was built to produce the cross-system, cross-period, program-level reports that compliance, reimbursement, and operational decision-making now require.
The Five Reports Your EHR Cannot Produce
These are the five reporting gaps we see most consistently across behavioral health organizations — the reports leadership needs but cannot get from their EHR, regardless of which platform they run.
1. Payer Mix by Program
Medicaid-funded services, grant-funded services, and commercial insurance carry different cost structures, compliance requirements, and documentation standards. Managing them requires visibility into payer mix at the program and service-line level — not just aggregate totals across the organization. SAMHSA's own Uniform Reporting System documentation acknowledges this: in multi-payer environments where organizations cannot reliably attribute funding source, data defaults to catch-all categories that obscure what leadership actually needs to see. EHR billing modules record transactions but are not designed to produce program-level payer breakdowns on demand. Getting this report today typically means exporting multiple files and reconciling them manually.
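For illustration, here is a minimal pandas sketch of the reconciliation an analyst performs by hand today, assuming a billing export with client_id, payer, and billed_amount columns and a roster that maps clients to programs; every file and column name here is hypothetical.

```python
import pandas as pd

# Illustrative exports: every billing platform labels these fields
# differently, so treat the names as placeholders, not a spec.
claims = pd.read_csv("billing_export.csv")    # one row per claim line
programs = pd.read_csv("program_roster.csv")  # maps each client to a program

# Attribute each claim line to a program, then pivot billed dollars by payer.
merged = claims.merge(programs, on="client_id", how="left")
payer_mix = merged.pivot_table(
    index="program",
    columns="payer",        # e.g. Medicaid, commercial, grant
    values="billed_amount",
    aggfunc="sum",
    fill_value=0,
)
print(payer_mix)
```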
2. Staff Productivity and Caseload Analysis
Understanding whether clinical staff are operating at appropriate caseload levels requires joining two systems that almost never talk to each other: the EHR (which holds session counts, caseload assignments, and service dates) and the HR/payroll system (which holds hours worked, credential levels, compensation, and tenure). No major behavioral health EHR performs this join natively. Organizations that track productivity do it through parallel spreadsheets, updated inconsistently and reconciled by hand. The result: staffing decisions — among the most consequential operational decisions in behavioral health — are routinely made without reliable data.
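As a sketch of that join, assuming monthly exports from both systems with illustrative column names:

```python
import pandas as pd

# Hypothetical exports; neither format is standard across vendors.
sessions = pd.read_csv("ehr_sessions.csv")   # clinician_id, session_date
payroll = pd.read_csv("payroll_hours.csv")   # clinician_id, pay_date, hours_worked

# Sessions delivered per clinician per month, from the EHR side.
sessions["month"] = pd.to_datetime(sessions["session_date"]).dt.to_period("M")
counts = sessions.groupby(["clinician_id", "month"]).size().rename("sessions")

# Hours worked per clinician per month, from the payroll side.
payroll["month"] = pd.to_datetime(payroll["pay_date"]).dt.to_period("M")
hours = payroll.groupby(["clinician_id", "month"])["hours_worked"].sum()

# The join no major behavioral health EHR performs natively.
productivity = pd.concat([counts, hours], axis=1)
productivity["sessions_per_hour"] = (
    productivity["sessions"] / productivity["hours_worked"]
)
```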
3. Grant Deliverable Tracking
CCBHC demonstration programs require reporting on 17 or more quality measures — depression screening rates, opioid use disorder treatment initiation, patient experience scores, physical health co-morbidity management — on a calendar-year basis. Grant-funded programs more broadly face a version of the same problem: performance periods that do not align with fiscal years, EHR reporting defaults, or CMS configuration periods. Pulling grant deliverable reports requires custom date filtering, cross-system joins, and often manual reconstruction of client populations the EHR cannot reproduce accurately after the fact. Organizations that have been through a grant audit know this problem by name.
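To make the date problem concrete, a small sketch of filtering a service export to a performance period that crosses fiscal years, with a hypothetical funding code:

```python
import pandas as pd

# Illustrative export and column names. The performance period below
# deliberately ignores fiscal-year and calendar-year boundaries.
services = pd.read_csv("service_export.csv", parse_dates=["service_date"])

period_start = pd.Timestamp("2024-09-30")  # e.g. a federal award start date
period_end = pd.Timestamp("2025-09-29")

in_period = services[
    services["service_date"].between(period_start, period_end)
    & (services["funding_source"] == "GRANT_X")  # hypothetical funding code
]
# Unduplicated client count for the deliverable report.
print(in_period["client_id"].nunique())
```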
4. Denial Pattern Analysis
Behavioral health practices lose between 10 and 20 percent of potential revenue to preventable billing mistakes and claim denials. The top denial reasons — lack of medical necessity at 51 percent and inadequate documentation at 32 percent — are correctable, but only if an organization has visibility into which programs, payers, and service types are generating them. Most do not. Fewer than one percent of denied behavioral health claims are appealed — not because organizations do not want to pursue them, but because they cannot see which claims are worth pursuing. The billing data exists. It is locked in a transaction-processing module that was not built for pattern analysis.
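For illustration, the pattern analysis itself is simple once the data is accessible; a sketch assuming a denial export with hypothetical field names:

```python
import pandas as pd

# Assumed export: one row per denied claim line. Field names vary by
# clearinghouse and billing platform; these are placeholders.
denials = pd.read_csv("denials_export.csv")

# Which program / payer / denial-reason combinations cost the most,
# and therefore which appeals are worth pursuing.
patterns = (
    denials.groupby(["program", "payer", "denial_reason"])["billed_amount"]
    .agg(denied_dollars="sum", claim_count="count")
    .sort_values("denied_dollars", ascending=False)
)
print(patterns.head(10))
```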
5. Clinical Outcomes for Value-Based Contracts
Starting in 2024, all states must report the behavioral health measures in the CMS Adult Core Set. States pass those requirements to managed care organizations, which pass them to providers. Organizations without clean outcomes data cannot participate in value-based incentive arrangements — and in markets where outcome reporting is becoming standard, they struggle to compete for contract renewals. The architecture problem: outcome measurement instruments (PHQ-9, GAD-7, OQ measures) typically live in a separate platform from the clinical EHR, managed by a different team, and exported as CSV files with no reliable pipeline to analyze them. The scores exist. They are not analyzable alongside treatment data.
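A sketch of the missing pipeline step, assuming an outcomes-platform CSV and an EHR episode export with hypothetical column names:

```python
import pandas as pd

# Hypothetical exports from two systems that do not talk to each other.
scores = pd.read_csv("phq9_export.csv", parse_dates=["administered_on"])
episodes = pd.read_csv("ehr_episodes.csv")  # client_id, program, ...

# First-to-last PHQ-9 change per client: the score movement most
# value-based arrangements ask providers to demonstrate.
scores = scores.sort_values("administered_on")
first = scores.groupby("client_id")["total_score"].first()
last = scores.groupby("client_id")["total_score"].last()
change = (last - first).rename("phq9_change").reset_index()

# Join the change scores to treatment data so they are analyzable by program.
outcomes_by_program = (
    episodes.merge(change, on="client_id")
    .groupby("program")["phq9_change"]
    .mean()
)
print(outcomes_by_program)
```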
The Problem Is Not Your EHR
The EHR is not failing. The absence of a data layer underneath all of your systems is the problem.
EHR vendors sell analytics add-on modules. These are expensive, limited in scope, and still require SQL expertise or vendor professional services to configure non-standard reports. More technology on top of fragmented data does not fix fragmented data — it adds another system to maintain. The organizations that have moved past manual reporting did not do it by replacing their EHR or purchasing an enterprise analytics platform. They did it by deciding the problem was worth solving, then building the right infrastructure underneath what they already had.
What a Data Layer Actually Looks Like
Most behavioral health organizations already have everything they need. The EHR exports data. The payroll system exports data. The GL exports data. The outcome measurement platform exports data. The missing piece is a lightweight warehouse layer that normalizes those exports into a common structure and a BI tool configured to ask the right questions across all of them.
In practice, this means:
- A structured pipeline that pulls from each source system on a scheduled basis
- A normalized data model that creates consistent definitions across systems — a "client" is the same entity in the EHR, billing, and outcomes data
- A reporting layer, typically Power BI or a similar platform, that surfaces the five report categories above without manual compilation
- Role-based access controls that satisfy HIPAA requirements from day one
This is not a major technology project. Organizations do not need a data engineering team or a new EHR. They need a structured implementation that takes their existing exports and connects them. Most engagements of this scope complete in weeks, not quarters.
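As one illustration of the normalization step, a minimal sketch assuming each system exports CSV and a crosswalk table maps each system's client identifier to a single canonical key; every name here is hypothetical:

```python
import pandas as pd

# Assumed crosswalk: one row per client, mapping each system's identifier
# to a canonical client_key. Building this table is the real work.
crosswalk = pd.read_csv("client_crosswalk.csv")
# columns: ehr_id, billing_id, outcomes_id, client_key

def normalize(df: pd.DataFrame, source_id_col: str) -> pd.DataFrame:
    """Swap a source system's client identifier for the canonical key."""
    mapped = df.merge(
        crosswalk[[source_id_col, "client_key"]], on=source_id_col, how="left"
    )
    return mapped.drop(columns=[source_id_col])

ehr = normalize(pd.read_csv("ehr_export.csv"), "ehr_id")
billing = normalize(pd.read_csv("billing_export.csv"), "billing_id")
outcomes = normalize(pd.read_csv("outcomes_export.csv"), "outcomes_id")

# Downstream reports now join on client_key instead of reconciling
# three different identifiers by hand.
```

The scheduling, access controls, and BI layer sit on top of this, but the canonical key is what makes every downstream join automatic rather than manual.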
What Changes When It's in Place
Earlier this year we completed an engagement with a behavioral health organization operating 90+ service locations across multiple regions. Their leadership team was producing reports across all five categories above — but manually, pulling from three disconnected systems every week.
The billing summary report alone accounted for a projected 570 staff-hours per month. Regional Director reporting — compiling service unit counts, productivity metrics, and census data for program leadership — added another 388 hours. Census reporting contributed 236 hours. Those three processes alone sum to 1,194 staff-hours; across all five tracked processes, projected manual reporting time exceeded 1,194 hours per month.
After building the unified analytics layer, all five categories became automated. Projected annual savings in staff time: $736,920, based on average hourly cost. More durably: Regional Directors stopped waiting for reports and started using dashboards. In a post-implementation review, one regional leader said: "If the hours-worked numbers are accurate, this is a game changer."
That conditional — "if" — is what we encounter in every engagement before the data layer is in place. Building it does not just save time. It changes whether leadership trusts and uses the data.
What to Do Next
If your organization produces any of the five report categories above through manual compilation, the useful starting point is an audit: what your current stack can and cannot produce automatically, what data exists but is siloed, and what a realistic consolidation would require.
We have built this for behavioral health organizations. We know the EHR systems, the payroll integrations, the GL structures, and the compliance reporting requirements. Read the full case study to see how it worked in practice, or start with an audit assessment to understand what it would look like for yours.
