On April 22, 2026, bhworks experienced a brief production incident affecting participant search. The issue was detected immediately, our disaster recovery procedure was executed, and search was fully restored in under 15 minutes.
Impact
- Duration: Under 15 minutes
- Affected functionality: Participant search
- Data integrity: No participant data was lost. The search index is a derived artifact and was rebuilt from authoritative source data.
- PHI exposure: None. This was an availability incident, not a security or privacy event.
What happened
During routine maintenance work, an operator issued a command intended for one of our pre-production testing environments. Because nothing in the operator's session visually distinguished one environment from another, the command executed against production instead. The affected component was the data index that powers participant search.
Response
Our disaster recovery playbook for this class of incident, which had been scheduled as a tabletop exercise on May 1, was executed as a live incident response. Search was restored from authoritative sources in under 15 minutes with no data loss.
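For illustration, the shape of that restore is simple: because the index is derived data, the playbook rebuilds it wholesale from the source of truth and swaps it in, rather than repairing the damaged copy in place. Below is a minimal Python sketch of that pattern; every name in it is hypothetical and none describes bhworks internals.

```python
# Illustrative sketch only. All names here (fetch_participants,
# SearchIndex, rebuild_index) are hypothetical and do not describe
# bhworks internals.
from dataclasses import dataclass, field


@dataclass
class SearchIndex:
    """Stand-in for a real search index: a derived artifact."""
    docs: dict = field(default_factory=dict)

    def add(self, doc_id, doc):
        self.docs[doc_id] = doc


def fetch_participants():
    """Stand-in for reading the authoritative source of truth."""
    yield {"id": "p-001", "name": "Alice Example"}
    yield {"id": "p-002", "name": "Bob Example"}


def rebuild_index() -> SearchIndex:
    # Build a brand-new index from source data rather than repairing
    # the damaged one in place; cutover is a single pointer swap.
    fresh = SearchIndex()
    for record in fetch_participants():
        fresh.add(record["id"], {"name": record["name"]})
    return fresh


live_index = rebuild_index()
print(f"rebuilt {len(live_index.docs)} search documents")
```

Because the rebuild reads only from authoritative data, this recovery path cannot lose participant records; at worst it takes time proportional to the size of the source data.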
Remediation
Within 24 hours of the incident we deployed the following safeguards to prevent recurrence:
- Clear environment indicators. Operator sessions now display unmistakable visual cues identifying the environment they are connected to, with production sessions flagged in a high-contrast warning color. These indicators persist across the full range of tools our engineers use for maintenance work. (A sketch of this pattern appears after this list.)
- Closed a newer bypass of our standard confirmation step. Normal production access requires operators to pass an MFA confirmation at the start of each session. A newer administrative entry path, adopted for operational convenience, still uses MFA but less frequently than the per-session model; that lower frequency meant the session-start confirmation was skipped on the day of the incident. We have added an equivalent type-to-confirm check to this path, so every route into a production session now goes through the same deliberate confirmation. (A second sketch after this list illustrates this gate.)
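To make the first safeguard concrete, here is a minimal Python sketch of an environment banner of the kind described above. The DEPLOY_ENV variable name, the colors, and the wording are assumptions for illustration, not our actual tooling.

```python
# Illustrative sketch only; the DEPLOY_ENV variable name and the exact
# styling are assumptions, not bhworks internals.
import os

RED = "\033[41;97;1m"    # bold white on red: production warning
GREEN = "\033[42;30m"    # black on green: pre-production
RESET = "\033[0m"


def environment_banner() -> str:
    """Return a banner string every operator tool can print at startup."""
    env = os.environ.get("DEPLOY_ENV", "unknown")
    if env == "production":
        return f"{RED} *** PRODUCTION: changes affect live data *** {RESET}"
    return f"{GREEN} {env} (pre-production) {RESET}"


print(environment_banner())
```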
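And a minimal sketch of the second safeguard: a type-to-confirm gate that runs on every route into a production session, regardless of which entry path opened it. Again, the names are hypothetical.

```python
# Illustrative sketch only; names are hypothetical, not bhworks internals.
import os
import sys


def confirm_production_entry() -> None:
    """Type-to-confirm gate run on every route into a production session."""
    env = os.environ.get("DEPLOY_ENV", "unknown")
    if env != "production":
        return  # pre-production sessions proceed without the extra gate
    typed = input("You are entering PRODUCTION. Type 'production' to continue: ")
    if typed.strip() != "production":
        print("Confirmation failed; session aborted.", file=sys.stderr)
        sys.exit(1)


confirm_production_entry()
```

The point of typing the word, rather than clicking through a prompt, is that it interrupts muscle memory: the operator must consciously name the environment before the session proceeds.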
What worked well
The disaster recovery playbook, developed in preparation for the May 1 exercise, performed as designed under real incident conditions. Time to resolution was well within our target recovery window.
Going forward
- The May 1 DR exercise will proceed, reframed around the lessons captured from this live execution and conducted with the full incident response team.
- We are reviewing adjacent operator tooling for similar environment-transparency gaps.
We take full responsibility for this incident and appreciate the confidence our clients place in mdlogix. If you have any questions, please reach out to your account representative.