How solving hardware limitations in software took a night-patrol robot from prototype to CE-assessable
10 min read | Case Study
Bearcover builds a 90cm-tall autonomous robot that patrols nursing homes and clinics at night, checking on patients through closed doors using UWB radar. It's a product that hooks you immediately: a robot that can detect human movement through walls, without cameras, without waking anyone up.
When I first got involved, the ask was straightforward. The company needed a CE compliance advisor. They had a working prototype, real deployments in care facilities, and a product that solved a genuine problem. What they needed was someone to guide them through the safety certification process so they could scale.
That's not what happened.
Within the first few weeks, it became clear that CE compliance wasn't a documentation problem. You can't write safety documents for a system that isn't architecturally ready for safe deployment. The robot worked, but the gap between "works in a demo" and "certified for unsupervised operation in a clinical environment" was wider than anyone had anticipated.
The state machine controlling patrol, scanning, and docking was monolithic. Sensor field of view was limited, leading to collisions and localization loss events. The wheels would slip on smooth nursing home floors, corrupting the odometry and causing the robot to lose its position. There was no systematic approach to safety that could satisfy a CE assessment.
So I joined the team. Not as an advisor writing documents from the outside, but as the engineering lead rebuilding the technical foundation from within.
The first thing I did was map the actual failure modes. Not the theoretical ones you list in a risk analysis, but the ones happening in the field.
The state machine was a single script. All patrol logic, scanning behavior, docking sequences, and failure handling lived in one monolithic file. When something went wrong during a patrol, diagnosing the cause meant reading through hundreds of lines of interleaved state transitions. Testing individual behaviors in isolation was impossible. Adding new features meant risking regressions everywhere.
The sensor field of view created blind spots. The robot had limited sensor coverage, and adding more sensors wasn't an option. Budget was tight, the mechanical design was fixed, and the timeline didn't allow for hardware redesign. But the limited FoV was causing real problems: the robot would collide with obstacles it couldn't see, or lose localization when it couldn't match enough features to the map.
Localization was unreliable on smooth floors. Nursing homes have long corridors with smooth, often polished floors. The Ubiquity Motors/MCB wheels would sometimes slip or spin, especially during turns. Every slip event corrupted the wheel odometry, and the robot would gradually drift from its believed position until it was effectively lost.
There was no localization health monitoring. The robot had no way to know it was lost until it hit something or failed to find its docking station. There was no path deviation check, no localization quality metric, no automatic recovery behavior.
This wasn't a case of bad engineering. It was a startup that had built something genuinely impressive with limited resources and was now hitting the ceiling of what that initial architecture could support. The path to CE wasn't more documents. It was more engineering.
The theme of every decision at Bearcover was the same: solve hardware limitations in software, because hardware changes aren't an option.
I designed and implemented a proper state machine architecture as a dedicated ROS package with custom state messages for operational control and state change services. The monolithic patrol script became separate controllers for patrol, scanning, and docking, with clean state transitions between them. Each behavior could be tested independently. Failure handling became explicit rather than buried in nested conditionals. State change services allowed the operations dashboard to command the robot remotely, something that was impossible when all the logic lived in a single tightly-coupled script.
This wasn't just about code quality. A CE assessor needs to understand your system's behavior in every state, including failure states. A monolithic script makes that nearly impossible; a well-structured state machine makes it auditable. Alongside the architecture refactor, I set up CI/CD with GitHub Actions for automated testing across branches, with code coverage integration, so that future changes wouldn't silently break safety-critical behaviors.
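The actual implementation is a ROS package with custom state messages and services, but the core idea, explicit states and an auditable transition table instead of nested conditionals, can be sketched in plain Python. The state names and allowed transitions below are illustrative, not Bearcover's actual set:

```python
from enum import Enum, auto

class RobotState(Enum):
    IDLE = auto()
    PATROL = auto()
    SCANNING = auto()
    DOCKING = auto()
    FAULT = auto()

# Explicit transition table: only these state changes are legal.
# An assessor can read the system's full behavior from this one structure.
ALLOWED = {
    RobotState.IDLE:     {RobotState.PATROL, RobotState.DOCKING},
    RobotState.PATROL:   {RobotState.SCANNING, RobotState.DOCKING, RobotState.FAULT},
    RobotState.SCANNING: {RobotState.PATROL, RobotState.FAULT},
    RobotState.DOCKING:  {RobotState.IDLE, RobotState.FAULT},
    RobotState.FAULT:    {RobotState.IDLE},
}

class StateMachine:
    def __init__(self):
        self.state = RobotState.IDLE

    def request(self, target: RobotState) -> bool:
        """Attempt a transition; reject anything not in the table."""
        if target in ALLOWED[self.state]:
            self.state = target
            return True
        return False
```

The point of the table is that illegal transitions are rejected structurally rather than discovered in the field, and each controller (patrol, scanning, docking) can be tested against it in isolation.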
The core sensing capability of Bearcover's robot is detecting human movement through closed doors using X4M06/SLMX4 UWB radar sensors. There's no off-the-shelf solution for this. I built the perception pipeline from the ground up: DSP-based signal processing to extract meaningful features from the radar returns, a movement classifier to distinguish human presence from noise, and a publishing system that integrated the radar streams into the ROS ecosystem.
This is the kind of work that doesn't have a tutorial. You're reading datasheets, writing signal processing code, and iterating on classification thresholds while testing against real scenarios in actual care facilities.
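As a rough illustration of the approach, not the production pipeline: one common way to separate human movement from static clutter in UWB radar returns is frame-to-frame differencing along the slow-time axis, then thresholding per-range-bin motion energy. The array shapes and threshold below are made up for the sketch:

```python
import numpy as np

def movement_energy(frames: np.ndarray) -> np.ndarray:
    """Per-range-bin motion energy from consecutive frame differences.
    frames: (n_frames, n_bins) array of radar magnitude returns."""
    diffs = np.diff(frames, axis=0)      # suppresses static clutter (walls, furniture)
    return np.mean(diffs ** 2, axis=0)   # average power of change per range bin

def detect_presence(frames: np.ndarray, threshold: float) -> bool:
    """Flag movement when any range bin's motion energy exceeds the threshold."""
    return bool(np.max(movement_energy(frames)) > threshold)
```

The real classifier has to reject non-human motion sources and work through door materials, which is where the iteration against real care-facility scenarios comes in.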
Autonomous docking is one of those things that sounds simple until you try it. The robot needs to return to its charging station reliably, align precisely, and connect. I built a docking system that fuses an RGB camera with LiDAR: the camera detects visual markers on the charging station, the LiDAR provides precise distance and pose estimation, and the fusion of the two gives the robot reliable alignment even in varying lighting conditions.
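A heavily simplified sketch of the fusion idea (real marker detection and pose estimation are considerably more involved): the camera yields a bearing to the dock marker, and the LiDAR scan supplies the range at that bearing, giving a dock position in the robot frame:

```python
import math

def dock_position(bearing_rad: float, lidar_ranges: list,
                  angle_min: float, angle_step: float):
    """Estimate the dock (x, y) in the robot frame.
    bearing_rad: marker bearing from the camera detection.
    lidar_ranges: one scan, ordered from angle_min in angle_step increments."""
    idx = round((bearing_rad - angle_min) / angle_step)   # scan beam at the camera bearing
    idx = max(0, min(idx, len(lidar_ranges) - 1))
    r = lidar_ranges[idx]
    return r * math.cos(bearing_rad), r * math.sin(bearing_rad)
```

The value of the fusion is robustness: the camera alone gives a direction but a poor range, and the LiDAR alone can't tell the dock from other flat surfaces.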
The wheel slip problem couldn't be solved mechanically. Instead, I integrated an IMU with the wheel odometry from the Ubiquity Motors/MCB base. When the wheels slip or spin, the IMU provides an independent rotation and acceleration signal that can detect and compensate for the corrupted wheel data. The fused odometry was significantly more reliable on the smooth floors that had been causing persistent problems.
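The gist of slip handling can be shown with a toy yaw-rate fuser: when wheel odometry and the gyro disagree beyond a plausibility threshold, assume slip and fall back to the IMU. The threshold and blend weights here are placeholders, not tuned values from the robot:

```python
def fused_yaw_rate(wheel_yaw_rate: float, gyro_yaw_rate: float,
                   slip_threshold: float = 0.2) -> float:
    """Fuse wheel-odometry and gyro yaw rates (rad/s).
    Large disagreement indicates wheel slip, so trust the gyro;
    otherwise blend both to reduce sensor noise."""
    if abs(wheel_yaw_rate - gyro_yaw_rate) > slip_threshold:
        return gyro_yaw_rate          # wheels slipping: IMU is the independent witness
    return 0.5 * wheel_yaw_rate + 0.5 * gyro_yaw_rate
```

Production fusion would typically run an EKF over the full pose, but the principle is the same: the IMU provides a signal the slipping wheels can't corrupt.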
Rather than just improving localization, I built a monitoring layer on top of it. Path deviation checks compare the robot's believed trajectory against what's physically plausible. Localization quality metrics track how well the robot's sensor readings match its map. A robot-lost detection system triggers automatic recovery behaviors before the robot ends up in a dangerous situation.
This monitoring infrastructure also feeds into a real-time operational dashboard I built, giving operators visibility into every robot's localization health, patrol progress, and system status.
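A minimal version of the path deviation check might look like this. The real monitor works on trajectory segments and localization quality scores; this sketch uses nearest-waypoint distance and an arbitrary threshold:

```python
import math

def min_path_deviation(pose, waypoints):
    """Distance from the robot's believed (x, y) pose to the nearest planned waypoint."""
    return min(math.dist(pose, w) for w in waypoints)

def is_lost(pose, waypoints, max_deviation: float = 1.0) -> bool:
    """Trigger recovery when the believed pose strays implausibly far from the path."""
    return min_path_deviation(pose, waypoints) > max_deviation
```

The key design choice is that this runs continuously and triggers recovery proactively, rather than waiting for a collision or a failed dock to reveal that the robot is lost.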
Care facilities have areas a robot should never enter: medication rooms, staff areas, certain patient rooms. I built a keepout zone system using polygon-based regions, paired with an interactive visual waypoint editor for creating and editing patrol routes. The editor includes rotation controls, docking pose setup, and remote editing so operators can adjust routes without being on-site. A "Where Am I" service provides real-time location awareness, and the keepout zones prevent the robot from entering restricted areas regardless of what the navigation planner suggests. This is also a CE requirement: you must demonstrate that the robot has spatial boundaries it cannot violate, and those boundaries must be configurable by operators without touching code.
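The enforcement primitive underneath any polygon-based keepout system is a point-in-polygon test. A standard ray-casting implementation, independent of any particular navigation stack:

```python
def in_keepout(point, polygon) -> bool:
    """Ray-casting point-in-polygon test: cast a horizontal ray from the
    point and count edge crossings; an odd count means the point is inside.
    polygon: list of (x, y) vertices in order."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                          # edge spans the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                               # crossing is to the right
                inside = not inside
    return inside
```

A check like this sits between the planner and the motor commands, so even a bad plan cannot route the robot through a restricted region.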
The full sensor suite includes a RealSense D435 depth camera, RPLiDAR A2M12, UWB radar sensors, an IMU, and the Ubiquity Motors/MCB base, all integrated through ROS. Getting these sensors to work together reliably, with proper calibration and time synchronization, is the kind of unglamorous engineering that makes everything else possible. It's also the kind of work that's easy to underestimate from the outside. Each sensor has its own coordinate frame, update rate, failure modes, and quirks. Integration means handling all of that gracefully, not just in the happy path, but when a sensor drops out at 3 AM during a patrol.
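One small but representative piece of that integration work is pairing readings across streams with different update rates. A sketch of nearest-timestamp matching with a skew limit (the 50 ms bound is an arbitrary example, not the robot's configured value):

```python
import bisect

def pair_nearest(stamps_a, stamps_b, max_skew: float = 0.05):
    """Pair each reading in stream A with the closest-in-time reading in
    stream B, dropping pairs whose skew exceeds max_skew seconds.
    Both timestamp lists must be sorted ascending. Returns (index_a, index_b) pairs."""
    pairs = []
    for i, t in enumerate(stamps_a):
        j = bisect.bisect_left(stamps_b, t)
        candidates = [k for k in (j - 1, j) if 0 <= k < len(stamps_b)]
        best = min(candidates, key=lambda k: abs(stamps_b[k] - t))
        if abs(stamps_b[best] - t) <= max_skew:
            pairs.append((i, best))
    return pairs
```

Dropping over-skewed pairs matters more than it looks: fusing a LiDAR scan with a depth frame from 200 ms earlier quietly produces poses that are wrong by however far the robot moved in between.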
The cumulative effect of these changes was significant. Collision events and localization loss events dropped substantially. I won't fabricate specific numbers, but the improvement was large enough that operators noticed it immediately.
More importantly for the business, the architecture was now CE-assessable. The state machine had explicit, documented behavior in every state. Safety-critical functions had monitoring and fallback behaviors. The keepout zone system provided a verifiable spatial safety layer. I produced the CE-focused technical documentation, risk analysis, and architectural evidence that the certification process requires.
The operational dashboard gave the team real-time visibility into deployed robots, turning what had been a "hope it works tonight" situation into a monitored, measurable operation.
What started as "we need a CE advisor" became "we have a CE-ready technical foundation." The robot went from a promising prototype to a system that could credibly be assessed for unsupervised deployment in clinical environments.
If there's one pattern I see repeatedly in robotics startups, it's this: founders assume the gap between prototype and production is mostly about documentation, certification paperwork, and maybe some testing. In reality, it's almost always an engineering gap. Your prototype works because a skilled engineer is watching it and intervening when things go wrong. Production means the robot has to handle every failure mode on its own, and your architecture needs to support that.
The other pattern: when resources are limited, your instinct is to add hardware to solve hardware problems. More sensors for better coverage, better wheels for less slip, additional computing for faster processing. Sometimes that's the right call. But often, the faster and cheaper path is solving hardware limitations in software. IMU fusion instead of better wheels. Localization monitoring instead of more LiDAR units. Keepout zones instead of additional bumper sensors.
This is the kind of work a fractional CTO does for robotics startups: not just advising from the outside, but identifying where the real engineering gaps are and closing them. If you're at the stage where your prototype works but production feels impossibly far away, a robotics feasibility study can map that gap concretely. And if you're preparing for investor conversations about your technical readiness, technical due diligence preparation ensures you're not caught off guard by the questions that matter.
The prototype-to-production gap is real. But it's not mysterious. It's engineering.