Fable Engineering #5 — Why Consumer Robotics Is a Product Design Problem
Picking the right problems—and the right body—to unlock home robots with expressive, non-anthropomorphic design.
by Fable Engineering
TL;DR
Consumer robotics won’t break through on more torque or more FLOPS. The unlock is product design: choosing the right problems and pairing them with the right form factor so people actually want these things living with them. Fresh HRI research (including the ELEGNT study) shows that when robots move with expressive intent (not just functional efficiency), people engage more and perceive them as warmer and more capable. That’s our north star. (arXiv)
Gave a lecture at Berkeley this week about the future of embodied AI and how we plan to solve the challenges ahead.
1) A quick scene from our bench
Same brain, same sensors, same basic compute. We put that stack into two bodies:
a perched, lamp-like object that can nod, lean, and “look” toward you;
a low rolling puck that glides to where it’s needed.
Both complete the task. But only one feels invited into your day. One gets spontaneous waves from kids, draws glances in quiet moments, and communicates “I’m attending” without a word. The other… just works.
That gap isn’t engineering. It’s imagination—the craft of making an object feel present, legible, and worth keeping on the table.
Illustration of the design space for expressive robot movements, including kinesics and proxemics movement primitives. SOURCE: (arXiv)
2) It’s not (just) an engineering problem. It’s an imagination problem.
We already have motors, sensors, and onboard AI that are “good enough” for meaningful value at home. The bottleneck is deciding what to build and how it should live with people. The next leap won’t be a spec bump; it’ll be a design leap: proportion, presence, pacing, and the quiet ways a robot signals intention.
Call it the moat of desire. Not “can it do the task?” but “do I want it here?” That’s product design.
Visual research I’m doing every day to crack the embodiment challenge.
3) Our dual search: problems × embodiment
We’re running two searches in parallel:
A. Jobs-to-Be-Done (JTBD)
Which daily moments are both valuable and felt? Think: co-play and creativity, gentle coaching and focus, bedtime wind-down, ambient guidance around the house.
B. Embodiment (form factor)
What shape makes that job feel natural? Perched desktop companion, shelf-mounted “periscope,” small wheeled base, wall-docked sentinel, or a portable totem you can place where attention is needed.
We map these in a JTBD × Embodiment matrix and mark each cell as fit / stretch / mismatch. The sweet spot is the intersection of capability × context × felt appropriateness. If the shape argues with the job, the product will forever feel off—even if it “works.”
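To make the matrix concrete, here’s a minimal sketch in Python of how a cell-by-cell fit map can be kept and queried. The jobs, bodies, and ratings below are illustrative placeholders, not our actual backlog.

```python
# A minimal sketch of the JTBD × Embodiment matrix. All jobs, bodies,
# and ratings below are illustrative placeholders, not our real backlog.
from enum import Enum


class Fit(Enum):
    FIT = "fit"            # shape and job reinforce each other
    STRETCH = "stretch"    # plausible, but the body has to work hard
    MISMATCH = "mismatch"  # the shape argues with the job


JOBS = ["co-play", "gentle coaching", "bedtime wind-down", "ambient guidance"]
BODIES = ["perched companion", "shelf periscope", "wheeled base", "wall dock", "portable totem"]

# matrix[(job, body)] -> Fit; cells we haven't assessed simply stay absent
matrix: dict[tuple[str, str], Fit] = {
    ("bedtime wind-down", "shelf periscope"): Fit.FIT,
    ("ambient guidance", "wheeled base"): Fit.STRETCH,
    ("co-play", "wall dock"): Fit.MISMATCH,
}


def sweet_spots(m: dict[tuple[str, str], Fit]) -> list[tuple[str, str]]:
    """Cells where capability, context, and felt appropriateness align."""
    return [cell for cell, rating in m.items() if rating is Fit.FIT]


print(sweet_spots(matrix))  # -> [('bedtime wind-down', 'shelf periscope')]
```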
4) HRI is a design material, not an afterthought
Humans constantly broadcast and read internal states through posture, gesture, and gaze. We do it consciously and—more often—automatically. Robots don’t get a pass here. If a robot moves without readable intent, we either ignore it or mistrust it. If it moves with clear intention and attention, we lean in. (Social Robotics Lab)
That’s why studies like ELEGNT resonate with us. The researchers built a lamp-like, non-anthropomorphic robot and compared expression-driven movements (designed to communicate attention, intention, and emotion) to function-driven ones (optimized for task/time). Across six scenarios, expression-driven movement significantly increased user engagement and perceived robot qualities, especially in social tasks. In other words, people don’t only judge what the robot did; they care how it moved. (arXiv)
A complementary lens from recent work: two behavioral axes, arousal (movement energy) and attention (how selectively the robot orients to the user), shape how people perceive non-humanoid robots. High attention with moderate arousal tends to read as competent, warm, and trustworthy; high arousal alone can feel unsettling. This matches what we see in the lab: small “I’m with you” movements beat big, busy ones. (arXiv)
Robots should not only move to fulfill functional goals and constraints, i.e., moving from an initial state to a goal state along the shortest feasible trajectory (a function-driven trajectory), but also use movement to express their internal states to human counterparts during the interaction, i.e., an expression-driven trajectory conveying the robot’s intention, attention, attitude, and emotions. SOURCE: (arXiv)
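For intuition, here’s a toy sketch of that distinction in code: a function-driven path is the straight shot from start to goal, and an expressive layer bends it toward a point of attention with an energy (arousal) knob. This is our shorthand for the idea, not ELEGNT’s actual formulation; all names and numbers are illustrative.

```python
# Toy illustration: layer an expressive bend on top of a function-driven
# trajectory. A shorthand for the idea, not ELEGNT's actual formulation.
import numpy as np


def function_driven(start: np.ndarray, goal: np.ndarray, steps: int) -> np.ndarray:
    """Shortest feasible path: straight-line interpolation from start to goal."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1 - t) * start + t * goal


def expressive(traj: np.ndarray, arousal: float, attention: np.ndarray) -> np.ndarray:
    """Bend the path toward a point of attention, scaled by arousal.

    arousal   -- movement energy in [0, 1]; 0 recovers the functional path
    attention -- a point the robot visibly orients toward mid-motion
    """
    # Envelope peaks mid-trajectory so the start and goal stay untouched.
    envelope = np.sin(np.linspace(0.0, np.pi, len(traj)))[:, None]
    return traj + arousal * envelope * (attention - traj)


start, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])
user = np.array([0.5, 0.8])  # where the person is sitting

path = expressive(function_driven(start, goal, steps=50), arousal=0.3, attention=user)
```

Set arousal to zero and you recover the purely functional path; the two axes from the work above become literal dials.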
5) Form-factor heuristics we’re stress-testing
These are the questions we keep next to the CAD window and on the bench:
Room presence: Can it disappear when idle and be felt when needed?
Expressive surface area: What parts (head, neck, ring light, gimbal, chassis tilt) communicate attention without speech?
Placement rituals: Where does it live (desk, shelf, wall, dock)? What’s its natural “home pose”?
Trust signals: Smooth approach, respectful stopping distance, crisp return-to-idle, gentle gaze shifts, and materials that feel calm rather than gadgety.
Care/repair: Replaceable wear parts, sane cabling, and covers you can open without fear.
We’re not searching for “the” robot form. We’re searching for a few honest bodies that make certain jobs feel obvious instead of forced.
Sketching ideas of non-anthropomorphic robots with different form factors, sizes, and placements. SOURCE: (arXiv)
6) Choosing problems (and saying no)
Our criteria are simple and ruthless:
Frequency: How often does this moment happen at home?
Pain-relief + delight: Does it save time, reduce friction, or make a moment meaningfully better?
Expressive advantage: Will expressive motion materially improve the interaction vs. a phone/speaker app?
90-day shippability: Could we prototype something robust enough to test with real families in under a quarter?
What we’re not chasing: humanoid-grade chores or tasks that phones and smart speakers already nail. What we are leaning into: co-presence moments, guidance, gentle nudges, shared play where how a robot moves matters as much as what it does. That’s exactly where ELEGNT’s data shows expression moves the needle. (arXiv)
Non-exhaustive list of problems we have been exploring lately.
7) This month in the lab: giving life to daily objects
Our experiments right now focus on non-anthropomorphic embodiments: objects that don’t look like people but still read as attentive and considerate.
A few directions we’re exploring:
Perched “guide”: a desk-side object that orients to your task, offers glanceable prompts, and quietly “settles” when you’re in flow.
Docked “evening buddy”: a shelf-mounted companion that leans, nods, and signals turn-taking for family routines (reading, tidying, wind-down).
Rolling “pointer”: a small base that doesn’t approach you so much as orient you—glancing between you and the object of interest, then yielding.
In each case, we’re studying how tweaks to attention (where it “looks”) and arousal (how energetically it moves) change perception: warmth, competence, comfort, and “would you miss it if we took it away?” We’re aligning our measures to the HRI literature so we can learn apples-to-apples with prior work. (arXiv)
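As a rough sense of what those bench studies look like structurally, here’s a sketch of the condition grid; the levels, measures, and scale are placeholders, not our actual protocol.

```python
# Sketch of a 2 × 3 condition grid for bench studies. Levels, measures,
# and scale are placeholders, not our actual protocol.
from itertools import product

ATTENTION = ["low", "high"]            # how selectively it orients to the user
AROUSAL = ["low", "moderate", "high"]  # how energetically it moves
MEASURES = ["warmth", "competence", "comfort", "would you miss it?"]

# Full factorial: 2 × 3 = 6 motion conditions per embodiment.
for attention, arousal in product(ATTENTION, AROUSAL):
    print(f"attention={attention}, arousal={arousal} -> rate: {', '.join(MEASURES)}")
```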
8) The bigger picture
We believe the first beloved home robots will feel legible, considerate, and calm. They’ll do a few jobs flawlessly and the rest of the win will come from how they inhabit your space: the pace of their motions, the way they acknowledge you, the grace with which they yield.
That’s why our work straddles two hard things—what to solve and what shape solves it—and why we treat HRI as core product design. When motion carries intention, people ascribe meaning. When form matches job, people say “this belongs here.” And when both click, a robot stops being a gadget and starts being a presence. (arXiv)
9) Want to help?
If you’re an HRI researcher, animation engineer, motion designer—or a Bay Area family curious to pilot—reach out. Tell us the daily moment you most want help with… and the shape you’d accept living with.
Thanks for reading
If this resonated, tap the ❤️ on Substack and share it with someone who cares about the future of home robots. Your feedback genuinely shapes what we build next.
Want to get involved?
Bay Area family interested in piloting? Reply to this email with “Pilot” in the subject.
HRI researchers, animation/interaction designers, roboticists—let’s jam. Reply with “Collaborate”.
Contact
Pierre-Louis Soulié — pierrelouis@fable.engineering
Or just reply directly to this newsletter—every note lands in our inbox.
See you in the next Field Note. Onward. 🚀
Notes & Sources
ELEGNT: Expressive and Functional Movement Design for Non-anthropomorphic Robot — lamp-like prototype; user study shows expression-driven movement increases engagement and perceived qualities, especially in social tasks. (arXiv)
Social Eye Gaze in Human-Robot Interaction (Review) — why gaze and nonverbal cues matter for legibility and social perception. (Social Robotics Lab)
How Arousal and Attention Shape Human-Robot Interaction — proposes attention × arousal as key axes for designing behavior in non-humanoid robots; high attention + moderate arousal supports trust/comfort. (arXiv)