Nest × Yale Lock
On early builds of Nest’s first connected lock, a quick tap in the app would throw the bolt. It felt fast in the lab, but in trial homes a stray hand could lock the door by accident.
I was the product designer embedded with the data-integration engineering team responsible for the interaction model of that lock, built with Yale. My scope ran from the on-device control to the state and error model, the one-line history, Privacy Mode, and the instrumentation we needed to verify results rather than assume them.
From the outset, the lock had to treat access as a deliberate act. We framed it as a decision system: ask for intent when risk is high, act directly when it’s safe, and always tell people what happened and what to do next. Familiarity mattered; no one wants to learn a new ritual at the door, and we knew most people wouldn’t read release notes.
The first decision was to slow the primary control down just enough to make intent clear. We replaced the quick tap with a press-and-hold and made progress visible. A circular ring, echoing the Nest thermostat dial, fills as the motor prepares to move, so the motion feels native without instruction. I prototyped timing, easing, and haptics until the ring and the motor aligned, then wrote the first pass of the microcopy, which our copywriter later tightened.
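As a rough sketch of that hold-to-confirm behavior (the duration, the callback names, and the easing-free fill below are illustrative, not the shipped implementation), the idea reduces to a timer that fills the ring and only fires the command when the hold completes:

```typescript
// Hold-to-confirm sketch. HOLD_MS is an illustrative value, and
// `renderRing` / `sendLockCommand` are hypothetical callbacks.
const HOLD_MS = 900;

type RingRenderer = (progress: number) => void; // 0..1 fill of the ring
type LockCommand = () => void;                  // tells the motor to move

function createHoldToLock(renderRing: RingRenderer, sendLockCommand: LockCommand) {
  let start: number | null = null;
  let frame = 0;

  function tick(now: number) {
    if (start === null) return;
    const progress = Math.min((now - start) / HOLD_MS, 1);
    renderRing(progress);              // ring fills as intent accumulates
    if (progress >= 1) {
      start = null;
      sendLockCommand();               // only a completed hold throws the bolt
    } else {
      frame = requestAnimationFrame(tick);
    }
  }

  return {
    onPressStart() {
      start = performance.now();
      frame = requestAnimationFrame(tick);
    },
    onPressEnd() {
      // Releasing early cancels: a stray touch never moves the motor.
      if (start !== null) {
        start = null;
        cancelAnimationFrame(frame);
        renderRing(0);
      }
    },
  };
}
```

The cancel-on-release path is the whole point: the cost of a deliberate lock is under a second, while an accidental brush against the control does nothing.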


With the control settled, language had to carry the same discipline across surfaces. Success is a single line you can catch at a glance. Failure states tell the truth and point to recovery. Offline is stated plainly, not euphemized. The device and the app use the same phrasing, so moving between them does not feel like switching systems:
“Locked.” “Can’t lock. Door is obstructed. Try again, or check the bolt.” “Very low battery. What to do.”


Underneath, I standardized states to three levels—OK, Warning, Critical—and tied each to a plain label and next step, mapped directly to firmware codes and support content.
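A sketch of what that mapping might look like as data, with placeholder firmware codes and a placeholder support link standing in for the real tables:

```typescript
// Three severity levels, each tied to a plain label and a next step.
// The firmware codes, strings, and URL below are placeholders, not shipped values.
type Severity = "ok" | "warning" | "critical";

interface LockState {
  severity: Severity;
  label: string;      // what the person reads, on the device and in the app
  nextStep?: string;  // what to do about it, if anything
  supportUrl?: string;
}

const STATES: Record<string, LockState> = {
  BOLT_EXTENDED: { severity: "ok", label: "Locked." },
  BOLT_OBSTRUCTED: {
    severity: "warning",
    label: "Can’t lock. Door is obstructed.",
    nextStep: "Try again, or check the bolt.",
  },
  BATTERY_CRITICAL: {
    severity: "critical",
    label: "Very low battery.",
    nextStep: "What to do.",
    supportUrl: "https://example.com/lock-battery", // placeholder
  },
};
```

Keeping the copy keyed to the firmware code is what lets the device, the app, and support content say the same thing for the same condition.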
History needed to serve two different readers: the people living with the lock every day and the people supporting it when something goes wrong. We separated what people read from what the system records. The interface presents a one-line entry for ordinary use, such as “Locked by Lori at 9:28 AM,” while the event log preserves full detail for audit and support: device identifiers, keypad-code metadata, and firmware status at the moment of action. The split keeps review quick in a shared home and leaves a complete trail when depth is required.
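One way to picture the split, assuming a hypothetical event shape; only the derived line is what a household member reads:

```typescript
// Full record kept for audit and support; the one-line entry is derived from it.
interface LockEvent {
  deviceId: string;
  action: "lock" | "unlock";
  actor: { name: string; keypadCodeSlot?: number }; // keypad-code metadata
  firmwareStatus: string;                           // status at the moment of action
  timestamp: Date;
}

// What the interface shows: "Locked by Lori at 9:28 AM"
function historyLine(event: LockEvent): string {
  const verb = event.action === "lock" ? "Locked" : "Unlocked";
  const time = event.timestamp.toLocaleTimeString([], { hour: "numeric", minute: "2-digit" });
  return `${verb} by ${event.actor.name} at ${time}`;
}
```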






Privacy Mode needed clarity before and after activation. When a home turns it on, the keypad is disabled until someone turns the mode off. The activation flow explains the effect in one short screen, the active state is visible on the device and in the app, and history records each change. You can see not just that the mode exists, but when it was enabled and by whom.
We liked to say we designed for each other, but evidence, not rhetoric, guided the build. With our data scientists I defined events and signals we could trust: first-attempt success for lock and unlock; a quick toggle-back proxy to detect accidental triggers in early trials; funnels from error to help to resolved, with time-to-healthy. We added coverage for named events and guest-code use. As guardrails, we tracked support contacts per thousand devices, false tamper alerts, and any latency or battery cost introduced by the ring or confirmations.
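A sketch of how two of those signals might be computed from an event stream; the event shape and the ten-second window here are illustrative, not the definitions we shipped:

```typescript
// Toggle-back proxy: an unlock that reverses a lock within a short window
// suggests the lock was triggered by accident. Window length is illustrative.
const TOGGLE_BACK_WINDOW_MS = 10_000;

interface ActionEvent {
  action: "lock" | "unlock";
  firstAttemptSuccess: boolean;
  timestamp: number; // epoch ms
}

function countAccidentalTriggers(events: ActionEvent[]): number {
  let count = 0;
  for (let i = 1; i < events.length; i++) {
    const prev = events[i - 1];
    const curr = events[i];
    if (
      prev.action === "lock" &&
      curr.action === "unlock" &&
      curr.timestamp - prev.timestamp <= TOGGLE_BACK_WINDOW_MS
    ) {
      count++;
    }
  }
  return count;
}

// First-attempt success rate, one of the headline signals.
function firstAttemptRate(events: ActionEvent[]): number {
  if (events.length === 0) return 0;
  return events.filter(e => e.firstAttemptSuccess).length / events.length;
}
```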
Over time, the system proved durable and held its shape. The control asks for intent and shows progress. Language stays consistent wherever you see it. The interface offers a clear, human history while the system keeps a complete log. Privacy Mode is explicit and hard to mistake. Together, the pieces reduce uncertainty at the door and keep recovery close at hand when something goes wrong. ♦

