How Atmos Widgets Stay Clickable Without Blocking the Desktop
Learn how Atmos keeps widgets visually present on your desktop without turning them into a giant click-blocking overlay, and how interaction still works in edit mode.
Atmos widgets feel unusual in a good way.
They sit on the desktop, they look integrated into the environment, and yet they do not behave like a giant full-screen app window blocking everything underneath. At the same time, they can still be edited, moved, resized, and selected when Atmos wants them to be.
That combination is not accidental. It comes from a specific design choice in the widget engine.
This guide explains how Atmos makes that work.
The widget overlay is visual first
One of the most important hidden details in the whole widget system is that the desktop widget overlay is meant to be visual, not click-capturing.
Atmos builds a borderless overlay window to render widgets above the desktop layer, but that window is configured to ignore mouse events.
That means the overlay can stay visible without acting like a transparent shield that steals every click from the desktop.
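In AppKit terms, a click-through overlay like this comes down to a handful of window properties. Here is a minimal sketch of how such a window might be configured; `WidgetOverlayWindow` is an illustrative name, not Atmos's actual class, and the exact window level is an assumption.

```swift
import AppKit

// Sketch of a borderless, click-through overlay window.
// `WidgetOverlayWindow` is an illustrative name, not Atmos's real type.
final class WidgetOverlayWindow: NSWindow {
    init(screen: NSScreen) {
        super.init(
            contentRect: screen.frame,
            styleMask: .borderless,      // no title bar or chrome
            backing: .buffered,
            defer: false
        )
        isOpaque = false
        backgroundColor = .clear         // transparent background
        hasShadow = false
        ignoresMouseEvents = true        // the key line: clicks pass straight through
        collectionBehavior = [.canJoinAllSpaces, .stationary]
        // Sit just above the desktop icons rather than over normal app windows.
        level = NSWindow.Level(rawValue: Int(CGWindowLevelForKey(.desktopIconWindow)) + 1)
    }
}
```

The `ignoresMouseEvents = true` line is the whole trick: the window stays fully visible, but the system never routes mouse events to it.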
Why this matters so much
If Atmos made the entire widget overlay behave like a normal interactive full-screen window, it would feel terrible.
The user would constantly be fighting a surface that blocked Finder, the desktop, and other interaction targets.
By keeping the overlay itself non-intercepting, Atmos lets the desktop remain usable while still showing the widgets visually.
This is one of the biggest reasons the feature feels native instead of annoying.
Hidden behavior: the widget views themselves also opt out of hit-testing
The non-blocking design is not only about the overlay window.
The widget rendering layer itself also avoids normal SwiftUI hit-testing for the desktop surface.
That means Atmos is very deliberately saying:
- show the widgets
- but do not let the rendering layer become the interaction layer
This separation is one of the most important architectural ideas in the widget system.
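In SwiftUI, opting a rendering layer out of hit-testing is a single modifier. The sketch below shows the shape of the idea; `WidgetCanvas`, `WidgetModel`, and `WidgetView` are illustrative stand-ins, not Atmos's real types.

```swift
import SwiftUI

// Illustrative model/view stubs so the sketch is self-contained;
// Atmos's real types will differ.
struct WidgetModel: Identifiable {
    let id = UUID()
    var position: CGPoint
}

struct WidgetView: View {
    let widget: WidgetModel
    var body: some View {
        RoundedRectangle(cornerRadius: 12).frame(width: 160, height: 160)
    }
}

// The rendering layer draws everything, then opts out of hit-testing
// entirely so it never becomes the interaction layer.
struct WidgetCanvas: View {
    let widgets: [WidgetModel]
    var body: some View {
        ZStack {
            ForEach(widgets) { widget in
                WidgetView(widget: widget).position(widget.position)
            }
        }
        .allowsHitTesting(false) // render only; clicks never land here
    }
}
```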
Atmos handles interaction separately through event monitors
If the overlay is not handling clicks directly, the obvious question is: how do widgets still work at all?
The answer is that Atmos listens for mouse events separately and interprets them against widget positions.
The engine installs:
- a global mouse monitor
- a local mouse monitor
Those monitors let Atmos watch for mouse down, drag, and mouse up activity and then decide whether the event corresponds to a widget interaction.
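Both monitors are standard `NSEvent` APIs. A minimal sketch of installing them might look like this; the callback wiring is an assumption for illustration.

```swift
import AppKit

// Sketch of installing both monitors; the handler callback is illustrative.
func installMouseMonitors(handler: @escaping (NSEvent) -> Void) -> [Any] {
    let mask: NSEvent.EventTypeMask = [.leftMouseDown, .leftMouseDragged, .leftMouseUp]

    // Global monitor: sees events while another app (Finder included) is
    // active. Observe-only; it cannot consume or modify events.
    let global = NSEvent.addGlobalMonitorForEvents(matching: mask) { event in
        handler(event)
    }

    // Local monitor: sees events while Atmos itself is active. Returning
    // the event lets it continue through normal dispatch.
    let local = NSEvent.addLocalMonitorForEvents(matching: mask) { event in
        handler(event)
        return event
    }

    return [global, local].compactMap { $0 }
}
```

Note the asymmetry: a global monitor can only observe, while a local monitor can also swallow or modify an event before the app sees it.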
Global and local monitoring serve different situations
This is another hidden detail that matters.
The global monitor helps Atmos detect widget-related mouse activity even when another app or the Finder is active.
The local monitor helps when Atmos itself is the active app.
Using both lets the widget system feel more desktop-native instead of working only in one narrow interaction context.
Hidden behavior: Atmos avoids double-triggering when clicking inside its own windows
The local monitor has an important safeguard.
If the event belongs to an Atmos window, the engine does not also interpret that same click as a widget interaction underneath.
That prevents situations where clicking an Atmos control would accidentally trigger a widget sitting behind it on the desktop.
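The safeguard can be as simple as checking which window an event belongs to before treating it as a desktop click. A sketch, with `overlayWindow` and `handleDesktopClick` as illustrative names:

```swift
import AppKit

// Sketch of the double-trigger safeguard. `overlayWindow` and
// `handleDesktopClick` are illustrative names, not Atmos's real API.
func installLocalClickMonitor(
    overlayWindow: NSWindow,
    handleDesktopClick: @escaping (NSPoint) -> Void
) -> Any? {
    NSEvent.addLocalMonitorForEvents(matching: .leftMouseDown) { event in
        // If the click belongs to a real Atmos window (not the click-through
        // overlay), pass it along and do NOT also treat it as a widget hit.
        if let window = event.window, window != overlayWindow {
            return event
        }
        handleDesktopClick(NSEvent.mouseLocation)
        return event
    }
}
```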
This is a great example of the system being carefully layered rather than hacked together.
Widget interaction only happens in edit mode
Even with all that event monitoring, Atmos does not treat widgets as always-editable.
The engine only processes widget manipulation when edit mode is on.
That means normal desktop use stays quiet and non-invasive, while edit mode temporarily enables the more active behavior:
- selecting widgets
- dragging widgets
- resizing widgets
- deleting widgets
This is a very important part of making the non-blocking model practical.
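The gate itself can be a single guard at the top of the event path. The sketch below uses a hypothetical `WidgetEngine` with a counter just to make the gating visible; the real engine obviously does more than count clicks.

```swift
import Foundation

// Sketch: every widget-manipulation path is gated on edit mode.
// `WidgetEngine` here is an illustrative stand-in, not Atmos's real type.
final class WidgetEngine {
    var isEditMode = false
    private(set) var handledClicks = 0

    func handleMouseDown(at point: CGPoint) {
        guard isEditMode else { return }   // normal desktop use stays quiet
        handledClicks += 1                 // ...select / drag / resize from here
    }
}
```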
Hidden behavior: the engine translates screen coordinates into overlay coordinates
When the widget engine receives mouse activity, it does not work with the raw screen location directly.
It converts the real mouse location into overlay-relative coordinates so it can compare the pointer against the visual frames of widgets on screen.
This conversion step is part of what lets the system keep a click-through overlay while still knowing exactly which widget the user intended to touch.
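In AppKit this is what `NSWindow.convertPoint(fromScreen:)` does for you; written out as plain math, the step is just an origin shift. `overlayFrame` below is assumed to be the overlay window's frame in screen coordinates.

```swift
import Foundation

// Sketch of the conversion step: a global screen point becomes a point in
// the overlay's own coordinate space. Equivalent in spirit to AppKit's
// NSWindow.convertPoint(fromScreen:), shown as plain math.
func overlayPoint(fromScreen screenPoint: CGPoint, overlayFrame: CGRect) -> CGPoint {
    CGPoint(x: screenPoint.x - overlayFrame.origin.x,
            y: screenPoint.y - overlayFrame.origin.y)
}
```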
Widget hit-testing is calculated against visual frames
Atmos tracks a visual frame for each widget based on:
- the widget’s stored position
- its natural size
- its current scale
- any live drag offset
That means widget interaction is based on the widget’s real displayed footprint, not on some simplified placeholder box detached from the actual render.
This makes edit mode feel much more precise.
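Combining those four inputs into a frame is straightforward. This sketch assumes a top-left anchor and names everything for illustration; the real model presumably differs.

```swift
import Foundation

// Sketch of deriving a visual frame from stored widget state.
// All names and the anchor convention are assumptions for illustration.
struct WidgetState {
    var position: CGPoint   // stored anchor position
    var naturalSize: CGSize // unscaled size of the widget's content
    var scale: CGFloat      // current user-chosen scale
    var dragOffset: CGSize  // live offset while a drag is in progress
}

func visualFrame(of widget: WidgetState) -> CGRect {
    let size = CGSize(width: widget.naturalSize.width * widget.scale,
                      height: widget.naturalSize.height * widget.scale)
    return CGRect(origin: CGPoint(x: widget.position.x + widget.dragOffset.width,
                                  y: widget.position.y + widget.dragOffset.height),
                  size: size)
}
```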
Hidden behavior: interaction uses zones, not just whole-widget clicks
The widget engine does not treat every click inside a widget frame as the same action.
It also checks where inside the widget frame the click happened so it can distinguish between:
- delete zone
- resize zone
- widget body
This is how Atmos can preserve a non-blocking overlay while still giving users small, meaningful controls during edit mode.
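Zone hit-testing of this kind is a cascade of rectangle checks inside the widget's visual frame. The corner placement and the 24-point control size below are assumptions, not Atmos's actual layout.

```swift
import Foundation

// Sketch of zone hit-testing inside a widget's visual frame. Corner
// placement and control size are assumptions for illustration.
enum WidgetZone { case delete, resize, body }

func zone(for point: CGPoint, in frame: CGRect, controlSize: CGFloat = 24) -> WidgetZone? {
    guard frame.contains(point) else { return nil }
    // Assume a delete control in one corner...
    let deleteRect = CGRect(x: frame.minX, y: frame.minY,
                            width: controlSize, height: controlSize)
    // ...and a resize handle in the opposite corner.
    let resizeRect = CGRect(x: frame.maxX - controlSize, y: frame.maxY - controlSize,
                            width: controlSize, height: controlSize)
    if deleteRect.contains(point) { return .delete }
    if resizeRect.contains(point) { return .resize }
    return .body
}
```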
Tap, drag, and resize are all inferred from pointer behavior
Because the overlay is not using normal button controls for all of this, Atmos infers interaction type from event timing and movement.
The engine decides whether the user intended a:
- tap
- drag
- resize gesture
based on where the pointer started and how far it moved.
This is one of the hidden reasons widget editing feels more fluid than you might expect from a click-through system.
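The classification itself can be a small pure function over the pointer's start point and total movement. The 4-point drag threshold here is an assumption for illustration, not Atmos's actual value.

```swift
import Foundation

// Sketch: classifying a gesture from where the pointer went down and how
// far it moved. The 4-point threshold is an assumed value.
enum Gesture { case tap, drag, resize }

func classifyGesture(start: CGPoint, end: CGPoint,
                     startedInResizeZone: Bool,
                     dragThreshold: CGFloat = 4) -> Gesture {
    let dx = end.x - start.x, dy = end.y - start.y
    let distance = (dx * dx + dy * dy).squareRoot()
    if distance < dragThreshold { return .tap }   // barely moved: a tap
    return startedInResizeZone ? .resize : .drag  // moved: drag or resize
}
```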
Why the widgets still feel “clickable”
The key idea is that Atmos is not making the widget overlay interactive in the normal way.
Instead, it is making the widget system responsive to desktop mouse activity while keeping the visual layer passive.
That creates the feeling that the widgets are clickable without forcing the desktop through a traditional event-capturing overlay.
It is a clever compromise between integration and control.
Hidden behavior: widget selection can also bring the Atmos window forward
When a widget body is selected in edit mode, Atmos can activate the app and route the user back into the main window so the widget options panel becomes visible.
That means the desktop interaction and the app UI stay connected even though they are not the same surface.
This is another example of the system working as a coordinated pair rather than a single monolithic widget canvas.
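Handing control back to the app UI is a standard AppKit activation pattern. A sketch, with `mainWindow` and the options-panel routing as illustrative names:

```swift
import AppKit

// Sketch of routing a widget selection back into the app UI.
// `mainWindow` and the options-panel call are illustrative names.
func widgetBodySelected(_ widgetID: UUID, mainWindow: NSWindow) {
    NSApp.activate(ignoringOtherApps: true)  // bring Atmos frontmost
    mainWindow.makeKeyAndOrderFront(nil)     // surface the main window
    // ...then route to the widget options panel, e.g.:
    // optionsPanelController.showOptions(for: widgetID)
}
```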
The overlay sits just above the desktop layer
Atmos also places the widget overlay at a desktop-adjacent window level rather than treating it like a floating normal app window over everything.
That helps preserve the feeling that widgets belong to the desktop environment, not to a separate temporary palette.
Combined with ignored mouse events, this is a big part of the illusion.
Why this architecture is better than a naive widget window
A naive implementation might have made the overlay directly interactive and then spent the rest of its life fighting problems like:
- blocked clicks
- bad focus behavior
- Finder interference
- unexpected interaction conflicts
Atmos avoids a lot of that by splitting:
- visual presentation
- from interaction detection
That is the real trick.
What users should take away
The practical rules are:
- Atmos widgets are drawn in a desktop-style overlay
- that overlay ignores mouse events instead of blocking the desktop
- interaction still works because Atmos separately listens for mouse activity
- widget editing only becomes active in edit mode
- the engine computes which widget was targeted by comparing real mouse position to widget frames
Once you understand that model, the widget system makes a lot more sense.
Atmos widgets do not stay usable by accident. They stay usable because the app very intentionally separates what is visible from what is allowed to intercept input.