Add distributed lock functionality, and locked cache updates.

Review Request #14628 — Created Oct. 8, 2025

Information

Djblets
release-5.x

This introduces djblets.protect, a new module for service protection
capabilities, and specifically djblets.protect.locks.CacheLock, which
is a simple distributed lock utilizing the cache backend. This can help
avoid cache stampede issues, and overall reduce the work required by a
service.

It's important to note that these locks should only be used in cases
where the loss of a lock will not cause corruption or other bad
behavior. As cache backends may expire keys prematurely, and may lack
atomic operations, a lock cannot be guaranteed. These can be thought of
as soft, optimistic locks.

Locks have an expiration, and consumers can block waiting on a lock to
be available or return immediately, giving control over how to best
utilize a lock.

Locks are set by performing an atomic add() with a UUID4 value. If the
value is added, the lock is acquired. If the key already exists, the
caller either blocks waiting for the lock or returns immediately.
Waiting supports a timeout and a configurable time between retries.

When waiting, the lock will periodically check whether it can acquire a
new lock, using the provided retry interval plus some random jitter to
help avoid stampedes where too many consumers try to check and acquire
a lock at the same time.
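The waiting behavior might look roughly like this. The function name
and parameters are hypothetical, chosen for illustration rather than
taken from the real API:

```python
import random
import time


def acquire_blocking(try_acquire_fn, timeout=10.0, retry_interval=0.25):
    """Poll for the lock until acquired or the timeout elapses.

    Random jitter is added to each sleep so that many waiters don't all
    retry at the same instant, which would just recreate the stampede
    the lock is meant to prevent.
    """
    deadline = time.monotonic() + timeout

    while time.monotonic() < deadline:
        token = try_acquire_fn()

        if token is not None:
            return token

        # Sleep the base interval plus up to 50% jitter.
        time.sleep(retry_interval * (1 + random.random() * 0.5))

    return None
```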

Locks are released when they expire or (ideally) when release() is
called. It's also possible they may fall out of cache, at which point
the lock is no longer valid, and suitable logging will occur.

Since there aren't atomic operations around deletes, this will try to
release a lock as safely as possible. If the time spent with the lock is
greater than the expected expiration, it will assume the lock has
expired in cache and won't delete it (it may have been re-acquired
elsewhere). Otherwise, it will attempt to bump the expiration to
keep the key alive long enough to check it, with a worst-case scenario
that the other acquirer may have a new expiration set (likely extending
the lock). This is preferable over deleting another lock.
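A rough sketch of that release strategy, assuming a backend that offers
get(), delete(), and touch(). safe_release() and FakeCache are
illustrative, not the actual djblets.protect.locks code:

```python
import time


class FakeCache:
    """In-memory stand-in for a cache backend (expirations not simulated)."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def delete(self, key):
        self._data.pop(key, None)

    def touch(self, key, timeout):
        # A real backend would extend the key's TTL here.
        pass


def safe_release(cache, key, token, acquired_at, expiration):
    """Release the lock only if it's safe to assume we still hold it."""
    held_for = time.monotonic() - acquired_at

    if held_for >= expiration:
        # Our lock has probably already expired in cache, and the key
        # may now belong to another acquirer. Deleting it would release
        # someone else's lock, so leave it alone.
        return False

    # Extend the key's lifetime so it can't expire between the check
    # below and the delete. Worst case, another acquirer's lock gets a
    # slightly longer expiration, which is preferable to deleting it.
    cache.touch(key, expiration)

    if cache.get(key) == token:
        cache.delete(key)
        return True

    return False
```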

When using a lock as a context manager, both acquiring and releasing the
lock are handled automatically.
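As a sketch of the context-manager behavior (SimpleCacheLock and
FakeCache here are illustrative stand-ins, not the real CacheLock API):

```python
import uuid


class FakeCache:
    """In-memory stand-in for a cache backend."""

    def __init__(self):
        self._data = {}

    def add(self, key, value, timeout=None):
        if key in self._data:
            return False

        self._data[key] = value
        return True

    def get(self, key):
        return self._data.get(key)

    def delete(self, key):
        self._data.pop(key, None)


class SimpleCacheLock:
    """Illustrative context-manager lock, not the real CacheLock."""

    def __init__(self, cache, key, expiration=30):
        self.cache = cache
        self.key = key
        self.expiration = expiration
        self.token = None

    def __enter__(self):
        token = str(uuid.uuid4())

        if self.cache.add(self.key, token, self.expiration):
            self.token = token

        return self

    def __exit__(self, *exc_info):
        # Only delete the key if we still own the lock, so we never
        # release a lock acquired by someone else.
        if self.token is not None and self.cache.get(self.key) == self.token:
            self.cache.delete(self.key)

        self.token = None

        return False
```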

The interface is designed to be largely API-compatible with
threading.Lock and similar lock interfaces, but with more flexibility
useful for distributed lock behavior.

A pattern I expect to be common is locking a cache key while calculating
state to store and then writing it, since the calculation may be
expensive (for instance, talking to a remote service and storing the
result).

For this, cache_memoize() and cache_memoize_iter() have been updated
to work with locks. They now take a lock= argument, which accepts a
CacheLock with the parameters controlling the lock behavior. If
provided, the lock will be acquired if the initial fetch doesn't yield a
value. A second fetch is then attempted (in case it had to wait for
another process to finish), and if it still needs to compute data to
cache, it will do so under the protection of the lock, releasing when
complete.

Locks are entirely optional and not enabled by default for any current
caching behavior, but are something we'll likely want to opt into any
time we're working on caching something that's expensive to generate.

Unit tests pass.
