Add distributed lock functionality, and locked cache updates.
Review Request #14628 — Created Oct. 8, 2025
This introduces `djblets.protect`, a new module for service protection
capabilities, and specifically `djblets.protect.locks.CacheLock`, which
is a simple distributed lock utilizing the cache backend. This can help
avoid cache stampede issues, and overall reduce the work required by a
service.

Locks have an expiration, and consumers can block waiting on a lock to
be available or return immediately, giving control over how to best
utilize a lock.

Locks are set by performing an atomic `add()` with a UUID4. If the value
is added, the lock is acquired. If it already exists, the caller has to
either block waiting or return a result. Waiting supports a timeout and
a time between retries.
Locks are released when they expire or (ideally) when `release()` is
called.
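As a rough sketch of that acquire/release strategy, using Django's cache
API directly (the function names and parameters below are illustrative,
not the actual `CacheLock` internals):

```python
import time
import uuid
from typing import Optional

from django.core.cache import cache


def try_acquire(key: str, expiration: int = 30) -> Optional[str]:
    """Try to acquire a cache-based lock, returning a token on success.

    cache.add() is atomic on backends like memcached: it only stores the
    key if it doesn't already exist. The UUID4 token identifies the
    owner, and the cache timeout gives the lock its expiration.
    """
    token = str(uuid.uuid4())

    if cache.add(key, token, timeout=expiration):
        return token

    return None


def acquire_blocking(key: str, *, expiration: int = 30,
                     timeout: float = 10.0,
                     retry_delay: float = 0.25) -> Optional[str]:
    """Block until the lock is acquired or the wait times out."""
    deadline = time.monotonic() + timeout

    while time.monotonic() < deadline:
        token = try_acquire(key, expiration)

        if token is not None:
            return token

        time.sleep(retry_delay)

    return None


def release(key: str, token: str) -> None:
    """Release the lock, but only if this process still owns it.

    If the lock expired and another process re-acquired it, the stored
    token won't match, and deleting it would break their lock. (A
    get-then-delete still has a small race window; this is a sketch,
    not production code.)
    """
    if cache.get(key) == token:
        cache.delete(key)
```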
When using a lock as a context manager, both acquiring and releasing the
lock are handled automatically.

The interface is designed to be largely API-compatible with
`threading.Lock` and similar lock interfaces, but with more flexibility
useful for distributed lock behavior.
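Given the context manager support and the `threading.Lock`-compatible
surface described above, usage might look something like the following
sketch (the lock key and any constructor defaults are assumptions;
`generate_report()` is a hypothetical expensive operation):

```python
from djblets.protect.locks import CacheLock


def generate_report() -> str:
    # Hypothetical stand-in for expensive work, like a remote API call.
    return 'report'


# Constructor arguments beyond the key are assumed for illustration.
lock = CacheLock('generate-report-lock')

# As a context manager, the lock is acquired on entry and released on
# exit, even if the body raises.
with lock:
    report = generate_report()

# threading.Lock-style explicit calls also work. blocking=False follows
# the threading.Lock convention for a non-blocking attempt, assumed to
# carry over given the stated API compatibility.
if lock.acquire(blocking=False):
    try:
        report = generate_report()
    finally:
        lock.release()
```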
A pattern I expect to be common will be to lock a cache key when
calculating state to store and then writing it, which may be expensive
(for instance, talking to a remote service and storing the result).

For this, `cache_memoize()` and `cache_memoize_iter()` have been updated
to work with locks. They now take a `lock=` argument, which accepts a
`CacheLock` with the parameters controlling the lock behavior. If
provided, the lock will be acquired if the initial fetch doesn't yield a
value. A second fetch is then attempted (in case it had to wait for
another process to finish), and if it still needs to compute data to
cache, it will do so under the protection of the lock, releasing when
complete.
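For example, a locked memoization call might look like this sketch (only
the `lock=` argument itself is described above; the `CacheLock`
construction here is an assumption for illustration):

```python
from djblets.cache.backend import cache_memoize
from djblets.protect.locks import CacheLock


def compute_stats() -> dict:
    # Hypothetical stand-in for expensive work, like querying a remote
    # service.
    return {'total': 42}


# Without lock=, every process that misses the cache at once recomputes
# the value. With it, one process computes under the lock while the
# others wait and then re-fetch the stored result.
stats = cache_memoize('expensive-stats', compute_stats,
                      lock=CacheLock('expensive-stats-lock'))
```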
Locks are entirely optional and not enabled by default for any current
caching behavior, but are something we'll likely want to opt into any
time we're working on caching something that's expensive to generate.
Unit tests pass.
Summary | ID
---|---
| c181dc802fc56790d92b3494a6790ec809bdfce8
- Change Summary:
  - Moved to a new `djblets.protect`, which will be the place for other
    service protection code, like rate limiting.
  - Added to the README and codebase docs.
- Commits:
  Summary | ID
  ---|---
  | a0bda35677806af82016bd478fffdf63c61b6267
  | 5ad002a1428c8244e6a2e92d94a3e93f8d952c50
- Change Summary:
  - Reworked some of the API to be compatible with `threading.Lock` and
    similar.
- Commits:
  Summary | ID
  ---|---
  | 5ad002a1428c8244e6a2e92d94a3e93f8d952c50
  | c181dc802fc56790d92b3494a6790ec809bdfce8