Add distributed lock functionality, and locked cache updates.

Review Request #14628 — Created Oct. 8, 2025

Information

Djblets
release-5.x

Reviewers

This introduces djblets.protect, a new module for service protection
capabilities, and specifically djblets.protect.locks.CacheLock, a
simple distributed lock built on the cache backend. This can help
avoid cache stampedes and reduce the overall work performed by a
service.

Locks have an expiration, and consumers can block waiting on a lock to
be available or return immediately, giving control over how to best
utilize a lock.

Locks are acquired by performing an atomic add() with a UUID4 value. If
the value is added, the lock is acquired. If a value already exists, the
lock is held elsewhere, and the caller can either block waiting for it
or return immediately. Waiting supports a timeout and a delay between
retries.

Locks are released when they expire or (ideally) when release() is
called.

When using a lock as a context manager, acquiring and releasing the
lock are handled automatically.

The interface is designed to be largely API-compatible with
threading.Lock and similar lock interfaces, but with more flexibility
useful for distributed lock behavior.
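The acquisition and release mechanics described above can be sketched
as follows. This is an illustrative, self-contained model, not the
actual CacheLock implementation: the class name, the parameter names
(blocking, timeout, retry_interval), and the in-memory cache stand-in
are all assumptions. A real deployment would go through Django's cache
backend, whose add() is atomic on backends like memcached.

```python
import time
import uuid


class FakeCache:
    """In-memory stand-in for a cache backend with an atomic add().

    add() returns True only if the key was not already present. The
    expiration argument is ignored here, but a real backend would evict
    the key (and thus release the lock) after it expires.
    """

    def __init__(self):
        self._data = {}

    def add(self, key, value, timeout=None):
        if key in self._data:
            return False

        self._data[key] = value
        return True

    def get(self, key):
        return self._data.get(key)

    def delete(self, key):
        self._data.pop(key, None)


class SketchCacheLock:
    """Illustrative distributed lock stored in a cache backend."""

    def __init__(self, cache, key, expiration=60):
        self.cache = cache
        self.key = key
        self.expiration = expiration
        self._token = None

    def acquire(self, blocking=True, timeout=5.0, retry_interval=0.1):
        deadline = time.monotonic() + timeout

        while True:
            # Attempt an atomic add() with a unique UUID4 token.
            # Success means we now own the lock.
            token = str(uuid.uuid4())

            if self.cache.add(self.key, token, timeout=self.expiration):
                self._token = token
                return True

            # Someone else holds the lock: either give up immediately
            # or retry until the timeout elapses.
            if not blocking or time.monotonic() >= deadline:
                return False

            time.sleep(retry_interval)

    def release(self):
        # Only delete the key if our token is still stored, guarding
        # against releasing a lock that expired and was re-acquired by
        # another process in the meantime.
        if self._token is not None and self.cache.get(self.key) == self._token:
            self.cache.delete(self.key)

        self._token = None

    # Context manager support, mirroring threading.Lock usage.
    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, *exc_info):
        self.release()
```

The token comparison in release() is why a UUID4 is stored rather than
a fixed value: it ensures a holder that outlived its expiration can't
delete a lock now owned by someone else.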

A pattern I expect to be common will be to lock a cache key when
calculating state to store and then writing it, which may be expensive
(for instance, talking to a remote service and storing the result).

For this, cache_memoize() and cache_memoize_iter() have been updated
to work with locks. They now take a lock= argument, which accepts a
CacheLock with the parameters controlling the lock behavior. If
provided, the lock will be acquired if the initial fetch doesn't yield a
value. A second fetch is then attempted (in case it had to wait for
another process to finish), and if it still needs to compute data to
cache, it will do so under the protection of the lock, releasing when
complete.
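The fetch, lock, re-fetch, compute flow described above can be sketched
like this. It is a simplified stand-in, not the actual cache_memoize()
implementation: DictCache and locked_memoize are illustrative names,
and threading.Lock stands in for a CacheLock here, which works because
of the compatible acquire()/release() interface.

```python
import threading


class DictCache:
    """Minimal cache stand-in (real code would use Django's cache)."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value


def locked_memoize(cache, key, compute, lock):
    # 1) Initial fetch: a cache hit needs no locking at all.
    value = cache.get(key)

    if value is not None:
        return value

    # 2) Acquire the lock. This may block while another process
    #    finishes computing and storing the value.
    lock.acquire()

    try:
        # 3) Second fetch: the cache may have been filled while we
        #    waited on the lock.
        value = cache.get(key)

        if value is None:
            # 4) Still empty, so compute and store the value under the
            #    protection of the lock.
            value = compute()
            cache.set(key, value)

        return value
    finally:
        lock.release()
```

The second fetch is the key step: without it, every process that waited
on the lock would redo the expensive computation after acquiring it.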

Locks are entirely optional and not enabled by default for any current
caching behavior, but are something we'll likely want to opt into any
time we're working on caching something that's expensive to generate.

Unit tests pass.

Summary ID
Add distributed lock functionality, and locked cache updates.
c181dc802fc56790d92b3494a6790ec809bdfce8
chipx86
Review request changed
Change Summary:

Reworked some of the API to be compatible with threading.Lock and similar.

Commits:
Summary ID
Add distributed lock functionality, and locked cache updates.
5ad002a1428c8244e6a2e92d94a3e93f8d952c50
Add distributed lock functionality, and locked cache updates.
c181dc802fc56790d92b3494a6790ec809bdfce8

Checks run (2 succeeded)

flake8 passed.
JSHint passed.