Limit requests when parsing Bitbucket truncated commit webhook payloads.

Review Request #10628 (created July 15, 2019, submitted)

Information

Repository: Review Board
Branch: release-3.0.x
Commit: 463e362...

When Bitbucket has too much data to include in a push event webhook
payload, it truncates the payload, and we're responsible for querying
the API to fetch the commit information we need to parse. This can end
up being very expensive, for two reasons:

1) There might be a very long tree to walk, and we walk all of it.

2) We might get multiple entries in the payload that, sooner or later,
refer to the same URL for walking commits.

This change addresses both. We now walk no more than 5 pages' worth of
commits before giving up (which should be plenty, as that's hundreds of
commits), and if we encounter a URL we've already seen, we stop
walking, since its commits will have already been processed.
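As a rough illustration, here is a minimal sketch of the two guards in
Python. The payload shape and the fetch_page() callable are assumptions
for the example, not the actual hosting service code; only the page cap
and the seen-URL check reflect the change itself.

```python
# Hypothetical sketch of page-capped commit walking with duplicate-URL
# detection. Not the real Review Board implementation.

MAX_COMMIT_PAGES = 5  # Stop after 5 pages of commits per URL.


def iter_commits_from_payload(payload, fetch_page):
    """Yield commits for each new-commits URL in a push event payload.

    ``fetch_page`` is assumed to take a URL and return a dict with a
    ``'values'`` list of commits and an optional ``'next'`` page URL,
    in the style of Bitbucket's paginated API responses.
    """
    seen_urls = set()

    for change in payload.get('push', {}).get('changes', []):
        url = change.get('links', {}).get('commits', {}).get('href')

        if not url or url in seen_urls:
            # Another entry in the payload already pointed here, so
            # these commits are handled once, not once per entry.
            continue

        pages_fetched = 0

        while url and pages_fetched < MAX_COMMIT_PAGES:
            if url in seen_urls:
                break

            seen_urls.add(url)
            page = fetch_page(url)
            pages_fetched += 1

            for commit in page.get('values', []):
                yield commit

            # A 'next' key indicates another page of commits; stopping
            # here once the cap is hit bounds the total API calls.
            url = page.get('next')
```

The important property is that the number of API requests per webhook
is bounded by the page cap times the number of distinct commit URLs,
rather than growing with the full length of the history being pushed.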

These limits will help us keep performance reasonable and stop us from
eating through users' rate limits.

Unit tests pass.
