Add manual run and retry for Review Bot tools.
Review Request #11130 — Created Aug. 10, 2020 and submitted
This change adds the ability for Review Bot to run tools manually. This
is used in two ways:
- Tool configurations can now be set to only be run manually. In this
  case, the tool will create a status update in the NOT_YET_RUN state.
- All tool configurations will mark the status update as retryable,
  which will allow triggering a retry in the case of error or timeout
  states.
Based on work by Alex Klemenchuk at /r/10316
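The two behaviors described above (manual-only configurations starting in NOT_YET_RUN, and every configuration being retryable after an error or timeout) can be modeled with a small sketch. This is an illustrative plain-Python model, not Review Bot's actual classes; the state names mirror the description, but `StatusUpdate`, `can_retry`, and `can_run` are hypothetical stand-ins here.

```python
from enum import Enum


class StatusUpdateState(Enum):
    """States a tool's status update can move through (illustrative)."""

    NOT_YET_RUN = 'not-yet-run'
    PENDING = 'pending'
    DONE_SUCCESS = 'done-success'
    ERROR = 'error'
    TIMEOUT = 'timeout'


class StatusUpdate:
    """Minimal model of a status update attached to a review request."""

    def __init__(self, run_manually: bool):
        # Manual-only configurations start out NOT_YET_RUN, waiting for
        # the user to click "Run"; everything else is queued right away.
        self.state = (StatusUpdateState.NOT_YET_RUN if run_manually
                      else StatusUpdateState.PENDING)

        # All configurations are retryable, so a "Retry" action can be
        # offered after an error or timeout.
        self.can_retry = True

    def can_run(self) -> bool:
        """Whether a "Run"/"Retry" action should be offered."""
        return self.state in (StatusUpdateState.NOT_YET_RUN,
                              StatusUpdateState.ERROR,
                              StatusUpdateState.TIMEOUT)
```

In this sketch, the "Run" button maps to `can_run()` being true for NOT_YET_RUN, and "Retry" maps to the same check for ERROR and TIMEOUT.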
- Created a configuration set to only run manually. Verified that the
  initial status update was correctly created in the NOT_YET_RUN state,
  and that clicking the "Run" button triggered a tool run and subsequent
  update.
- Triggered the run of a tool not normally set to run manually, but
  without running the worker process (resulting in a timeout). Saw that
  the "Retry" button was shown and that clicking it re-ran the tool.
- Tested that the correct diffset was chosen when running and retrying
  tools.
Description | From | Last Updated
---|---|---
This needs some work to gate off the Run Manually capabilities if not present in Review Board. We don't want … | chipx86 |
_get_matching_configs() is a generator, so the if check and len and [0] below don't seem like they'd work? | chipx86 |
It's apparent in retrospect, but (and this is probably me being tired) it took too long to realize why we … | chipx86 |
Probably worth tracking this in Asana with some info on what's wrong. | chipx86 |
Maybe worth doing this filtering in _get_matching_configs so we don't compute any more data than we have to? | chipx86 |
Got a bunch of thoughts here. It seems we can get, say, 15 configs here. So: Each time we get … | chipx86 |
Can we use update_fields= here? | chipx86 |
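One of the comments above notes that `_get_matching_configs()` is a generator, so an `if` check combined with `len()` and `[0]` indexing won't work on its result. The sketch below (with a hypothetical stand-in for the real function) shows why, and the usual fix of materializing the generator into a list first.

```python
def get_matching_configs(configs, tool_id):
    """Illustrative stand-in for _get_matching_configs(): a generator."""
    for config in configs:
        if config['tool_id'] == tool_id:
            yield config


configs = [{'tool_id': 'flake8'}, {'tool_id': 'jshint'}]

gen = get_matching_configs(configs, 'flake8')

# Generators don't support len() or indexing; this raises TypeError.
try:
    len(gen)
except TypeError:
    pass

# The usual fix: materialize the generator into a list first, then
# truthiness checks, len(), and [0] all behave as expected.
matching = list(get_matching_configs(configs, 'flake8'))
if matching:
    first = matching[0]
```

A generator is also single-use: once iterated (even partially), it can't be rewound, which is another reason to collect the results before checking them more than once.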
Change Summary:
  Wire off new functionality for old RB versions.
Commit:
  59bfcfb56b861456587330aad859874023a9e1e9d4a10f39c06b4d85d5e118856ac4cab57e303e41
Checks run (2 succeeded)
It's apparent in retrospect, but (and this is probably me being tired) it took too long to realize why we were checking this and bailing. Can we add a comment saying that we're skipping status updates performed by other scripts or extensions?
Maybe worth doing this filtering in _get_matching_configs so we don't
compute any more data than we have to?
Got a bunch of thoughts here.

It seems we can get, say, 15 configs here. So:

- Each time we get one, we're setting some of the same state on
  status_update and then saving it, which ideally we wouldn't do (too
  many hits to the database, unnecessary timestamp updates).
- We're also fetching the diff each time. This is going to be the same
  diff every time, but we're pulling it from the database each time.
- We're also setting up a session using extension.login_user() above,
  even if we have zero matching configs returned.

I might be wrong, but I think a better approach would be to have
_get_matching_configs() do the filtering (as mentioned in a prior
comment) and then:

    if matching_configs:
        repository = ...
        changedesc = ...

        # Fetch the diffset here.
        try:
            ...
        except DiffSet.DoesNotExist:
            ...
            return

        session = extension.login_user()
        ...

        status_update.description = ...
        status_update.state = ...
        status_update.timestamp = ...
        status_update.save(update_fields=('description', 'state',
                                          'timestamp'))

        for ... in matching_configs:
            queue = ...
            extension.celery.send_task(...)
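The restructuring sketched above amounts to hoisting loop-invariant work (the diff fetch, the session login, the single status-update save) out of the per-config loop. The toy example below uses a hypothetical counter class in place of the ORM to show the difference in database hits:

```python
class FakeDB:
    """Counts lookups, standing in for the database/ORM (hypothetical)."""

    def __init__(self):
        self.fetches = 0

    def get_diffset(self):
        self.fetches += 1
        return 'diffset-1'


def queue_tools_naive(db, matching_configs):
    """Fetches the same diffset once per config: N database hits."""
    tasks = []
    for config in matching_configs:
        diffset = db.get_diffset()
        tasks.append((config, diffset))
    return tasks


def queue_tools_hoisted(db, matching_configs):
    """Fetches the diffset once, before the loop: 1 hit (0 if empty)."""
    tasks = []
    if matching_configs:
        diffset = db.get_diffset()
        for config in matching_configs:
            tasks.append((config, diffset))
    return tasks
```

The `if matching_configs:` guard also captures the third point in the comment: with zero matching configs, no setup work (and no database hit) happens at all.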
Bugs:
Commit:
  d4a10f39c06b4d85d5e118856ac4cab57e303e4148ce7b29496da10dc4caf282d85a5129b2cbd83e
Diff:
Revision 3 (+218 -65)
Checks run (2 succeeded)
flake8 passed. JSHint passed.