In Digital.ai Release deliveries, I want to set, change, and view the completion percentage of a specific task in a specific phase.
Possible usage:
- define how many testcases are already processed (e.g. 90%)
- show the percentage of the installation (e.g. 45%)
The current possible statuses (SKIPPED, READY, NOT_READY) should be extended so that we can additionally set the task's completion percentage.
Values should be integers from 0 to 100, where 100 could be treated as equivalent to the already existing status "READY".
Values should be changeable via Jython scripts (or a dedicated task type).
Values should be changeable via user interface.
See also https://support.digital.ai/hc/requests/204858
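A minimal sketch of how the proposed scripting interface might be used. Note that `task.setCompletionPercentage` is an assumed name for an API that does not exist yet; the helper only illustrates the intended 0-100 integer contract:

```python
def clamp_percentage(value):
    """Coerce a raw progress value into the proposed 0-100 integer range."""
    return max(0, min(100, int(value)))

# Hypothetical usage inside a Digital.ai Release Jython script task.
# 'task' and 'setCompletionPercentage' are assumed names, not an existing API:
# task.setCompletionPercentage(clamp_percentage(90))
```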
by: Michael S. | 11 months ago
Comments
- Could you share a real-life scenario that demonstrates how you intend to use it?
- How do you decide on the suitable percentage value for a specific tracked item? Is this determination a manual process, or is it calculated using an external tool?
We're keen to explore whether a wider customer base is experiencing comparable situations, and welcome any upvotes or comments.
Use case "How many test cases are already processed":
We have several applications, and each application consists of a number of deployments (= releases). To test the go-live candidates, we have to run several (automated) test suites, each consisting of multiple test cases.
I would set the percentage based on the number of already executed test suites, knowing how many suites have to be completed in total. I would do this via a Jython script (or via a dedicated task if you provide one).
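The calculation for this use case is straightforward; a sketch of the Jython-style helper such a script could use (the function name is illustrative, not an existing API):

```python
def suites_progress(executed, total):
    """Percentage of completed test suites, as an integer from 0 to 100.

    Guards against a zero total and caps the result at 100 so the value
    always fits the proposed 0-100 contract.
    """
    if total <= 0:
        return 0
    return min(100, (100 * executed) // total)
```

For example, with 9 of 10 suites executed the task would report 90.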
Use case "Show percentage of the installation":
We need to deploy the artefacts to several destinations (e.g. several clouds). Knowing how many stages we have to deploy to and how many are already deployed (each deployment is one Jenkins job), I can again calculate the percentage of completed deployments.
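For this use case, the script would track per-stage completion rather than a simple count. A sketch, assuming the stage results have already been collected from the Jenkins jobs (the data shape here is an assumption, not an existing API):

```python
def deployment_progress(stage_done):
    """Percentage of fully deployed stages, as an integer from 0 to 100.

    'stage_done' maps each stage name to True once its Jenkins job has
    finished deploying (hypothetical data gathered by the script).
    """
    if not stage_done:
        return 0
    finished = sum(1 for done in stage_done.values() if done)
    return (100 * finished) // len(stage_done)
```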
This is useful because users currently see either "Waiting for activity" (the clock) or "Completed" (the checkmark), but they cannot tell whether deployment or testing has actually started, is still running, or whether there is a problem with the process or with the UI updating. The first use case can take hours and the second up to half an hour, so quicker and easier-to-understand feedback is preferable.