As discussed with @josef-widder earlier this week and with @marbar3778 and @cmwaters yesterday, it would be helpful, both to the team and users, to have an easy way to know how different aspects of Tendermint are tested, what tests have been run, and what the results of those tests have been (especially for large-scale testnet executions, and for model-based testing).
Some of the work in #8786 touches on this, and this document for Interchain Security is a good example of the structure of the sort of document aimed for here.
The rough overall structure of such a document for Tendermint would be:
1. A list of all QA methods used in the project and when they are used.
2. A table mapping each concern to the relevant QA methods. This could include links to tools or parts of the codebase that perform important/substantial tests (e.g. a link to the model-based tests for the light client).
3. A QA log with dated entries linking to manually executed testnet/MBT/etc. results. The idea here would not be to capture results from any automated testing, as these are assumed to be available via GitHub.
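To make the proposal concrete, a skeleton of such a document might look like the sketch below. All section names, table entries, and log lines are illustrative placeholders, not decided content:

```markdown
# Tendermint QA

## QA methods
- Unit and integration tests (run automatically in CI)
- Model-based testing (MBT)
- Large-scale testnet executions

## Concerns vs. QA methods
| Concern                  | QA method(s)      | Links                        |
|--------------------------|-------------------|------------------------------|
| Light client correctness | Model-based tests | (link to MBT suite)          |
| Consensus liveness       | Testnet execution | (link to testnet tooling)    |

## QA log
- YYYY-MM-DD: large-scale testnet run for <version> — (link to results)
- YYYY-MM-DD: MBT run against <commit> — (link to results)
```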
> 3. A QA log with dated entries linking to manually executed testnet/MBT/etc. results. The idea here would not be to capture results from any automated testing, as these are assumed to be available via GitHub.
While I would also not list all automated test runs here, it would still be good to capture what automation is in place, and when (or for what version/commit) it was put into place.
Of course, it would also be good to note any tests that had to be disabled, and why.