To add a new test case, refer to the following sections:

- [BA Issues](#ba-issues)
  - [helper shell script](#helper-shell-script)
  - [`warmc` and `iwasm` build script](#warmc-and-iwasm-build-script)
  - [Add a new configuration for how to run your issue test case](#add-a-new-configuration-for-how-to-run-your-issue-test-case)
    - [Here is a simply running configuration that only uses `iwasm`](#here-is-a-simply-running-configuration-that-only-uses-iwasm)
    - [Here is a simply running configuration that uses only `wamrc`](#here-is-a-simply-running-configuration-that-uses-only-wamrc)
    - [Here is a simply running configuration that uses both `wamrc` and `iwasm`](#here-is-a-simply-running-configuration-that-uses-both-wamrc-and-iwasm)
  - [For deprecated issue test cases](#for-deprecated-issue-test-cases)
  - [Running test cases and getting results](#running-test-cases-and-getting-results)

## helper shell script

[...]

Simply run `run.py`:

```shell
./run.py
```

To run one or more specific issues, use the `--issues`/`-i` option:

```shell
./run.py --issues 2833       # test one issue: #2833
./run.py -i 2833,2834,2835   # test three issues: #2833, #2834, #2835
```

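As a rough sketch of how such a comma-separated `--issues`/`-i` option could be parsed (assuming `run.py` is a Python script using `argparse`; the names below are illustrative, not the actual implementation):

```python
import argparse

def parse_issue_ids(value: str) -> list[int]:
    """Split a comma-separated id list such as '2833,2834' into ints."""
    return [int(item) for item in value.split(",") if item.strip()]

parser = argparse.ArgumentParser(description="Run issue test cases")
parser.add_argument(
    "--issues", "-i",
    type=parse_issue_ids,
    default=None,
    help="comma-separated issue ids, e.g. 2833,2834,2835",
)

# both spellings of the option fill the same `issues` attribute
args = parser.parse_args(["-i", "2833,2834,2835"])
print(args.issues)  # [2833, 2834, 2835]
```

Passing a callable as `type=` lets `argparse` convert the raw string into a list of ids in one step; `args.issues` stays `None` when the option is omitted, which is the "run everything" case.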
If everything goes well, you should see output similar to the following:

```shell
==== Test results ====
  Total: 22
  Passed: 22
  Failed: 0
  Left issues in folder: no more
  Cases in JSON but not found in folder: no more
```

If you add a test case under the `issues` directory but forget to add its running config in the JSON file, the output will look something like:

```shell
==== Test results ====
  Total: 21
  Passed: 21
  Failed: 0
  missed: 0
  Left issues in folder: {3022}
  Cases in JSON but not found in folder: no more
```

If you add the test case to `running_config.json` but use a wrong id, or forget to add the test case under the `issues` directory, the output will look something like:

```shell
==== Test results ====
  Total: 21
  Passed: 21
  Failed: 0
  missed: 0
  Left issues in folder: {2855}
  Cases in JSON but not found in folder: {100000}
```

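The two trailing fields can be read as set differences between the issue ids found on disk and those registered in `running_config.json`. A minimal sketch of that bookkeeping (hypothetical names and example ids; not the actual `run.py` code):

```python
# ids discovered from subdirectory names under issues/ (example values)
folder_ids = {2833, 2834, 2855}
# ids listed in running_config.json (example values; 100000 is a bad id)
json_ids = {2833, 2834, 100000}

# on disk but never configured: reported as "Left issues in folder"
left_in_folder = folder_ids - json_ids
# configured but missing on disk: "Cases in JSON but not found in folder"
not_found = json_ids - folder_ids

print(left_in_folder)  # {2855}
print(not_found)       # {100000}
```

Either set being non-empty signals a mismatch between the `issues` directory and the JSON config, matching the two failure modes shown above.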
If some test cases fail, the output will look something like:

```shell
==== Test results ====
  Total: 22
  Passed: 21
  Failed: 1
  Left issues in folder: no more
  Cases in JSON but not found in folder: no more
```

A log file named `issues_tests.log` will also be generated, containing the details of the failing cases, for example: