Testing in QEMU
===============

This document describes the testing infrastructure in QEMU.

Testing with "make check"
-------------------------

The "make check" testing family includes most of the C based tests in QEMU. For
quick help, run ``make check-help`` from the source tree.

The usual way to run these tests is:

.. code::

  make check

which includes QAPI schema tests, unit tests, QTests and some iotests.
Different sub-types of "make check" tests will be explained below.

Before running tests, it is best to build the QEMU programs first. Some tests
expect the executables to exist and will fail with obscure messages if they
cannot find them.

Unit tests
~~~~~~~~~~

Unit tests, which can be invoked with ``make check-unit``, are simple C tests
that typically link to individual QEMU object files and exercise them by
calling exported functions.

If you are writing new code in QEMU, consider adding a unit test, especially
for utility modules that are relatively stateless or have few dependencies. To
add a new unit test:

1. Create a new source file. For example, ``tests/unit/foo-test.c``.

2. Write the test. Normally you would include the header file which exports
   the module API, then verify the interface behaves as expected from your
   test. The test code should be organized with the glib testing framework
   (a minimal skeleton is sketched after this list).
   Copying and modifying an existing test is usually a good idea.

3. Add the test to ``tests/unit/meson.build``. The unit tests are listed in a
   dictionary called ``tests``. The values are any additional sources and
   dependencies to be linked with the test. For a simple test whose source
   is in ``tests/unit/foo-test.c``, it is enough to add an entry like::

     {
       ...
       'foo-test': [],
       ...
     }

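For illustration, here is a minimal skeleton of such a test using the glib
testing framework. The module under test, its ``foo.h`` header and the
``foo_add()`` function, are hypothetical placeholders; substitute the API you
actually want to exercise:

.. code::

  /* tests/unit/foo-test.c -- illustrative sketch, not an existing test */
  #include "qemu/osdep.h"
  #include "foo.h"   /* hypothetical header exporting the module API */

  static void test_foo_add(void)
  {
      /* exercise the exported function and check its result */
      g_assert_cmpint(foo_add(2, 3), ==, 5);
  }

  int main(int argc, char **argv)
  {
      g_test_init(&argc, &argv, NULL);
      g_test_add_func("/foo/add", test_foo_add);
      return g_test_run();
  }
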
Since unit tests don't require environment variables, the simplest way to debug
a unit test failure is often directly invoking it or even running it under
``gdb``. However there can still be differences in behavior between ``make``
invocations and your manual run, due to the ``$MALLOC_PERTURB_`` environment
variable (which affects memory reclamation and catches invalid pointers better)
and gtester options. If necessary, you can run

.. code::

  make check-unit V=1

and copy the actual command line which executes the unit test, then run
it from the command line.

QTest
~~~~~

QTest is a device emulation testing framework. It can be very useful to test
device models; it can also control certain aspects of QEMU (such as virtual
clock stepping), using a special purpose "qtest" protocol. Refer to
:doc:`qtest` for more details.

QTest cases can be executed with

.. code::

  make check-qtest

QAPI schema tests
~~~~~~~~~~~~~~~~~

The QAPI schema tests validate the QAPI parser used by QMP, by feeding
predefined input to the parser and comparing the result with the reference
output.

The input/output data is managed under the ``tests/qapi-schema`` directory.
Each test case includes four files that have a common base name:

* ``${casename}.json`` - the file contains the JSON input for feeding the
  parser
* ``${casename}.out`` - the file contains the expected stdout from the parser
* ``${casename}.err`` - the file contains the expected stderr from the parser
* ``${casename}.exit`` - the expected error code

Consider adding a new QAPI schema test when you are making a change to the QAPI
parser (either fixing a bug or extending/modifying the syntax). To do this:

1. Add four files for the new case as explained above. For example:

   ``$EDITOR tests/qapi-schema/foo.{json,out,err,exit}``.

2. Add the new test in ``tests/Makefile.include``. For example:

   ``qapi-schema += foo.json``

check-block
~~~~~~~~~~~

``make check-block`` runs a subset of the block layer iotests (the tests that
are in the "auto" group).
See the "QEMU iotests" section below for more information.

QEMU iotests
------------

QEMU iotests, under the directory ``tests/qemu-iotests``, is the testing
framework widely used to test block layer related features. It is higher level
than the "make check" tests and 99% of the code is written in bash or Python
scripts. The success criterion is golden output comparison, and the
test files are named with numbers.

To run iotests, make sure QEMU is built successfully, then switch to the
``tests/qemu-iotests`` directory under the build directory, and run ``./check``
with desired arguments from there.

By default, the "raw" format and the "file" protocol are used; all tests will
be executed, except the unsupported ones. You can override the format and
protocol with arguments:

.. code::

  # test with qcow2 format
  ./check -qcow2
  # or test a different protocol
  ./check -nbd

It's also possible to list test numbers explicitly:

.. code::

  # run selected cases with qcow2 format
  ./check -qcow2 001 030 153

The cache mode can be selected with the "-c" option, which may help reveal bugs
that are specific to certain cache modes.

More options are supported by the ``./check`` script; run ``./check -h`` for
help.

Writing a new test case
~~~~~~~~~~~~~~~~~~~~~~~

Consider writing a test case when you are making any changes to the block
layer. An iotest case is usually the choice for that. There are already many
test cases, so it is possible that extending one of them may achieve the goal
and save the boilerplate of creating a new one. (Unfortunately, there isn't a
100% reliable way to find a related one out of hundreds of tests. One approach
is using ``git grep``.)

Usually an iotest case consists of two files. One is an executable that
produces output to stdout and stderr, the other is the expected reference
output. They are given the same number in their file names, e.g. test script
``055`` and reference output ``055.out``.

In rare cases, when outputs differ between cache mode ``none`` and others, a
``.out.nocache`` file is added. In other cases, when outputs differ between
image formats, more than one ``.out`` file is created, ending with the
respective format names, e.g. ``178.out.qcow2`` and ``178.out.raw``.

There isn't a hard rule about how to write a test script, but a new test is
usually a (copy and) modification of an existing case. There are a few
commonly used ways to create a test:

* A Bash script. It will make use of several environment variables related
  to the testing procedure, and can source a group of ``common.*`` libraries
  for some common helper routines.

* A Python unittest script. Import ``iotests`` and create a subclass of
  ``iotests.QMPTestCase``, then call the ``iotests.main`` method (a minimal
  sketch follows this list). The downside of this approach is that the output
  is sparse, and the script is considered harder to debug.

* A simple Python script without using the unittest module. This can also
  import ``iotests`` for launching QEMU and other utilities, but it doesn't
  inherit from ``iotests.QMPTestCase`` and therefore doesn't use the Python
  unittest execution. This is a combination of the first two approaches.

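Below is a minimal sketch of the unittest flavor. The class name, image size
and the QMP query are made up for illustration; real cases follow the same
structure and use the helpers available in the ``iotests`` module:

.. code::

  #!/usr/bin/env python3
  # Illustrative sketch of a unittest-style iotest; TestExample and the
  # values used here are hypothetical.
  import os
  import iotests
  from iotests import qemu_img

  test_img = os.path.join(iotests.test_dir, 'test.img')

  class TestExample(iotests.QMPTestCase):
      def setUp(self):
          # create a scratch image and launch a VM with it attached
          qemu_img('create', '-f', iotests.imgfmt, test_img, '1M')
          self.vm = iotests.VM().add_drive(test_img)
          self.vm.launch()

      def tearDown(self):
          self.vm.shutdown()
          os.remove(test_img)

      def test_query_block(self):
          result = self.vm.qmp('query-block')
          self.assert_qmp(result, 'return[0]/device', 'drive0')

  if __name__ == '__main__':
      iotests.main(supported_fmts=['raw', 'qcow2'])
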
Pick the language per your preference since both Bash and Python have
comparable library support for invoking and interacting with QEMU programs. If
you opt for Python, it is strongly recommended to write Python 3 compatible
code.

Both the Python and Bash frameworks in iotests provide helpers to manage test
images. They can be used to create and clean up images under the test
directory. If no I/O or any protocol specific feature is needed, it is often
more convenient to use the pseudo block driver, ``null-co://``, as the test
image, which doesn't require image creation or cleaning up. Avoid system-wide
devices or files whenever possible, such as ``/dev/null`` or ``/dev/zero``.
Otherwise, image locking implications have to be considered. For example,
another application on the host may have locked the file, possibly leading to a
test failure. If using such devices is explicitly desired, consider adding
the ``locking=off`` option to disable image locking.

Debugging a test case
~~~~~~~~~~~~~~~~~~~~~

The following options to the ``check`` script can be useful when debugging
a failing test:

* ``-gdb`` wraps every QEMU invocation in a ``gdbserver``, which waits for a
  connection from a gdb client (see the example after this list). The options
  given to ``gdbserver`` (e.g. the address on which to listen for connections)
  are taken from the ``$GDB_OPTIONS`` environment variable. By default
  (if ``$GDB_OPTIONS`` is empty), it listens on ``localhost:12345``.
  It is possible to connect to it for example with
  ``gdb -iex "target remote $addr"``, where ``$addr`` is the address
  ``gdbserver`` listens on.
  If the ``-gdb`` option is not used, ``$GDB_OPTIONS`` is ignored,
  regardless of whether it is set or not.

* ``-valgrind`` attaches a valgrind instance to QEMU. If it detects
  warnings, it will print and save the log in
  ``$TEST_DIR/<valgrind_pid>.valgrind``.
  The final command line will be
  ``valgrind --log-file=$TEST_DIR/<valgrind_pid>.valgrind --error-exitcode=99 $QEMU ...``

* ``-d`` (debug) just increases the logging verbosity, showing
  for example the QMP commands and answers.

* ``-p`` (print) redirects QEMU's stdout and stderr to the test output,
  instead of saving it into a log file in
  ``$TEST_DIR/qemu-machine-<random_string>``.

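As an illustration of the ``-gdb`` flow, the following session runs a single
test under ``gdbserver`` and attaches to it from another terminal (the address
and the test number are arbitrary examples):

.. code::

  # terminal 1: run one test with every QEMU invocation wrapped in gdbserver
  $ GDB_OPTIONS='localhost:12345' ./check -qcow2 -gdb 040

  # terminal 2: attach a gdb client to the address gdbserver listens on
  $ gdb -iex "target remote localhost:12345"
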
Test case groups
~~~~~~~~~~~~~~~~

Tests may belong to one or more test groups, which are defined in the form
of a comment in the test source file. By convention, test groups are listed
in the second line of the test file, after the "#!/..." line, like this:

.. code::

  #!/usr/bin/env python3
  # group: auto quick
  #
  ...

Another way of defining groups is creating the
``tests/qemu-iotests/group.local`` file. This should be used only for
downstream purposes (the file should never appear upstream). It may be used
for defining some downstream test groups or for temporarily disabling tests,
like this:

.. code::

  # groups for some company downstream process
  #
  # ci - tests to run on build
  # down - our downstream tests, not for upstream
  #
  # Format of each line is:
  # TEST_NAME TEST_GROUP [TEST_GROUP ]...

  013 ci
  210 disabled
  215 disabled
  our-ugly-workaround-test down ci

Note that the following group names have a special meaning:

- quick: Tests in this group should finish within a few seconds.

- auto: Tests in this group are used during "make check" and should be
  runnable in any case. That means they should run with every QEMU binary
  (also non-x86), with every QEMU configuration (i.e. must not fail if
  an optional feature is not compiled in - but reporting a "skip" is ok),
  work at least with the qcow2 file format, work with all kinds of host
  filesystems and users (e.g. "nobody" or "root") and must not take too
  much memory and disk space (since CI pipelines tend to fail otherwise).

- disabled: Tests in this group are disabled and ignored by check.

.. _container-ref:

Container based tests
---------------------

Introduction
~~~~~~~~~~~~

The container testing framework in QEMU utilizes public images to
build and test QEMU in predefined and widely accessible Linux
environments. This makes it possible to expand the test coverage
across distros, toolchain flavors and library versions. The support
was originally written for Docker, although we also support Podman as
an alternative container runtime. Although many of the target
names and scripts are prefixed with "docker", the system will
automatically run on whichever is configured.

The container images are also used to augment the generation of tests
for testing TCG. See :ref:`checktcg-ref` for more details.

Docker Prerequisites
~~~~~~~~~~~~~~~~~~~~

Install "docker" with the system package manager and start the Docker service
on your development machine, then make sure you have the privilege to run
Docker commands. Typically this means setting up a passwordless ``sudo docker``
command or logging in as root. For example:

.. code::

  $ sudo yum install docker
  $ # or `apt-get install docker` for Ubuntu, etc.
  $ sudo systemctl start docker
  $ sudo docker ps

The last command should print an empty table, to verify the system is ready.

An alternative method to set up permissions is by adding the current user to
the "docker" group and making the docker daemon socket file (by default
``/var/run/docker.sock``) accessible to the group:

.. code::

  $ sudo groupadd docker
  $ sudo usermod $USER -a -G docker
  $ sudo chown :docker /var/run/docker.sock

Note that any one of the above configurations makes it possible for the user to
exploit the whole host with Docker bind mounting or other privileged
operations. So only do it on development machines.

Podman Prerequisites
~~~~~~~~~~~~~~~~~~~~

Install "podman" with the system package manager.

.. code::

  $ sudo dnf install podman
  $ podman ps

The last command should print an empty table, to verify the system is ready.

Quickstart
~~~~~~~~~~

From the source tree, type ``make docker-help`` to see the help. Testing
can be started without configuring or building QEMU (``configure`` and
``make`` are done in the container, with parameters defined by the
make target):

.. code::

  make docker-test-build@centos8

This will create a container instance using the ``centos8`` image (the image
is downloaded and initialized automatically), in which the ``test-build`` job
is executed.

Registry
~~~~~~~~

The QEMU project has a container registry hosted by GitLab at
``registry.gitlab.com/qemu-project/qemu`` which will automatically be
used to pull in pre-built layers. This avoids unnecessary strain on
the distro archives created by multiple developers running the same
container build steps over and over again. This can be overridden
locally by using the ``NOCACHE`` build option:

.. code::

  make docker-image-debian10 NOCACHE=1

Images
~~~~~~

Along with many other images, the ``centos8`` image is defined in a Dockerfile
in ``tests/docker/dockerfiles/``, called ``centos8.docker``. The ``make
docker-help`` command lists all the available images.

To add a new image, simply create a new ``.docker`` file under the
``tests/docker/dockerfiles/`` directory.

A ``.pre`` script can be added beside the ``.docker`` file, which will be
executed before building the image under the build context directory. This is
mainly used to do necessary host side setup. One example of such setup is
registering ``binfmt_misc`` handlers to make qemu-user powered cross-build
containers work.

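For example, assuming a hypothetical new image definition
``tests/docker/dockerfiles/foo.docker`` has been added, it becomes available to
the existing make targets under the name ``foo``:

.. code::

  # build the new image (the name is derived from the .docker file name)
  make docker-image-foo

  # then use it for any of the docker test jobs
  make docker-test-build@foo
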
Tests
~~~~~

Different tests are added to cover various configurations to build and test
QEMU. Docker tests are the executables under ``tests/docker`` named
``test-*``. They are typically shell scripts and are built on top of a shell
library, ``tests/docker/common.rc``, which provides helpers to find the QEMU
source and build it.

The full list of tests is printed in the ``make docker-help`` help.

Debugging a Docker test failure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When a CI task, a maintainer or your own testing reports a Docker test failure,
follow the steps below to debug it:

1. Locally reproduce the failure with the reported command line. E.g. run
   ``make docker-test-mingw@fedora J=8``.
2. Add "V=1" to the command line and try again, to see the verbose output.
3. Further add "DEBUG=1" to the command line. This will pause at a shell prompt
   in the container right before testing starts. You can either manually
   build QEMU and run tests from there, or press Ctrl-D to let the Docker
   testing continue.
4. If you press Ctrl-D, the same building and testing procedure will begin, and
   will hopefully run into the error again. After that, you will be dropped
   back to the prompt for debugging.

Options
~~~~~~~

Various options can be used to affect how Docker tests are done. The full
list is in the ``make docker`` help text. The frequently used ones are:

* ``V=1``: the same as in top level ``make``. It will be propagated to the
  container and enable verbose output.
* ``J=$N``: the number of parallel tasks in make commands in the container,
  similar to the ``-j $N`` option in top level ``make``. (The ``-j`` option in
  top level ``make`` will not be propagated into the container.)
* ``DEBUG=1``: enables debug. See the previous "Debugging a Docker test
  failure" section.

Thread Sanitizer
----------------

Thread Sanitizer (TSan) is a tool which can detect data races. QEMU supports
building and testing with this tool.

For more information on TSan:

https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual

Thread Sanitizer in Docker
~~~~~~~~~~~~~~~~~~~~~~~~~~

TSan is currently supported in the ubuntu2004 docker image.

The test-tsan test will build using TSan and then run make check.

.. code::

  make docker-test-tsan@ubuntu2004

TSan warnings under docker are placed in files located at ``build/tsan/``.

We recommend using DEBUG=1 to allow launching the test from inside the
container, and to allow review of the warnings generated by TSan.

Building and Testing with TSan
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to build and test with TSan, with a few additional steps.
These steps are normally done automatically in the docker.

There is a one time patch needed in clang-9 or clang-10 at this time:

.. code::

  sed -i 's/^const/static const/g' \
      /usr/lib/llvm-10/lib/clang/10.0.0/include/sanitizer/tsan_interface.h

To configure the build for TSan:

.. code::

  ../configure --enable-tsan --cc=clang-10 --cxx=clang++-10 \
               --disable-werror --extra-cflags="-O0"

The runtime behavior of TSan is controlled by the TSAN_OPTIONS environment
variable.

More information on TSAN_OPTIONS can be found here:

https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags

For example:

.. code::

  export TSAN_OPTIONS=suppressions=<path to qemu>/tests/tsan/suppressions.tsan \
                      detect_deadlocks=false history_size=7 exitcode=0 \
                      log_path=<build path>/tsan/tsan_warning

The above exitcode=0 has TSan continue without error if any warnings are found.
This allows for running the test and then checking the warnings afterwards.
If you want TSan to stop and exit with an error on warnings, use exitcode=66.

TSan Suppressions
~~~~~~~~~~~~~~~~~

Keep in mind that a data race reported by TSan is not necessarily an actual
bug. TSan provides several different mechanisms for suppressing warnings. In
general it is recommended to fix the code, if possible, to eliminate the data
race rather than suppress the warning.

A few important files for suppressing warnings are:

``tests/tsan/suppressions.tsan`` - Has TSan warnings we wish to suppress at
runtime. The comment on each suppression will typically indicate why we are
suppressing it. More information on the file format can be found here:

https://github.com/google/sanitizers/wiki/ThreadSanitizerSuppressions

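For illustration, a runtime suppression entry is a single line naming the
report type and a pattern to match against the report; the function name below
is a made-up example:

.. code::

  # suppress a report considered benign (hypothetical symbol name)
  race:some_benign_function
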
``tests/tsan/blacklist.tsan`` - Has TSan warnings we wish to disable
at compile time for test or debug. Add flags to configure to enable it:

"--extra-cflags=-fsanitize-blacklist=<src path>/tests/tsan/blacklist.tsan"

More information on the file format can be found here under "Blacklist Format":

https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags

TSan Annotations
~~~~~~~~~~~~~~~~

``include/qemu/tsan.h`` defines annotations. See this file for more
descriptions of the annotations themselves. Annotations can be used to suppress
TSan warnings or to give TSan more information so that it can detect proper
relationships between accesses of data.

Annotation examples can be found here:

https://github.com/llvm/llvm-project/tree/master/compiler-rt/test/tsan/

Good files to start with are: annotate_happens_before.cpp and ignore_race.cpp

The full set of annotations can be found here:

https://github.com/llvm/llvm-project/blob/master/compiler-rt/lib/tsan/rtl/tsan_interface_ann.cpp

VM testing
----------

This test suite contains scripts that bootstrap various guest images that have
the necessary packages to build QEMU. The basic usage is documented in the
``Makefile`` help, which is displayed with ``make vm-help``.

Quickstart
~~~~~~~~~~

Run ``make vm-help`` to list available make targets. Invoke a specific make
command to run a build test in an image. For example, ``make vm-build-freebsd``
will build the source tree in the FreeBSD image. The command can be executed
from either the source tree or the build dir; if the former, ``./configure`` is
not needed. The command will then generate the test image in ``./tests/vm/``
under the working directory.

Note: images created by the scripts accept a well-known RSA key pair for SSH
access, so they SHOULD NOT be exposed to external interfaces if you are
concerned about attackers taking control of the guest and potentially
exploiting a QEMU security bug to compromise the host.

QEMU binaries
~~~~~~~~~~~~~

By default, ``qemu-system-x86_64`` is searched for in ``$PATH`` to run the
guest. If there isn't one, or if it is older than 2.10, the test won't work. In
this case, provide the QEMU binary in an environment variable:
``QEMU=/path/to/qemu-2.10+``.

Likewise the path to qemu-img can be set in the ``QEMU_IMG`` environment
variable.

Make jobs
~~~~~~~~~

The ``-j$X`` option in the make command line is not propagated into the VM;
specify ``J=$X`` to control the make jobs in the guest.

Debugging
~~~~~~~~~

Add ``DEBUG=1`` and/or ``V=1`` to the make command to allow interactive
debugging and verbose output. If this is not enough, see the next section.
``V=1`` will be propagated down into the make jobs in the guest.

Manual invocation
~~~~~~~~~~~~~~~~~

Each guest script is an executable script with the same command line options.
For example, to work with the netbsd guest, use ``$QEMU_SRC/tests/vm/netbsd``:

.. code::

  $ cd $QEMU_SRC/tests/vm

  # To bootstrap the image
  $ ./netbsd --build-image --image /var/tmp/netbsd.img
  <...>

  # To run an arbitrary command in guest (the output will not be echoed unless
  # --debug is added)
  $ ./netbsd --debug --image /var/tmp/netbsd.img uname -a

  # To build QEMU in guest
  $ ./netbsd --debug --image /var/tmp/netbsd.img --build-qemu $QEMU_SRC

  # To get to an interactive shell
  $ ./netbsd --interactive --image /var/tmp/netbsd.img sh

Adding new guests
~~~~~~~~~~~~~~~~~

Please look at existing guest scripts for how to add new guests.

Most importantly, create a subclass of BaseVM, implement the ``build_image()``
method and define ``BUILD_SCRIPT``, then finally call ``basevm.main()`` from
the script's ``main()``. A skeleton is sketched after the list below.

* Usually in ``build_image()``, a template image is downloaded from a
  predefined URL. ``BaseVM._download_with_cache()`` takes care of the cache and
  the checksum, so consider using it.

* Once the image is downloaded, users, the SSH server and the QEMU build
  dependencies should be set up:

  - Root password set to ``BaseVM.ROOT_PASS``
  - User ``BaseVM.GUEST_USER`` is created, and password set to
    ``BaseVM.GUEST_PASS``
  - SSH service is enabled and started on boot,
    ``$QEMU_SRC/tests/keys/id_rsa.pub`` is added to ssh's ``authorized_keys``
    file of both root and the normal user
  - DHCP client service is enabled and started on boot, so that it can
    automatically configure the virtio-net-pci NIC and communicate with QEMU
    user net (10.0.2.2)
  - Necessary packages are installed to untar the source tarball and build
    QEMU

* Write a proper ``BUILD_SCRIPT`` template, which should be a shell script that
  untars a raw virtio-blk block device, which is the tarball data blob of the
  QEMU source tree, then configure/build it. Running "make check" is also
  recommended.

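The following Python skeleton illustrates that structure. It only uses the
hooks named above; the guest name, image URL, checksum and setup steps are
placeholders, and real scripts (e.g. ``tests/vm/netbsd``) do considerably more
work in ``build_image()``:

.. code::

  #!/usr/bin/env python3
  # Illustrative skeleton of a new guest script, not a working bootstrap.
  import sys
  import basevm

  class FooVM(basevm.BaseVM):
      name = "foo"   # guest name (assumed attribute, see existing scripts)

      # shell template run in the guest to build QEMU; the framework is
      # assumed to substitute the configure options, job count, etc.
      BUILD_SCRIPT = """
          set -e;
          cd $(mktemp -d);
          tar -xf /dev/vdb;     # source tarball passed in as a raw
                                # virtio-blk device (placeholder path)
          ./configure {configure_opts};
          make --output-sync -j{jobs} {target} {verbose};
      """

      def build_image(self, img):
          # fetch a template image; _download_with_cache() handles the
          # cache and checksum (URL and checksum are placeholders)
          cimg = self._download_with_cache(
              "https://example.com/foo-template.img.xz",
              sha256sum="0" * 64)
          # ... decompress cimg into img, boot it once, create the guest
          # user, enable sshd and DHCP, and install QEMU build deps ...

  if __name__ == "__main__":
      sys.exit(basevm.main(FooVM))
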
Image fuzzer testing
--------------------

An image fuzzer was added to exercise format drivers. Currently only qcow2 is
supported. To start the fuzzer, run

.. code::

  tests/image-fuzzer/runner.py -c '[["qemu-img", "info", "$test_img"]]' /tmp/test qcow2

Alternatively, a command different from "qemu-img info" can be tested, by
changing the ``-c`` option.

Acceptance tests using the Avocado Framework
--------------------------------------------

The ``tests/acceptance`` directory hosts functional tests, also known
as acceptance level tests. They're usually higher level tests, and
may interact with external resources and with various guest operating
systems.

These tests are written using the Avocado Testing Framework (which must
be installed separately) in conjunction with the ``avocado_qemu.Test``
class, implemented at ``tests/acceptance/avocado_qemu``.

Tests based on ``avocado_qemu.Test`` can easily:

* Customize the command line arguments given to the convenience
  ``self.vm`` attribute (a QEMUMachine instance)

* Interact with the QEMU monitor, send QMP commands and check
  their results

* Interact with the guest OS, using the convenience console device
  (which may be useful to assert the effectiveness and correctness of
  command line arguments or QMP commands)

* Interact with external data files that accompany the test itself
  (see ``self.get_data()``)

* Download (and cache) remote data files, such as firmware and kernel
  images

* Have access to a library of guest OS images (by means of the
  ``avocado.utils.vmimage`` library)

* Make use of various other test related utilities available at the
  test class itself and at the utility library:

  - http://avocado-framework.readthedocs.io/en/latest/api/test/avocado.html#avocado.Test
  - http://avocado-framework.readthedocs.io/en/latest/api/utils/avocado.utils.html

Running tests
~~~~~~~~~~~~~

You can run the acceptance tests simply by executing:

.. code::

  make check-acceptance

This involves the automatic creation of a Python virtual environment
within the build tree (at ``tests/venv``) which will have all the
right dependencies, and will save test results also within the
build tree (at ``tests/results``).

Note: the build environment must be using a Python 3 stack, and have
the ``venv`` and ``pip`` packages installed. If necessary, make sure
``configure`` is called with ``--python=`` and that those modules are
available. On Debian and Ubuntu based systems, depending on the
specific version, they may be in packages named ``python3-venv`` and
``python3-pip``.

It is also possible to run tests based on tags using the
``make check-acceptance`` command and the ``AVOCADO_TAGS`` environment
variable:

.. code::

  make check-acceptance AVOCADO_TAGS=quick

Note that tags separated with commas have an AND behavior, while tags
separated by spaces have an OR behavior. For more information on Avocado
tags, see:

https://avocado-framework.readthedocs.io/en/latest/guides/user/chapters/tags.html

To run a single test file, a couple of them, or a test within a file
using the ``make check-acceptance`` command, set the ``AVOCADO_TESTS``
environment variable with the test files or test names. To run all
tests from a single file, use:

.. code::

  make check-acceptance AVOCADO_TESTS=$FILEPATH

The same is valid to run tests from multiple test files:

.. code::

  make check-acceptance AVOCADO_TESTS='$FILEPATH1 $FILEPATH2'

To run a single test within a file, use:

.. code::

  make check-acceptance AVOCADO_TESTS=$FILEPATH:$TESTCLASS.$TESTNAME

The same is valid to run single tests from multiple test files:

.. code::

  make check-acceptance AVOCADO_TESTS='$FILEPATH1:$TESTCLASS1.$TESTNAME1 $FILEPATH2:$TESTCLASS2.$TESTNAME2'

The scripts installed inside the virtual environment may be used
without an "activation". For instance, the Avocado test runner
may be invoked by running:

.. code::

  tests/venv/bin/avocado run $OPTION1 $OPTION2 tests/acceptance/

Note that if ``make check-acceptance`` was not executed before, it is
possible to create the Python virtual environment with the dependencies
needed by running:

.. code::

  make check-venv

It is also possible to run tests from a single file or a single test within
a test file. To run tests from a single file within the build tree, use:

.. code::

  tests/venv/bin/avocado run tests/acceptance/$TESTFILE

To run a single test within a test file, use:

.. code::

  tests/venv/bin/avocado run tests/acceptance/$TESTFILE:$TESTCLASS.$TESTNAME

Valid test names are visible in the output from any previous execution
of Avocado or ``make check-acceptance``, and can also be queried using:

.. code::

  tests/venv/bin/avocado list tests/acceptance

Manual Installation
~~~~~~~~~~~~~~~~~~~

To manually install Avocado and its dependencies, run:

.. code::

  pip install --user avocado-framework

Alternatively, follow the instructions on this link:

https://avocado-framework.readthedocs.io/en/latest/guides/user/chapters/installing.html

Overview
~~~~~~~~

The ``tests/acceptance/avocado_qemu`` directory provides the
``avocado_qemu`` Python module, containing the ``avocado_qemu.Test``
class. Here's a simple usage example:

.. code::

  from avocado_qemu import Test


  class Version(Test):
      """
      :avocado: tags=quick
      """
      def test_qmp_human_info_version(self):
          self.vm.launch()
          res = self.vm.command('human-monitor-command',
                                command_line='info version')
          self.assertRegexpMatches(res, r'^(\d+\.\d+\.\d)')

To execute your test, run:

.. code::

  avocado run version.py

Tests may be classified according to a convention by using docstring
directives such as ``:avocado: tags=TAG1,TAG2``. To run all tests
in the current directory, tagged as "quick", run:

.. code::

  avocado run -t quick .

The ``avocado_qemu.Test`` base test class
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``avocado_qemu.Test`` class has a number of characteristics that
are worth being mentioned right away.

First of all, it attempts to give each test a ready to use QEMUMachine
instance, available at ``self.vm``. Because many tests will tweak the
QEMU command line, launching the QEMUMachine (by using ``self.vm.launch()``)
is left to the test writer.

The base test class also has support for tests with more than one
QEMUMachine. The way to get machines is through the ``self.get_vm()``
method, which will return a QEMUMachine instance. The ``self.get_vm()``
method accepts arguments that will be passed to the QEMUMachine creation,
and also an optional ``name`` attribute so you can identify a specific
machine and get it more than once through the test's methods. A simple
and hypothetical example follows:

.. code::

  from avocado_qemu import Test


  class MultipleMachines(Test):
      def test_multiple_machines(self):
          first_machine = self.get_vm()
          second_machine = self.get_vm()
          self.get_vm(name='third_machine').launch()

          first_machine.launch()
          second_machine.launch()

          first_res = first_machine.command(
              'human-monitor-command',
              command_line='info version')

          second_res = second_machine.command(
              'human-monitor-command',
              command_line='info version')

          third_res = self.get_vm(name='third_machine').command(
              'human-monitor-command',
              command_line='info version')

          self.assertEquals(first_res, second_res, third_res)

At test "tear down", ``avocado_qemu.Test`` handles the shutdown of all the
QEMUMachines.

The ``avocado_qemu.LinuxTest`` base test class
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``avocado_qemu.LinuxTest`` is a further specialization of the
``avocado_qemu.Test`` class, so it contains all the characteristics of
the latter plus some extra features.

First of all, this base class is intended for tests that need to
interact with a fully booted and operational Linux guest. At this
time, it uses a Fedora 31 guest image. The most basic example looks
like this:

.. code::

  from avocado_qemu import LinuxTest


  class SomeTest(LinuxTest):

      def test(self):
          self.launch_and_wait()
          self.ssh_command('some_command_to_be_run_in_the_guest')

Please refer to tests that use ``avocado_qemu.LinuxTest`` under
``tests/acceptance`` for more examples.

QEMUMachine
~~~~~~~~~~~

The QEMUMachine API is already widely used in the Python iotests,
device-crash-test and other Python scripts. It's a wrapper around the
execution of a QEMU binary, giving its users (a short example follows
this list):

* the ability to set command line arguments to be given to the QEMU
  binary

* a ready to use QMP connection and interface, which can be used to
  send commands and inspect its results, as well as asynchronous
  events

* convenience methods to set commonly used command line arguments in
  a more succinct and intuitive way

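A minimal, self-contained sketch of direct QEMUMachine usage is shown below.
The import path and the binary location are assumptions (they follow the
layout of the in-tree ``python/qemu`` package and a typical build directory);
adapt them to your checkout:

.. code::

  #!/usr/bin/env python3
  # Illustrative sketch of driving QEMU through QEMUMachine.
  from qemu.machine import QEMUMachine

  vm = QEMUMachine('build/qemu-system-x86_64')   # assumed binary path
  vm.add_args('-M', 'none')        # set extra command line arguments
  try:
      vm.launch()                  # start QEMU and the QMP connection
      status = vm.command('query-status')       # send a QMP command
      print(status['status'])
  finally:
      vm.shutdown()                # terminate QEMU cleanly
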
QEMU binary selection
^^^^^^^^^^^^^^^^^^^^^

The QEMU binary used for the ``self.vm`` QEMUMachine instance will
primarily depend on the value of the ``qemu_bin`` parameter. If it's
not explicitly set, its default value will be the result of a dynamic
probe in the same source tree. A suitable binary will be one that
targets the architecture matching the host machine.

Based on this description, test writers will usually rely on one of
the following approaches:

1) Set ``qemu_bin``, and use the given binary

2) Do not set ``qemu_bin``, and use a QEMU binary named like
   "qemu-system-${arch}", either in the current
   working directory, or in the current source tree.

The resulting ``qemu_bin`` value will be preserved in the
``avocado_qemu.Test`` as an attribute with the same name.

Attribute reference
~~~~~~~~~~~~~~~~~~~

Test
^^^^

Besides the attributes and methods that are part of the base
``avocado.Test`` class, the following attributes are available on any
``avocado_qemu.Test`` instance.

vm
''

A QEMUMachine instance, initially configured according to the given
``qemu_bin`` parameter.

arch
''''

The architecture can be used on different levels of the stack, e.g. by
the framework or by the test itself. At the framework level, it will
currently influence the selection of a QEMU binary (when one is not
explicitly given).

Tests are also free to use this attribute value, for their own needs.
A test may, for instance, use the same value when selecting the
architecture of a kernel or disk image to boot a VM with.

The ``arch`` attribute will be set to the test parameter of the same
name. If one is not given explicitly, it will either be set to
``None``, or, if the test is tagged with one (and only one)
``:avocado: tags=arch:VALUE`` tag, it will be set to ``VALUE``.

cpu
'''

The cpu model that will be set to all QEMUMachine instances created
by the test.

The ``cpu`` attribute will be set to the test parameter of the same
name. If one is not given explicitly, it will either be set to
``None``, or, if the test is tagged with one (and only one)
``:avocado: tags=cpu:VALUE`` tag, it will be set to ``VALUE``.

machine
'''''''

The machine type that will be set to all QEMUMachine instances created
by the test.

The ``machine`` attribute will be set to the test parameter of the same
name. If one is not given explicitly, it will either be set to
``None``, or, if the test is tagged with one (and only one)
``:avocado: tags=machine:VALUE`` tag, it will be set to ``VALUE``.

qemu_bin
''''''''

The preserved value of the ``qemu_bin`` parameter or the result of the
dynamic probe for a QEMU binary in the current working directory or
source tree.

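To make the tag/attribute relationship concrete, the hypothetical test below
is tagged so that the framework picks an aarch64 binary and sets the machine
type and CPU model of ``self.vm`` accordingly:

.. code::

  from avocado_qemu import Test


  class HypotheticalAarch64Test(Test):
      """
      :avocado: tags=arch:aarch64
      :avocado: tags=machine:virt
      :avocado: tags=cpu:cortex-a53
      """
      def test_boot(self):
          # here self.arch == 'aarch64', self.machine == 'virt' and
          # self.cpu == 'cortex-a53', all derived from the tags above
          # (unless test parameters of the same name override them)
          self.vm.add_args('-m', '512')
          self.vm.launch()
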
LinuxTest
^^^^^^^^^

Besides the attributes present on the ``avocado_qemu.Test`` base
class, the ``avocado_qemu.LinuxTest`` adds the following attributes:

distro
''''''

The name of the Linux distribution used as the guest image for the
test. The name should match the **Provider** column on the list
of images supported by the avocado.utils.vmimage library:

https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images

distro_version
''''''''''''''

The version of the Linux distribution used as the guest image for the
test. The name should match the **Version** column on the list
of images supported by the avocado.utils.vmimage library:

https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images

distro_checksum
'''''''''''''''

The sha256 hash of the guest image file used for the test.

If this value is not set in the code or by a test parameter (with the
same name), no validation on the integrity of the image will be
performed.

Parameter reference
~~~~~~~~~~~~~~~~~~~

To understand how Avocado parameters are accessed by tests, and how
they can be passed to tests, please refer to::

  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#accessing-test-parameters

Parameter values can be easily seen in the log files, and will look
like the following:

.. code::

  PARAMS (key=qemu_bin, path=*, default=./qemu-system-x86_64) => './qemu-system-x86_64'

Test
^^^^

arch
''''

The architecture that will influence the selection of a QEMU binary
(when one is not explicitly given).

Tests are also free to use this parameter value, for their own needs.
A test may, for instance, use the same value when selecting the
architecture of a kernel or disk image to boot a VM with.

This parameter has a direct relation with the ``arch`` attribute. If
not given, it will default to None.

cpu
'''

The cpu model that will be set to all QEMUMachine instances created
by the test.

machine
'''''''

The machine type that will be set to all QEMUMachine instances created
by the test.

qemu_bin
''''''''

The exact QEMU binary to be used on QEMUMachine.

LinuxTest
^^^^^^^^^

Besides the parameters present on the ``avocado_qemu.Test`` base
class, the ``avocado_qemu.LinuxTest`` adds the following parameters:

distro
''''''

The name of the Linux distribution used as the guest image for the
test. The name should match the **Provider** column on the list
of images supported by the avocado.utils.vmimage library:

https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images

distro_version
''''''''''''''

The version of the Linux distribution used as the guest image for the
test. The name should match the **Version** column on the list
of images supported by the avocado.utils.vmimage library:

https://avocado-framework.readthedocs.io/en/latest/guides/writer/libs/vmimage.html#supported-images

distro_checksum
'''''''''''''''

The sha256 hash of the guest image file used for the test.

If this value is not set in the code or by this parameter, no
validation on the integrity of the image will be performed.

Skipping tests
~~~~~~~~~~~~~~

The Avocado framework provides Python decorators which make it easy to skip
tests under certain conditions, for example when a required binary is missing
on the test system or when the running environment is a CI system. For further
information about those decorators, please refer to::

  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#skipping-tests

While the conditions for skipping tests are often specific to each test, there
are recurring scenarios identified by the QEMU developers, and the use of
environment variables has become a kind of standard way to enable/disable
tests.

Here is a list of the most used variables:

AVOCADO_ALLOW_LARGE_STORAGE
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Tests which are going to fetch or produce assets considered *large* are not
going to run unless ``AVOCADO_ALLOW_LARGE_STORAGE=1`` is exported in
the environment.

The definition of *large* is a bit arbitrary here, but it usually means an
asset which occupies at least 1GB of size on disk when uncompressed.

AVOCADO_ALLOW_UNTRUSTED_CODE
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There are tests which will boot a kernel image or firmware that can be
considered not safe to run on the developer's workstation, thus they are
skipped by default. The definition of *not safe* is also arbitrary, but
usually it means a blob whose source or build process isn't publicly
available.

You should export ``AVOCADO_ALLOW_UNTRUSTED_CODE=1`` in the environment in
order to allow tests which make use of those kinds of assets.

AVOCADO_TIMEOUT_EXPECTED
^^^^^^^^^^^^^^^^^^^^^^^^

The Avocado framework has a timeout mechanism which interrupts tests to avoid
the test suite getting stuck. The timeout value can be set via a test parameter
or a property defined in the test class; for further details see::

  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#setting-a-test-timeout

Even though the timeout can be set by the test developer, there are some tests
that may not have a well-defined limit of time to finish under certain
conditions. For example, tests that take longer to execute when QEMU is
compiled with debug flags. Therefore, the ``AVOCADO_TIMEOUT_EXPECTED`` variable
has been used to determine whether those tests should run or not.

GITLAB_CI
^^^^^^^^^

A number of tests are flagged to not run on the GitLab CI, usually because
they proved to be flaky or there are constraints on the CI environment which
would make them fail. If you encounter a similar situation then use that
variable as shown in the code snippet below to skip the test:

.. code::

  @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
  def test(self):
      do_something()

Uninstalling Avocado
~~~~~~~~~~~~~~~~~~~~

If you've followed the manual installation instructions above, you can
easily uninstall Avocado. Start by listing the packages you have
installed::

  pip list --user

And remove any package you want with::

  pip uninstall <package_name>

If you've used ``make check-acceptance``, the Python virtual environment where
Avocado is installed will be cleaned up as part of ``make check-clean``.

.. _checktcg-ref:

Testing with "make check-tcg"
-----------------------------

The check-tcg tests are intended for simple smoke tests of both
linux-user and softmmu TCG functionality. However, to build test
programs for guest targets you need to have cross compilers available.
If your distribution supports cross compilers you can do something as
simple as::

  apt install gcc-aarch64-linux-gnu

The configure script will automatically pick up their presence.
Sometimes compilers have slightly odd names, so they can also be
pointed out explicitly by passing the appropriate configure option
for the architecture in question, for example::

  $(configure) --cross-cc-aarch64=aarch64-cc

There is also a ``--cross-cc-flags-ARCH`` flag in case additional
compiler flags are needed to build for a given target.

If you have the ability to run containers as the current user, the build
system will automatically use them where no system compiler is available. For
architectures where we also support building QEMU we will generally
use the same container to build tests. However there are a number of
additional containers defined that have a minimal cross-build
environment that is only suitable for building test cases. Sometimes
we may use a bleeding edge distribution for compiler features needed
for test cases that aren't yet in the LTS distros we support for QEMU
itself.

See :ref:`container-ref` for more details.

Running a subset of tests
~~~~~~~~~~~~~~~~~~~~~~~~~

You can build the tests for one architecture::

  make build-tcg-tests-$TARGET

And run them with::

  make run-tcg-tests-$TARGET

Adding ``V=1`` to the invocation will show the details of how QEMU is
invoked for each test, which is useful for debugging tests.

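For example, assuming the ``aarch64-softmmu`` target has been configured, the
corresponding invocations would be::

  make build-tcg-tests-aarch64-softmmu
  make run-tcg-tests-aarch64-softmmu V=1
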
TCG test dependencies
~~~~~~~~~~~~~~~~~~~~~

The TCG tests are deliberately very light on dependencies and are
either totally bare with minimal gcc lib support (for softmmu tests)
or just glibc (for linux-user tests). This is because getting a cross
compiler to work with additional libraries can be challenging.

Other TCG Tests
---------------

There are a number of out-of-tree test suites that are used for more
extensive testing of processor features.

KVM Unit Tests
~~~~~~~~~~~~~~

The KVM unit tests are designed to run as a Guest OS under KVM but
there is no reason why they can't exercise the TCG as well. The suite
provides a minimal OS kernel with hooks for enabling the MMU as well
as reporting test results via a special device::

  https://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git

Linux Test Project
~~~~~~~~~~~~~~~~~~

The LTP is focused on exercising the syscall interface of a Linux
kernel. It checks that syscalls behave as documented and strives to
exercise as many corner cases as possible. It is a useful test suite
to run to exercise QEMU's linux-user code::

  https://linux-test-project.github.io/

GCC gcov support
----------------

``gcov`` is a GCC tool to analyze the test coverage by
instrumenting the tested code. To use it, configure QEMU with the
``--enable-gcov`` option and build. Then run the tests as usual.

If you want to gather coverage information on a single test, the ``make
clean-gcda`` target can be used to delete any existing coverage
information before running that test.

You can generate an HTML coverage report by executing ``make
coverage-html``, which will create
``meson-logs/coveragereport/index.html``.

Further analysis can be conducted by running the ``gcov`` command
directly on the various ``.gcda`` output files. Please read the ``gcov``
documentation for more information.

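Putting this together, a typical coverage run looks roughly like the following
(assuming an out-of-tree build directory)::

  # in the build directory
  ../configure --enable-gcov
  make
  make check            # or any other test invocation
  make coverage-html    # report in meson-logs/coveragereport/index.html

  # to measure a single test in isolation, clear the counters first
  make clean-gcda
  make check-unit
  make coverage-html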