===============
Testing in QEMU
===============

This document describes the testing infrastructure in QEMU.

Testing with "make check"
=========================

The "make check" testing family includes most of the C based tests in QEMU. For
quick help, run ``make check-help`` from the source tree.

The usual way to run these tests is:

.. code::

  make check

which includes QAPI schema tests, unit tests, QTests and some iotests.
Different sub-types of "make check" tests will be explained below.

Before running tests, it is best to build QEMU programs first. Some tests
expect the executables to exist and will fail with obscure messages if they
cannot find them.

Unit tests
----------

Unit tests, which can be invoked with ``make check-unit``, are simple C tests
that typically link to individual QEMU object files and exercise them by
calling exported functions.

If you are writing new code in QEMU, consider adding a unit test, especially
for utility modules that are relatively stateless or have few dependencies. To
add a new unit test:

1. Create a new source file. For example, ``tests/foo-test.c``.

2. Write the test. Normally you would include the header file which exports
   the module API, then verify the interface behaves as expected from your
   test. The test code should be organized with the glib testing framework.
   Copying and modifying an existing test is usually a good idea.

3. Add the test to ``tests/Makefile.include``. First, name the unit test
   program and add it to ``$(check-unit-y)``; then add a rule to build the
   executable. For example:

.. code::

  check-unit-y += tests/foo-test$(EXESUF)
  tests/foo-test$(EXESUF): tests/foo-test.o $(test-util-obj-y)
  ...

Since unit tests don't require environment variables, the simplest way to debug
a unit test failure is often to invoke it directly, or even to run it under
``gdb``.
However, there can still be differences in behavior between ``make``
invocations and your manual run, due to the ``MALLOC_PERTURB_`` environment
variable (which affects memory reclamation and catches invalid pointers
better) and gtester options. If necessary, you can run

.. code::

  make check-unit V=1

and copy the actual command line which executes the unit test, then run
it from the command line.

QTest
-----

QTest is a device emulation testing framework. It can be very useful for
testing device models; it can also control certain aspects of QEMU (such as
virtual clock stepping) via a special-purpose "qtest" protocol. Refer to the
documentation in ``qtest.c`` for more details of the protocol.

QTest cases can be executed with

.. code::

  make check-qtest

The QTest library is implemented by ``tests/qtest/libqtest.c`` and the API is
defined in ``tests/qtest/libqtest.h``.

Consider adding a new QTest case when you are introducing new virtual
hardware, or extending an existing test if you are adding functionality to an
existing virtual device.

On top of libqtest, a higher level library, ``libqos``, was created to
encapsulate common tasks of device drivers, such as memory management and
communicating with system buses or devices. Many virtual device tests use
libqos instead of calling into libqtest directly.

Steps to add a new QTest case are:

1. Create a new source file for the test. (More than one file can be added as
   necessary.) For example, ``tests/qtest/foo-test.c``.

2. Write the test code with the glib and libqtest/libqos APIs. See also
   existing tests and the library headers for reference.

3. Register the new test in ``tests/qtest/Makefile.include``. Add the test
   executable name to an appropriate ``check-qtest-*-y`` variable. For
   example:

   ``check-qtest-generic-y = tests/qtest/foo-test$(EXESUF)``

4. Add object dependencies of the executable in the Makefile, including the
   test source file(s) and other interesting objects. For example:

   ``tests/qtest/foo-test$(EXESUF): tests/qtest/foo-test.o $(libqos-obj-y)``

Debugging a QTest failure is slightly harder than a unit test, because the
tests look up QEMU program names in environment variables such as
``QTEST_QEMU_BINARY`` and ``QTEST_QEMU_IMG``, and also because it is not easy
to attach gdb to the QEMU process spawned from the test. But manually invoking
the test and running it under gdb is still simple to do: find out the actual
command from the output of

.. code::

  make check-qtest V=1

which you can run manually.

QAPI schema tests
-----------------

The QAPI schema tests validate the QAPI parser used by QMP, by feeding
predefined input to the parser and comparing the result with the reference
output.

The input/output data is managed under the ``tests/qapi-schema`` directory.
Each test case includes four files that have a common base name:

 * ``${casename}.json`` - the file contains the JSON input for feeding the
   parser
 * ``${casename}.out`` - the file contains the expected stdout from the parser
 * ``${casename}.err`` - the file contains the expected stderr from the parser
 * ``${casename}.exit`` - the expected error code

Consider adding a new QAPI schema test when you are making a change to the
QAPI parser (either fixing a bug or extending/modifying the syntax). To do
this:

1. Add four files for the new case as explained above. For example:

   ``$EDITOR tests/qapi-schema/foo.{json,out,err,exit}``.

2. Add the new test in ``tests/Makefile.include``. For example:

   ``qapi-schema += foo.json``

check-block
-----------

``make check-block`` runs a subset of the block layer iotests (the tests that
are in the "auto" group in ``tests/qemu-iotests/group``).
See the "QEMU iotests" section below for more information.

GCC gcov support
----------------

``gcov`` is a GCC tool to analyze test coverage by instrumenting the tested
code. To use it, configure QEMU with the ``--enable-gcov`` option and build.
Then run ``make check`` as usual.

If you want to gather coverage information on a single test, the ``make
clean-coverage`` target can be used to delete any existing coverage
information before running that test.

You can generate an HTML coverage report by executing ``make
coverage-report``, which will create
``./reports/coverage/coverage-report.html``. If you want to create it
elsewhere, simply execute ``make /foo/bar/baz/coverage-report.html``.

Further analysis can be conducted by running the ``gcov`` command directly on
the various ``.gcda`` output files. Please read the ``gcov`` documentation for
more information.

QEMU iotests
============

QEMU iotests, under the directory ``tests/qemu-iotests``, is the testing
framework widely used to test block layer related features. It is higher
level than the "make check" tests, and 99% of the code is written in bash or
Python scripts. The success criterion is golden output comparison, and the
test files are named with numbers.

To run iotests, make sure QEMU is built successfully, then switch to the
``tests/qemu-iotests`` directory under the build directory, and run
``./check`` with desired arguments from there.

By default, the "raw" format and the "file" protocol are used; all tests will
be executed, except the unsupported ones. You can override the format and
protocol with arguments:

.. code::

  # test with qcow2 format
  ./check -qcow2
  # or test a different protocol
  ./check -nbd

It's also possible to list test numbers explicitly:

.. code::

  # run selected cases with qcow2 format
  ./check -qcow2 001 030 153

Cache mode can be selected with the "-c" option, which may help reveal bugs
that are specific to certain cache modes.

More options are supported by the ``./check`` script; run ``./check -h`` for
help.

Writing a new test case
-----------------------

Consider writing a test case when you are making any changes to the block
layer. An iotest case is usually the choice for that. There are already many
test cases, so it is possible that extending one of them may achieve the goal
and save the boilerplate of creating a new one. (Unfortunately, there isn't a
100% reliable way to find a related one out of hundreds of tests. One
approach is using ``git grep``.)

Usually an iotest case consists of two files. One is an executable that
produces output to stdout and stderr, the other is the expected reference
output. They are given the same number in their file names, e.g. test script
``055`` and reference output ``055.out``.

In rare cases, when outputs differ between cache mode ``none`` and others, a
``.out.nocache`` file is added. In other cases, when outputs differ between
image formats, more than one ``.out`` file is created, ending with the
respective format names, e.g. ``178.out.qcow2`` and ``178.out.raw``.

There isn't a hard rule about how to write a test script, but a new test is
usually a (copy and) modification of an existing case. There are a few
commonly used ways to create a test:

* A Bash script. It will make use of several environment variables related
  to the testing procedure, and could source a group of ``common.*``
  libraries for some common helper routines.

* A Python unittest script. Import ``iotests`` and create a subclass of
  ``iotests.QMPTestCase``, then call the ``iotests.main`` method. The
  downside of this approach is that the output is too scarce, and the script
  is considered harder to debug.

* A simple Python script without using the unittest module. This could also
  import ``iotests`` for launching QEMU and utilities etc., but it doesn't
  inherit from ``iotests.QMPTestCase`` and therefore doesn't use the Python
  unittest execution. This is a combination of the first two approaches.

Pick the language per your preference, since both Bash and Python have
comparable library support for invoking and interacting with QEMU programs.
If you opt for Python, it is strongly recommended to write Python 3
compatible code.

Both the Python and Bash frameworks in iotests provide helpers to manage test
images. They can be used to create and clean up images under the test
directory. If no I/O or any protocol specific feature is needed, it is often
more convenient to use the pseudo block driver, ``null-co://``, as the test
image, which doesn't require image creation or cleaning up. Avoid system-wide
devices or files whenever possible, such as ``/dev/null`` or ``/dev/zero``;
otherwise, image locking implications have to be considered. For example,
another application on the host may have locked the file, possibly leading to
a test failure. If using such devices is explicitly desired, consider adding
the ``locking=off`` option to disable image locking.

.. _docker-ref:

Docker based tests
==================

Introduction
------------

The Docker testing framework in QEMU utilizes public Docker images to build
and test QEMU in predefined and widely accessible Linux environments. This
makes it possible to expand the test coverage across distros, toolchain
flavors and library versions.
Prerequisites
-------------

Install "docker" with the system package manager and start the Docker service
on your development machine, then make sure you have the privilege to run
Docker commands. Typically this means setting up passwordless ``sudo docker``
commands, or logging in as root. For example:

.. code::

  $ sudo yum install docker
  $ # or `apt-get install docker` for Ubuntu, etc.
  $ sudo systemctl start docker
  $ sudo docker ps

The last command should print an empty table, to verify the system is ready.

An alternative method to set up permissions is by adding the current user to
the "docker" group and making the docker daemon socket file (by default
``/var/run/docker.sock``) accessible to the group:

.. code::

  $ sudo groupadd docker
  $ sudo usermod $USER -a -G docker
  $ sudo chown :docker /var/run/docker.sock

Note that any one of the above configurations makes it possible for the user
to exploit the whole host with Docker bind mounting or other privileged
operations. So only do it on development machines.

Quickstart
----------

From the source tree, type ``make docker`` to see the help. Testing can be
started without configuring or building QEMU (``configure`` and ``make`` are
done in the container, with parameters defined by the make target):

.. code::

  make docker-test-build@min-glib

This will create a container instance using the ``min-glib`` image (the image
is downloaded and initialized automatically), in which the ``test-build`` job
is executed.

Images
------

Along with many other images, the ``min-glib`` image is defined in a
Dockerfile in ``tests/docker/dockerfiles/``, called ``min-glib.docker``. The
``make docker`` command will list all the available images.

To add a new image, simply create a new ``.docker`` file under the
``tests/docker/dockerfiles/`` directory.
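For example, a hypothetical ``tests/docker/dockerfiles/foo.docker`` might look
like the following. This is an illustrative sketch only, not one of the
existing QEMU dockerfiles; the base image and package list are assumptions
that a real image would adapt to its distro and to the test jobs it should
support:

```dockerfile
FROM fedora:latest
# Install a toolchain plus the libraries the intended test jobs need
# (an illustrative list; real dockerfiles pick packages per distro).
RUN dnf install -y gcc make git glib2-devel pixman-devel zlib-devel
```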
A ``.pre`` script can be added beside the ``.docker`` file, which will be
executed before building the image under the build context directory. This is
mainly used to do necessary host side setup. One such setup is
``binfmt_misc``, for example, to make qemu-user powered cross build
containers work.

Tests
-----

Different tests are added to cover various configurations to build and test
QEMU. Docker tests are the executables under ``tests/docker`` named
``test-*``. They are typically shell scripts and are built on top of a shell
library, ``tests/docker/common.rc``, which provides helpers to find the QEMU
source and build it.

The full list of tests is printed in the ``make docker`` help.

Tools
-----

There are executables that are created to run in a specific Docker
environment. This makes it easy to write scripts that have heavy or special
dependencies, but are still very easy to use.

Currently the only tool is ``travis``, which mimics the Travis-CI tests in a
container. It runs in the ``travis`` image:

.. code::

  make docker-travis@travis

Debugging a Docker test failure
-------------------------------

When a CI task, a maintainer, or your own testing reports a Docker test
failure, follow the steps below to debug it:

1. Locally reproduce the failure with the reported command line. E.g. run
   ``make docker-test-mingw@fedora J=8``.
2. Add "V=1" to the command line and try again, to see the verbose output.
3. Further add "DEBUG=1" to the command line. This will pause at a shell
   prompt in the container right before testing starts. You can either
   manually build QEMU and run tests from there, or press Ctrl-D to let the
   Docker testing continue.
4. If you press Ctrl-D, the same building and testing procedure will begin,
   and will hopefully run into the error again. After that, you will be
   dropped back to the prompt to debug.
Options
-------

Various options can be used to affect how Docker tests are done. The full
list is in the ``make docker`` help text. The frequently used ones are:

* ``V=1``: the same as in top level ``make``. It will be propagated to the
  container and enable verbose output.
* ``J=$N``: the number of parallel tasks in make commands in the container,
  similar to the ``-j $N`` option in top level ``make``. (The ``-j`` option
  in top level ``make`` will not be propagated into the container.)
* ``DEBUG=1``: enables debug. See the previous "Debugging a Docker test
  failure" section.

VM testing
==========

This test suite contains scripts that bootstrap various guest images that
have the necessary packages to build QEMU. The basic usage is documented in
the ``Makefile`` help, which is displayed with ``make vm-help``.

Quickstart
----------

Run ``make vm-help`` to list available make targets. Invoke a specific make
command to run a build test in an image. For example, ``make vm-build-freebsd``
will build the source tree in the FreeBSD image. The command can be executed
from either the source tree or the build dir; if the former, ``./configure``
is not needed. The command will then generate the test image in
``./tests/vm/`` under the working directory.

Note: images created by the scripts accept a well-known RSA key pair for SSH
access, so they SHOULD NOT be exposed to external interfaces if you are
concerned about attackers taking control of the guest and potentially
exploiting a QEMU security bug to compromise the host.

QEMU binaries
-------------

By default, qemu-system-x86_64 is searched for in ``$PATH`` to run the guest.
If there isn't one, or if it is older than 2.10, the test won't work. In this
case, provide the QEMU binary in an environment variable:
``QEMU=/path/to/qemu-2.10+``.

Likewise, the path to qemu-img can be set in the ``QEMU_IMG`` environment
variable.
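The binary selection described above (honor ``QEMU`` if set, otherwise search
``$PATH``, and reject binaries older than 2.10) could be sketched roughly as
follows. This is an illustration only, not the actual code in
``tests/vm/basevm.py``; the function names are made up:

```python
import os
import re
import shutil
import subprocess


def parse_version(version_output):
    """Extract (major, minor) from `qemu-system-* --version` output.

    Version banners look like "QEMU emulator version 2.10.0 ...".
    Returns None if no version can be found.
    """
    m = re.search(r"version (\d+)\.(\d+)", version_output)
    return (int(m.group(1)), int(m.group(2))) if m else None


def find_qemu(min_version=(2, 10)):
    """Return a suitable qemu-system-x86_64 binary path, or None.

    An explicit QEMU=... environment variable takes precedence over
    a $PATH lookup.
    """
    binary = os.environ.get("QEMU") or shutil.which("qemu-system-x86_64")
    if binary is None:
        return None
    out = subprocess.run([binary, "--version"],
                         capture_output=True, text=True).stdout
    version = parse_version(out)
    return binary if version and version >= min_version else None
```

If ``find_qemu()`` returns ``None``, the tests cannot run and ``QEMU`` must be
pointed at a new enough binary.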
Make jobs
---------

The ``-j$X`` option in the make command line is not propagated into the VM;
specify ``J=$X`` to control the make jobs in the guest.

Debugging
---------

Add ``DEBUG=1`` and/or ``V=1`` to the make command to allow interactive
debugging and verbose output. If this is not enough, see the next section.
``V=1`` will be propagated down into the make jobs in the guest.

Manual invocation
-----------------

Each guest script is an executable script with the same command line options.
For example, to work with the netbsd guest, use ``$QEMU_SRC/tests/vm/netbsd``:

.. code::

  $ cd $QEMU_SRC/tests/vm

  # To bootstrap the image
  $ ./netbsd --build-image --image /var/tmp/netbsd.img
  <...>

  # To run an arbitrary command in the guest (the output will not be echoed
  # unless --debug is added)
  $ ./netbsd --debug --image /var/tmp/netbsd.img uname -a

  # To build QEMU in the guest
  $ ./netbsd --debug --image /var/tmp/netbsd.img --build-qemu $QEMU_SRC

  # To get to an interactive shell
  $ ./netbsd --interactive --image /var/tmp/netbsd.img sh

Adding new guests
-----------------

Please look at existing guest scripts for how to add new guests.

Most importantly, create a subclass of BaseVM, implement the
``build_image()`` method and define ``BUILD_SCRIPT``, then finally call
``basevm.main()`` from the script's ``main()``.

* Usually in ``build_image()``, a template image is downloaded from a
  predefined URL. ``BaseVM._download_with_cache()`` takes care of the cache
  and the checksum, so consider using it.

* Once the image is downloaded, users, the SSH server and the QEMU build
  dependencies should be set up:

  - Root password set to ``BaseVM.ROOT_PASS``
  - User ``BaseVM.GUEST_USER`` is created, and password set to
    ``BaseVM.GUEST_PASS``
  - SSH service is enabled and started on boot,
    ``$QEMU_SRC/tests/keys/id_rsa.pub`` is added to ssh's ``authorized_keys``
    file of both root and the normal user
  - DHCP client service is enabled and started on boot, so that it can
    automatically configure the virtio-net-pci NIC and communicate with QEMU
    user net (10.0.2.2)
  - Necessary packages are installed to untar the source tarball and build
    QEMU

* Write a proper ``BUILD_SCRIPT`` template, which should be a shell script
  that untars a raw virtio-blk block device, which is the tarball data blob
  of the QEMU source tree, then configures and builds it. Running "make
  check" is also recommended.

Image fuzzer testing
====================

An image fuzzer was added to exercise format drivers. Currently only qcow2 is
supported. To start the fuzzer, run

.. code::

  tests/image-fuzzer/runner.py -c '[["qemu-img", "info", "$test_img"]]' /tmp/test qcow2

Alternatively, a command different from "qemu-img info" can be tested by
changing the ``-c`` option.

Acceptance tests using the Avocado Framework
============================================

The ``tests/acceptance`` directory hosts functional tests, also known as
acceptance level tests. They're usually higher level tests, and may interact
with external resources and with various guest operating systems.

These tests are written using the Avocado Testing Framework (which must be
installed separately) in conjunction with the ``avocado_qemu.Test`` class,
implemented at ``tests/acceptance/avocado_qemu``.

Tests based on ``avocado_qemu.Test`` can easily:

 * Customize the command line arguments given to the convenience
   ``self.vm`` attribute (a QEMUMachine instance)

 * Interact with the QEMU monitor, send QMP commands and check
   their results

 * Interact with the guest OS, using the convenience console device
   (which may be useful to assert the effectiveness and correctness of
   command line arguments or QMP commands)

 * Interact with external data files that accompany the test itself
   (see ``self.get_data()``)

 * Download (and cache) remote data files, such as firmware and kernel
   images

 * Have access to a library of guest OS images (by means of the
   ``avocado.utils.vmimage`` library)

 * Make use of various other test related utilities available at the
   test class itself and at the utility library:

   - http://avocado-framework.readthedocs.io/en/latest/api/test/avocado.html#avocado.Test
   - http://avocado-framework.readthedocs.io/en/latest/api/utils/avocado.utils.html

Running tests
-------------

You can run the acceptance tests simply by executing:

.. code::

  make check-acceptance

This involves the automatic creation of a Python virtual environment within
the build tree (at ``tests/venv``) which will have all the right
dependencies, and will also save test results within the build tree (at
``tests/results``).

Note: the build environment must be using a Python 3 stack, and have the
``venv`` and ``pip`` packages installed. If necessary, make sure
``configure`` is called with ``--python=`` and that those modules are
available. On Debian and Ubuntu based systems, depending on the specific
version, they may be in packages named ``python3-venv`` and ``python3-pip``.

The scripts installed inside the virtual environment may be used without an
"activation". For instance, the Avocado test runner may be invoked by
running:

.. code::

  tests/venv/bin/avocado run $OPTION1 $OPTION2 tests/acceptance/

Manual Installation
-------------------

To manually install Avocado and its dependencies, run:

.. code::

  pip install --user avocado-framework

Alternatively, follow the instructions on this link:

  http://avocado-framework.readthedocs.io/en/latest/GetStartedGuide.html#installing-avocado

Overview
--------

The ``tests/acceptance/avocado_qemu`` directory provides the
``avocado_qemu`` Python module, containing the ``avocado_qemu.Test`` class.
Here's a simple usage example:

.. code::

  from avocado_qemu import Test


  class Version(Test):
      """
      :avocado: tags=quick
      """
      def test_qmp_human_info_version(self):
          self.vm.launch()
          res = self.vm.command('human-monitor-command',
                                command_line='info version')
          self.assertRegex(res, r'^(\d+\.\d+\.\d)')

To execute your test, run:

.. code::

  avocado run version.py

Tests may be classified according to a convention by using docstring
directives such as ``:avocado: tags=TAG1,TAG2``. To run all tests in the
current directory, tagged as "quick", run:

.. code::

  avocado run -t quick .

The ``avocado_qemu.Test`` base test class
-----------------------------------------

The ``avocado_qemu.Test`` class has a number of characteristics that are
worth being mentioned right away.

First of all, it attempts to give each test a ready to use QEMUMachine
instance, available at ``self.vm``. Because many tests will tweak the QEMU
command line, launching the QEMUMachine (by using ``self.vm.launch()``) is
left to the test writer.

The base test class also has support for tests with more than one
QEMUMachine. The way to get machines is through the ``self.get_vm()``
method, which will return a QEMUMachine instance. The ``self.get_vm()``
method accepts arguments that will be passed to the QEMUMachine creation,
and also an optional ``name`` attribute so you can identify a specific
machine and get it more than once through the test's methods. A simple and
hypothetical example follows:

.. code::

  from avocado_qemu import Test


  class MultipleMachines(Test):
      """
      :avocado: enable
      """
      def test_multiple_machines(self):
          first_machine = self.get_vm()
          second_machine = self.get_vm()
          self.get_vm(name='third_machine').launch()

          first_machine.launch()
          second_machine.launch()

          first_res = first_machine.command(
              'human-monitor-command',
              command_line='info version')

          second_res = second_machine.command(
              'human-monitor-command',
              command_line='info version')

          third_res = self.get_vm(name='third_machine').command(
              'human-monitor-command',
              command_line='info version')

          self.assertEqual(first_res, second_res)
          self.assertEqual(second_res, third_res)

At test "tear down", ``avocado_qemu.Test`` handles the shutdown of all the
QEMUMachines.

QEMUMachine
~~~~~~~~~~~

The QEMUMachine API is already widely used in the Python iotests,
device-crash-test and other Python scripts. It's a wrapper around the
execution of a QEMU binary, giving its users:

 * the ability to set command line arguments to be given to the QEMU
   binary

 * a ready to use QMP connection and interface, which can be used to
   send commands and inspect its results, as well as asynchronous
   events

 * convenience methods to set commonly used command line arguments in
   a more succinct and intuitive way

QEMU binary selection
~~~~~~~~~~~~~~~~~~~~~

The QEMU binary used for the ``self.vm`` QEMUMachine instance will primarily
depend on the value of the ``qemu_bin`` parameter. If it's not explicitly
set, its default value will be the result of a dynamic probe in the same
source tree. A suitable binary will be one that targets the architecture
matching the host machine.

Based on this description, test writers will usually rely on one of the
following approaches:

1) Set ``qemu_bin``, and use the given binary

2) Do not set ``qemu_bin``, and use a QEMU binary named like
   "${arch}-softmmu/qemu-system-${arch}", either in the current
   working directory, or in the current source tree.

The resulting ``qemu_bin`` value will be preserved in the
``avocado_qemu.Test`` as an attribute with the same name.

Attribute reference
-------------------

Besides the attributes and methods that are part of the base
``avocado.Test`` class, the following attributes are available on any
``avocado_qemu.Test`` instance.

vm
~~

A QEMUMachine instance, initially configured according to the given
``qemu_bin`` parameter.

arch
~~~~

The architecture can be used on different levels of the stack, e.g. by the
framework or by the test itself. At the framework level, it will currently
influence the selection of a QEMU binary (when one is not explicitly given).

Tests are also free to use this attribute value, for their own needs. A test
may, for instance, use the same value when selecting the architecture of a
kernel or disk image to boot a VM with.

The ``arch`` attribute will be set to the test parameter of the same name.
If one is not given explicitly, it will either be set to ``None``, or, if
the test is tagged with one (and only one) ``:avocado: tags=arch:VALUE``
tag, it will be set to ``VALUE``.

machine
~~~~~~~

The machine type that will be set to all QEMUMachine instances created by
the test.

The ``machine`` attribute will be set to the test parameter of the same
name. If one is not given explicitly, it will either be set to ``None``,
or, if the test is tagged with one (and only one)
``:avocado: tags=machine:VALUE`` tag, it will be set to ``VALUE``.

qemu_bin
~~~~~~~~

The preserved value of the ``qemu_bin`` parameter or the result of the
dynamic probe for a QEMU binary in the current working directory or source
tree.

Parameter reference
-------------------

To understand how Avocado parameters are accessed by tests, and how they can
be passed to tests, please refer to::

  http://avocado-framework.readthedocs.io/en/latest/WritingTests.html#accessing-test-parameters

Parameter values can be easily seen in the log files, and will look like the
following:

.. code::

  PARAMS (key=qemu_bin, path=*, default=x86_64-softmmu/qemu-system-x86_64) => 'x86_64-softmmu/qemu-system-x86_64'

arch
~~~~

The architecture that will influence the selection of a QEMU binary (when
one is not explicitly given).

Tests are also free to use this parameter value, for their own needs. A test
may, for instance, use the same value when selecting the architecture of a
kernel or disk image to boot a VM with.

This parameter has a direct relation with the ``arch`` attribute. If not
given, it will default to ``None``.
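The "one (and only one) tag" resolution rule described for the ``arch``
attribute above can be pictured like this. This is an illustrative
reimplementation for clarity, not the actual ``avocado_qemu`` code, and the
helper names are made up:

```python
import re


def tags_from_docstring(docstring):
    """Collect all tags from ':avocado: tags=...' docstring directives."""
    tags = []
    for m in re.finditer(r":avocado:\s+tags=(\S+)", docstring or ""):
        tags.extend(m.group(1).split(","))
    return tags


def arch_from_tags(docstring):
    """Return VALUE if exactly one 'arch:VALUE' tag is present, else None."""
    values = [t.split(":", 1)[1]
              for t in tags_from_docstring(docstring)
              if t.startswith("arch:")]
    return values[0] if len(values) == 1 else None
```

With this rule, a test docstring containing only
``:avocado: tags=arch:x86_64,quick`` resolves to ``x86_64``, while a
docstring with zero or multiple ``arch:`` tags resolves to ``None``.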
machine
~~~~~~~

The machine type that will be set to all QEMUMachine instances created by
the test.


qemu_bin
~~~~~~~~

The exact QEMU binary to be used by QEMUMachine.

Uninstalling Avocado
--------------------

If you've followed the manual installation instructions above, you can
easily uninstall Avocado. Start by listing the packages you have
installed::

  pip list --user

And remove any package you want with::

  pip uninstall <package_name>

If you've used ``make check-acceptance``, the Python virtual environment
where Avocado is installed will be cleaned up as part of ``make
check-clean``.

Testing with "make check-tcg"
=============================

The check-tcg tests are intended for simple smoke tests of both linux-user
and softmmu TCG functionality. However, to build test programs for guest
targets you need to have cross compilers available. If your distribution
supports cross compilers, you can do something as simple as::

  apt install gcc-aarch64-linux-gnu

The configure script will automatically pick up their presence. Sometimes
compilers have slightly odd names, so you may need to point configure at
them explicitly by passing the appropriate option for the architecture in
question, for example::

  $(configure) --cross-cc-aarch64=aarch64-cc

There is also a ``--cross-cc-flags-ARCH`` option in case additional compiler
flags are needed to build for a given target.

If you have the ability to run containers as the user, you can also take
advantage of the build system's "Docker" support. It will then use
containers to build any test case for an enabled guest where there is no
system compiler available. See :ref:`docker-ref` for details.
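Conceptually, the cross compiler detection described above amounts to a
lookup like the following. This is a simplified Python illustration, not the
actual logic in ``configure``; the candidate compiler names are only
examples of the conventional triplet-prefixed naming:

```python
import shutil


def probe_cross_cc(arch, explicit_cc=None):
    """Return the path of a usable cross compiler for arch, or None.

    An explicitly configured compiler (the --cross-cc-ARCH option) wins;
    otherwise try a few conventional triplet-prefixed names on $PATH.
    """
    candidates = [explicit_cc] if explicit_cc else [
        "%s-linux-gnu-gcc" % arch,   # Debian/Ubuntu style cross gcc
        "%s-linux-gnu-cc" % arch,
    ]
    for cc in candidates:
        path = shutil.which(cc)
        if path:
            return path
    return None
```

When the probe returns ``None`` for a guest architecture, the build system
can fall back to a containerized compiler, as described above.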
Running a subset of tests
-------------------------

You can build the tests for one architecture::

  make build-tcg-tests-$TARGET

And run with::

  make run-tcg-tests-$TARGET

Adding ``V=1`` to the invocation will show the details of how QEMU is
invoked for each test, which is useful for debugging tests.

TCG test dependencies
---------------------

The TCG tests are deliberately very light on dependencies and are either
totally bare with minimal gcc lib support (for softmmu tests) or just glibc
(for linux-user tests). This is because getting a cross compiler to work
with additional libraries can be challenging.

Other TCG Tests
---------------

There are a number of out-of-tree test suites that are used for more
extensive testing of processor features.

KVM Unit Tests
~~~~~~~~~~~~~~

The KVM unit tests are designed to run as a guest OS under KVM, but there is
no reason why they can't exercise the TCG as well. The suite provides a
minimal OS kernel with hooks for enabling the MMU, as well as reporting test
results via a special device::

  https://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git

Linux Test Project
~~~~~~~~~~~~~~~~~~

The LTP is focused on exercising the syscall interface of a Linux kernel. It
checks that syscalls behave as documented and strives to exercise as many
corner cases as possible. It is a useful test suite to run to exercise
QEMU's linux-user code::

  https://linux-test-project.github.io/