===============
Testing in QEMU
===============

This document describes the testing infrastructure in QEMU.

Testing with "make check"
=========================

The "make check" testing family includes most of the C based tests in QEMU. For
quick help, run ``make check-help`` from the source tree.

The usual way to run these tests is:

.. code::

  make check

which includes QAPI schema tests, unit tests, QTests and some iotests.
The different sub-types of "make check" tests are explained below.

Before running tests, it is best to build the QEMU programs first. Some tests
expect the executables to exist and will fail with obscure messages if they
cannot find them.

Unit tests
----------

Unit tests, which can be invoked with ``make check-unit``, are simple C tests
that typically link to individual QEMU object files and exercise them by
calling exported functions.

If you are writing new code in QEMU, consider adding a unit test, especially
for utility modules that are relatively stateless or have few dependencies. To
add a new unit test:

1. Create a new source file. For example, ``tests/foo-test.c``.

2. Write the test. Normally you would include the header file which exports
   the module API, then verify the interface behaves as expected from your
   test. The test code should be organized using the glib testing framework.
   Copying and modifying an existing test is usually a good idea.

3. Add the test to ``tests/meson.build``. The unit tests are listed in a
   dictionary called ``tests``.  The values are any additional sources and
   dependencies to be linked with the test.  For a simple test whose source
   is in ``tests/foo-test.c``, it is enough to add an entry like::

     {
       ...
       'foo-test': [],
       ...
     }
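
As a rough sketch, a minimal glib-based unit test could look like the
following; the ``qemu/foo.h`` header and the ``foo_add()`` function are
hypothetical, but the ``g_test_init()``/``g_test_add_func()``/``g_test_run()``
pattern is what existing tests use:

.. code::

  /* tests/foo-test.c */
  #include "qemu/osdep.h"
  #include "qemu/foo.h"   /* hypothetical header exporting foo_add() */

  static void test_foo_add(void)
  {
      g_assert_cmpint(foo_add(1, 2), ==, 3);
  }

  int main(int argc, char **argv)
  {
      g_test_init(&argc, &argv, NULL);
      g_test_add_func("/foo/add", test_foo_add);
      return g_test_run();
  }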

Since unit tests don't require environment variables, the simplest way to debug
a unit test failure is often to invoke it directly or even to run it under
``gdb``. However, there can still be differences in behavior between ``make``
invocations and your manual run, due to the ``$MALLOC_PERTURB_`` environment
variable (which affects memory reclamation and catches invalid pointers better)
and gtester options. If necessary, you can run

.. code::

  make check-unit V=1

and copy the actual command line which executes the unit test, then run
it from the command line.

QTest
-----

QTest is a device emulation testing framework.  It can be very useful for
testing device models; it can also control certain aspects of QEMU (such as
virtual clock stepping) through a special purpose "qtest" protocol.  Refer to
:doc:`qtest` for more details.

QTest cases can be executed with

.. code::

   make check-qtest
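
For orientation, a qtest case is plain C driven by libqtest; a minimal sketch
might look like the following (the device, the register address and the
expected value are hypothetical, and the exact header path can differ between
QEMU versions):

.. code::

  /* tests/qtest/foo-device-test.c (hypothetical device) */
  #include "qemu/osdep.h"
  #include "libqtest.h"

  static void test_foo_register(void)
  {
      QTestState *s = qtest_init("-machine none -device foo");
      /* read a hypothetical ID register and check its reset value */
      g_assert_cmpuint(qtest_readl(s, 0x10000000), ==, 0xdeadbeef);
      qtest_quit(s);
  }

  int main(int argc, char **argv)
  {
      g_test_init(&argc, &argv, NULL);
      qtest_add_func("/foo/register", test_foo_register);
      return g_test_run();
  }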

QAPI schema tests
-----------------

The QAPI schema tests validate the QAPI parser used by QMP, by feeding
predefined input to the parser and comparing the result with the reference
output.

The input/output data is managed under the ``tests/qapi-schema`` directory.
Each test case includes four files that have a common base name:

  * ``${casename}.json`` - the file contains the JSON input for feeding the
    parser
  * ``${casename}.out`` - the file contains the expected stdout from the parser
  * ``${casename}.err`` - the file contains the expected stderr from the parser
  * ``${casename}.exit`` - the expected exit code

Consider adding a new QAPI schema test when you are making a change to the QAPI
parser (either fixing a bug or extending/modifying the syntax). To do this:

1. Add four files for the new case as explained above. For example:

  ``$EDITOR tests/qapi-schema/foo.{json,out,err,exit}``.

2. Add the new test in ``tests/Makefile.include``. For example:

  ``qapi-schema += foo.json``
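
As a purely hypothetical illustration, ``foo.json`` could contain an input
that the parser must reject, with ``foo.err`` holding the expected error
message, ``foo.out`` left empty, and ``foo.exit`` holding the expected
non-zero exit code:

.. code::

  # tests/qapi-schema/foo.json: the parser should reject the unknown key
  { 'command': 'foo', 'data': { 'value': 'int' },
    'bogus': true }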

check-block
-----------

``make check-block`` runs a subset of the block layer iotests (the tests that
are in the "auto" group).
See the "QEMU iotests" section below for more information.

GCC gcov support
----------------

``gcov`` is a GCC tool to analyze test coverage by instrumenting the tested
code. To use it, configure QEMU with the ``--enable-gcov`` option and build.
Then run ``make check`` as usual.

If you want to gather coverage information on a single test, the ``make
clean-gcda`` target can be used to delete any existing coverage
information before running that test.

You can generate an HTML coverage report by executing ``make
coverage-html``, which will create
``meson-logs/coveragereport/index.html``.

Further analysis can be conducted by running the ``gcov`` command
directly on the various .gcda output files. Please read the ``gcov``
documentation for more information.
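
For example (the paths are placeholders; the exact object directory depends on
the build layout):

.. code::

  make clean-gcda
  make check-unit
  # run gcov against a source file, pointing it at the directory that
  # holds the corresponding .gcda files
  gcov -o <path to .gcda directory> <path to source file>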

QEMU iotests
============

QEMU iotests, under the directory ``tests/qemu-iotests``, is the testing
framework widely used to test block layer related features. It is higher level
than the "make check" tests and most of the code is written in Bash or Python
scripts.  The success criterion is golden output comparison, and the
test files are named with numbers.

To run iotests, make sure QEMU is built successfully, then switch to the
``tests/qemu-iotests`` directory under the build directory, and run ``./check``
with the desired arguments from there.

By default, the "raw" format and the "file" protocol are used; all tests will
be executed, except the unsupported ones. You can override the format and
protocol with arguments:

.. code::

  # test with qcow2 format
  ./check -qcow2
  # or test a different protocol
  ./check -nbd

It's also possible to list test numbers explicitly:

.. code::

  # run selected cases with qcow2 format
  ./check -qcow2 001 030 153

The cache mode can be selected with the "-c" option, which may help reveal
bugs that are specific to certain cache modes.

More options are supported by the ``./check`` script; run ``./check -h`` for
help.

Writing a new test case
-----------------------

Consider writing a test case when you are making any changes to the block
layer. An iotest case is usually the right choice for that. There are already
many test cases, so it is possible that extending one of them may achieve the
goal and save the boilerplate of creating a new one.  (Unfortunately, there
isn't a 100% reliable way to find a related one out of hundreds of tests.  One
approach is using ``git grep``.)

Usually an iotest case consists of two files. One is an executable that
produces output to stdout and stderr, the other is the expected reference
output. They are given the same number in their file names, e.g. test script
``055`` and reference output ``055.out``.

In rare cases, when outputs differ between cache mode ``none`` and others, a
``.out.nocache`` file is added. In other cases, when outputs differ between
image formats, more than one ``.out`` file is created, ending with the
respective format names, e.g. ``178.out.qcow2`` and ``178.out.raw``.

There isn't a hard rule about how to write a test script, but a new test is
usually a (copy and) modification of an existing case.  There are a few
commonly used ways to create a test:

* A Bash script. It will make use of several environment variables related
  to the testing procedure, and can source a group of ``common.*`` libraries
  for some common helper routines (see the Bash sketch after this list).

* A Python unittest script. Import ``iotests`` and create a subclass of
  ``iotests.QMPTestCase``, then call the ``iotests.main`` method (a Python
  sketch follows further below). The downside of this approach is that the
  output is sparse, which makes the script harder to debug.

* A simple Python script without using the unittest module. This can also
  import ``iotests`` for launching QEMU and utilities etc., but it doesn't
  inherit from ``iotests.QMPTestCase`` and therefore doesn't use the Python
  unittest execution. This is a combination of the first two approaches.
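
The following is a rough sketch of a Bash test, modelled on the boilerplate
shared by existing cases (helper names such as ``_make_test_img`` and
``_filter_qemu_io`` come from the ``common.*`` libraries; check a current test
for the exact prologue used in your tree):

.. code::

  #!/usr/bin/env bash
  # group: rw quick
  seq=$(basename "$0")
  echo "QA output created by $seq"

  status=1  # failure is the default!
  _cleanup()
  {
      _cleanup_test_img
  }
  trap "_cleanup; exit \$status" 0 1 2 3 15

  # get standard environment, filters and checks
  . ./common.rc
  . ./common.filter

  _supported_fmt qcow2
  _supported_proto file

  _make_test_img 64M

  echo
  echo "== writing a pattern =="
  $QEMU_IO -c 'write -P 0xa5 0 64k' "$TEST_IMG" | _filter_qemu_io

  # success, all done
  echo "*** done"
  status=0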

Pick the language per your preference since both Bash and Python have
comparable library support for invoking and interacting with QEMU programs. If
you opt for Python, it is strongly recommended to write Python 3 compatible
code.
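
A Python unittest-style case might, as a rough sketch, look like the one
below; the helpers used (``iotests.VM``, ``assert_qmp``, ``iotests.main``)
come from the ``iotests`` module, but treat the details as illustrative rather
than exact:

.. code::

  #!/usr/bin/env python3
  # group: rw quick
  import os
  import iotests

  test_img = os.path.join(iotests.test_dir, 'test.img')

  class TestExample(iotests.QMPTestCase):
      def setUp(self):
          iotests.qemu_img_create('-f', iotests.imgfmt, test_img, '1M')
          self.vm = iotests.VM().add_drive(test_img)
          self.vm.launch()

      def tearDown(self):
          self.vm.shutdown()
          os.remove(test_img)

      def test_query_block(self):
          result = self.vm.qmp('query-block')
          self.assert_qmp(result, 'return[0]/device', 'drive0')

  if __name__ == '__main__':
      iotests.main(supported_fmts=['qcow2'])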

Both the Python and Bash frameworks in iotests provide helpers to manage test
images. They can be used to create and clean up images under the test
directory. If no I/O or protocol specific feature is needed, it is often
more convenient to use the pseudo block driver, ``null-co://``, as the test
image, which doesn't require image creation or cleanup. Avoid system-wide
devices or files whenever possible, such as ``/dev/null`` or ``/dev/zero``.
Otherwise, image locking implications have to be considered.  For example,
another application on the host may have locked the file, possibly leading to a
test failure.  If using such devices is explicitly desired, consider adding
the ``locking=off`` option to disable image locking.
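
If a test simply needs some block device to exercise, a hypothetical
invocation along these lines avoids creating any scratch file (``$QEMU_IMG``
and ``$QEMU_IO`` are the wrappers exported by the iotests environment):

.. code::

  # query the built-in null device instead of a real image
  $QEMU_IMG info null-co://
  # or do some reads against it
  $QEMU_IO -c 'read 0 64k' null-co://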

Test case groups
----------------

Tests may belong to one or more test groups, which are defined in the form
of a comment in the test source file. By convention, test groups are listed
in the second line of the test file, after the "#!/..." line, like this:

.. code::

  #!/usr/bin/env python3
  # group: auto quick
  #
  ...

Another way of defining groups is creating the tests/qemu-iotests/group.local
file. This should be used only for downstream purposes (the file should never
appear upstream). It may be used for defining some downstream test groups
or for temporarily disabling tests, like this:

.. code::

  # groups for some company downstream process
  #
  # ci - tests to run on build
  # down - our downstream tests, not for upstream
  #
  # Format of each line is:
  # TEST_NAME TEST_GROUP [TEST_GROUP ]...

  013 ci
  210 disabled
  215 disabled
  our-ugly-workaround-test down ci

Note that the following group names have a special meaning:

- quick: Tests in this group should finish within a few seconds.

- auto: Tests in this group are used during "make check" and should be
  runnable in any case. That means they should run with every QEMU binary
  (also non-x86), with every QEMU configuration (i.e. must not fail if
  an optional feature is not compiled in - but reporting a "skip" is ok),
  work at least with the qcow2 file format, work with all kinds of host
  filesystems and users (e.g. "nobody" or "root"), and must not take too
  much memory and disk space (since CI pipelines tend to fail otherwise).

- disabled: Tests in this group are disabled and ignored by check.

.. _container-ref:

Container based tests
=====================

Introduction
------------

The container testing framework in QEMU utilizes public images to
build and test QEMU in predefined and widely accessible Linux
environments. This makes it possible to expand the test coverage
across distros, toolchain flavors and library versions. The support
was originally written for Docker, although we also support Podman as
an alternative container runtime. Although many of the target
names and scripts are prefixed with "docker", the system will
automatically run on whichever is configured.

The container images are also used to augment the generation of tests
for testing TCG. See :ref:`checktcg-ref` for more details.

Docker Prerequisites
--------------------

Install "docker" with the system package manager and start the Docker service
on your development machine, then make sure you have the privilege to run
Docker commands. Typically this means setting up passwordless ``sudo docker``
commands or logging in as root. For example:

.. code::

  $ sudo yum install docker
  $ # or `apt-get install docker` for Ubuntu, etc.
  $ sudo systemctl start docker
  $ sudo docker ps

The last command should print an empty table, verifying that the system is
ready.

An alternative method to set up permissions is by adding the current user to
the "docker" group and making the docker daemon socket file (by default
``/var/run/docker.sock``) accessible to the group:

.. code::

  $ sudo groupadd docker
  $ sudo usermod $USER -a -G docker
  $ sudo chown :docker /var/run/docker.sock

Note that any one of the above configurations makes it possible for the user
to exploit the whole host with Docker bind mounting or other privileged
operations.  So only do it on development machines.

Podman Prerequisites
--------------------

Install "podman" with the system package manager.

.. code::

  $ sudo dnf install podman
  $ podman ps

The last command should print an empty table, verifying that the system is
ready.

Quickstart
----------

From the source tree, type ``make docker-help`` to see the help. Testing
can be started without configuring or building QEMU (``configure`` and
``make`` are done in the container, with parameters defined by the
make target):

.. code::

  make docker-test-build@centos8

This will create a container instance using the ``centos8`` image (the image
is downloaded and initialized automatically), in which the ``test-build`` job
is executed.

Registry
--------

The QEMU project has a container registry hosted by GitLab at
``registry.gitlab.com/qemu-project/qemu`` which will automatically be
used to pull in pre-built layers. This avoids unnecessary strain on
the distro archives created by multiple developers running the same
container build steps over and over again. This can be overridden
locally by using the ``NOCACHE`` build option:

.. code::

   make docker-image-debian10 NOCACHE=1

Images
------

Along with many other images, the ``centos8`` image is defined in a Dockerfile
in ``tests/docker/dockerfiles/``, called ``centos8.docker``. The
``make docker-help`` command will list all the available images.

To add a new image, simply create a new ``.docker`` file under the
``tests/docker/dockerfiles/`` directory.
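
A new dockerfile is just a regular Dockerfile; as a hypothetical sketch (the
base distro and package list are illustrative only):

.. code::

  # tests/docker/dockerfiles/example.docker
  FROM fedora:33
  RUN dnf install -y gcc git make ninja-build glib2-devel pixman-devel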

A ``.pre`` script can be added beside the ``.docker`` file, which will be
executed before building the image, with the build context as the working
directory. This is mainly used to do necessary host side setup. One such setup
is ``binfmt_misc``, for example, to make qemu-user powered cross build
containers work.

Tests
-----

Different tests are added to cover various configurations to build and test
QEMU.  Docker tests are the executables under ``tests/docker`` named
``test-*``. They are typically shell scripts and are built on top of a shell
library, ``tests/docker/common.rc``, which provides helpers to find the QEMU
source and build it.
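
For orientation, a test script is typically only a few lines built from the
``common.rc`` helpers; a rough, hypothetical sketch follows (verify the exact
helper names against ``tests/docker/common.rc`` in your tree):

.. code::

  #!/bin/bash -e
  # tests/docker/test-example (hypothetical)
  . common.rc

  cd "$BUILD_DIR"

  TARGET_LIST=x86_64-softmmu build_qemu
  check_qemu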

The full list of tests is printed in the ``make docker-help`` help.

Debugging a Docker test failure
-------------------------------

When a CI task, a maintainer or your own testing reports a Docker test
failure, follow the steps below to debug it:

1. Locally reproduce the failure with the reported command line. E.g. run
   ``make docker-test-mingw@fedora J=8``.
2. Add "V=1" to the command line and try again to see the verbose output.
3. Further add "DEBUG=1" to the command line. This will pause in a shell prompt
   in the container right before testing starts. You can either manually
   build QEMU and run tests from there, or press Ctrl-D to let the Docker
   testing continue.
4. If you press Ctrl-D, the same building and testing procedure will begin, and
   will hopefully run into the error again. After that, you will be dropped to
   the prompt for debugging.

Options
-------

Various options can be used to affect how Docker tests are done. The full
list is in the ``make docker`` help text. The frequently used ones are:

* ``V=1``: the same as in the top level ``make``. It will be propagated to the
  container and enable verbose output.
* ``J=$N``: the number of parallel tasks in make commands in the container,
  similar to the ``-j $N`` option in the top level ``make``. (The ``-j`` option
  in the top level ``make`` will not be propagated into the container.)
* ``DEBUG=1``: enables debug. See the previous "Debugging a Docker test
  failure" section.

Thread Sanitizer
================

Thread Sanitizer (TSan) is a tool which can detect data races.  QEMU supports
building and testing with this tool.

For more information on TSan:

https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual

Thread Sanitizer in Docker
--------------------------
TSan is currently supported in the ubuntu2004 docker image.

The test-tsan test will build using TSan and then run ``make check``.

.. code::

  make docker-test-tsan@ubuntu2004

TSan warnings under docker are placed in files located at build/tsan/.

We recommend using DEBUG=1 to allow launching the test from inside the docker
container, and to allow review of the warnings generated by TSan.

Building and Testing with TSan
------------------------------

It is possible to build and test with TSan, with a few additional steps.
These steps are normally done automatically in the docker container.

At this time, there is a one time patch needed for clang-9 or clang-10:

.. code::

  sed -i 's/^const/static const/g' \
      /usr/lib/llvm-10/lib/clang/10.0.0/include/sanitizer/tsan_interface.h

To configure the build for TSan:

.. code::

  ../configure --enable-tsan --cc=clang-10 --cxx=clang++-10 \
               --disable-werror --extra-cflags="-O0"

The runtime behavior of TSan is controlled by the TSAN_OPTIONS environment
variable.

More information on TSAN_OPTIONS can be found here:

https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags

For example:

.. code::

  export TSAN_OPTIONS=suppressions=<path to qemu>/tests/tsan/suppressions.tsan \
                      detect_deadlocks=false history_size=7 exitcode=0 \
                      log_path=<build path>/tsan/tsan_warning

The above exitcode=0 has TSan continue without error if any warnings are found.
This allows for running the test and then checking the warnings afterwards.
If you want TSan to stop and exit with an error on warnings, use exitcode=66.

TSan Suppressions
-----------------
Keep in mind that for any data race warning, although there might be a data
race detected by TSan, there might be no actual bug.  TSan provides several
different mechanisms for suppressing warnings.  In general it is recommended
to fix the code, if possible, to eliminate the data race rather than suppress
the warning.

A few important files for suppressing warnings are:

tests/tsan/suppressions.tsan - Has TSan warnings we wish to suppress at runtime.
The comment on each suppression will typically indicate why we are
suppressing it.  More information on the file format can be found here:

https://github.com/google/sanitizers/wiki/ThreadSanitizerSuppressions

tests/tsan/blacklist.tsan - Has TSan warnings we wish to disable
at compile time for test or debug.
Add flags to configure to enable it:

"--extra-cflags=-fsanitize-blacklist=<src path>/tests/tsan/blacklist.tsan"

More information on the file format can be found here under "Blacklist Format":

https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags

TSan Annotations
----------------
include/qemu/tsan.h defines annotations.  See this file for more descriptions
of the annotations themselves.  Annotations can be used to suppress
TSan warnings or to give TSan more information so that it can detect proper
relationships between accesses of data.

Annotation examples can be found here:

https://github.com/llvm/llvm-project/tree/master/compiler-rt/test/tsan/

Good files to start with are: annotate_happens_before.cpp and ignore_race.cpp

The full set of annotations can be found here:

https://github.com/llvm/llvm-project/blob/master/compiler-rt/lib/tsan/rtl/tsan_interface_ann.cpp

VM testing
==========

This test suite contains scripts that bootstrap various guest images that have
the necessary packages to build QEMU. The basic usage is documented in the
``Makefile`` help, which is displayed with ``make vm-help``.

Quickstart
----------

Run ``make vm-help`` to list the available make targets. Invoke a specific make
command to run a build test in an image. For example, ``make vm-build-freebsd``
will build the source tree in the FreeBSD image. The command can be executed
from either the source tree or the build dir; if the former, ``./configure`` is
not needed. The command will then generate the test image in ``./tests/vm/``
under the working directory.

Note: images created by the scripts accept a well-known RSA key pair for SSH
access, so they SHOULD NOT be exposed to external interfaces if you are
concerned about attackers taking control of the guest and potentially
exploiting a QEMU security bug to compromise the host.

QEMU binaries
-------------

By default, qemu-system-x86_64 is searched for in $PATH to run the guest. If
there isn't one, or if it is older than 2.10, the test won't work. In this
case, provide the QEMU binary in an environment variable:
``QEMU=/path/to/qemu-2.10+``.

Likewise the path to qemu-img can be set in the QEMU_IMG environment variable.

Make jobs
---------

The ``-j$X`` option in the make command line is not propagated into the VM;
specify ``J=$X`` to control the make jobs in the guest.

Debugging
---------

Add ``DEBUG=1`` and/or ``V=1`` to the make command to allow interactive
debugging and verbose output. If this is not enough, see the next section.
``V=1`` will be propagated down into the make jobs in the guest.

Manual invocation
-----------------

Each guest script is an executable script with the same command line options.
For example, to work with the netbsd guest, use ``$QEMU_SRC/tests/vm/netbsd``:

.. code::

    $ cd $QEMU_SRC/tests/vm

    # To bootstrap the image
    $ ./netbsd --build-image --image /var/tmp/netbsd.img
    <...>

    # To run an arbitrary command in the guest (the output will not be echoed
    # unless --debug is added)
    $ ./netbsd --debug --image /var/tmp/netbsd.img uname -a

    # To build QEMU in the guest
    $ ./netbsd --debug --image /var/tmp/netbsd.img --build-qemu $QEMU_SRC

    # To get to an interactive shell
    $ ./netbsd --interactive --image /var/tmp/netbsd.img sh

Adding new guests
-----------------

Please look at the existing guest scripts for how to add new guests.

Most importantly, create a subclass of BaseVM, implement the ``build_image()``
method, define ``BUILD_SCRIPT``, and finally call ``basevm.main()`` from
the script's ``main()``. A minimal sketch follows the list below.

* Usually in ``build_image()``, a template image is downloaded from a
  predefined URL. ``BaseVM._download_with_cache()`` takes care of the cache and
  the checksum, so consider using it.

* Once the image is downloaded, users, the SSH server and the QEMU build
  dependencies should be set up:

  - Root password set to ``BaseVM.ROOT_PASS``
  - User ``BaseVM.GUEST_USER`` is created, and password set to
    ``BaseVM.GUEST_PASS``
  - SSH service is enabled and started on boot,
    ``$QEMU_SRC/tests/keys/id_rsa.pub`` is added to ssh's ``authorized_keys``
    file of both root and the normal user
  - DHCP client service is enabled and started on boot, so that it can
    automatically configure the virtio-net-pci NIC and communicate with the
    QEMU user net (10.0.2.2)
  - Necessary packages are installed to untar the source tarball and build
    QEMU

* Write a proper ``BUILD_SCRIPT`` template, which should be a shell script that
  untars a raw virtio-blk block device, which is the tarball data blob of the
  QEMU source tree, then configures and builds it. Running "make check" is also
  recommended.
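
As a very rough, hypothetical sketch of the shape of such a script (check
``tests/vm/basevm.py`` and the existing guests for the exact attribute and
helper names):

.. code::

  #!/usr/bin/env python3
  # tests/vm/exampleos (hypothetical guest)
  import sys
  import basevm

  class ExampleOSVM(basevm.BaseVM):
      name = "exampleos"
      arch = "x86_64"
      BUILD_SCRIPT = """
          set -e;
          cd $(mktemp -d);
          tar -xf /dev/vdb;
          ./configure {configure_opts};
          make --output-sync -j{jobs};
          make check;
      """

      def build_image(self, img):
          # download a template image, verified against a checksum
          cimg = self._download_with_cache(
              "https://example.com/exampleos.img.xz",  # placeholder URL
              sha256sum="<expected checksum>")
          # ... set up users, SSH and build dependencies, then write to img

  if __name__ == "__main__":
      sys.exit(basevm.main(ExampleOSVM))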

Image fuzzer testing
====================

An image fuzzer was added to exercise format drivers. Currently only qcow2 is
supported. To start the fuzzer, run

.. code::

  tests/image-fuzzer/runner.py -c '[["qemu-img", "info", "$test_img"]]' /tmp/test qcow2

Alternatively, a command other than "qemu-img info" can be tested by changing
the ``-c`` option.

Acceptance tests using the Avocado Framework
============================================

The ``tests/acceptance`` directory hosts functional tests, also known
as acceptance level tests.  They're usually higher level tests, and
may interact with external resources and with various guest operating
systems.

These tests are written using the Avocado Testing Framework (which must
be installed separately) in conjunction with the ``avocado_qemu.Test``
class, implemented at ``tests/acceptance/avocado_qemu``.

Tests based on ``avocado_qemu.Test`` can easily:

 * Customize the command line arguments given to the convenience
   ``self.vm`` attribute (a QEMUMachine instance)

 * Interact with the QEMU monitor, send QMP commands and check
   their results

 * Interact with the guest OS, using the convenience console device
   (which may be useful to assert the effectiveness and correctness of
   command line arguments or QMP commands)

 * Interact with external data files that accompany the test itself
   (see ``self.get_data()``)

 * Download (and cache) remote data files, such as firmware and kernel
   images

 * Have access to a library of guest OS images (by means of the
   ``avocado.utils.vmimage`` library)

 * Make use of various other test related utilities available at the
   test class itself and at the utility library:

   - http://avocado-framework.readthedocs.io/en/latest/api/test/avocado.html#avocado.Test
   - http://avocado-framework.readthedocs.io/en/latest/api/utils/avocado.utils.html

Running tests
-------------

You can run the acceptance tests simply by executing:

.. code::

  make check-acceptance

This involves the automatic creation of a Python virtual environment
within the build tree (at ``tests/venv``) which will have all the
right dependencies, and will save test results also within the
build tree (at ``tests/results``).

Note: the build environment must be using a Python 3 stack, and have
the ``venv`` and ``pip`` packages installed.  If necessary, make sure
``configure`` is called with ``--python=`` and that those modules are
available.  On Debian and Ubuntu based systems, depending on the
specific version, they may be in packages named ``python3-venv`` and
``python3-pip``.

The scripts installed inside the virtual environment may be used
without an "activation".  For instance, the Avocado test runner
may be invoked by running:

.. code::

  tests/venv/bin/avocado run $OPTION1 $OPTION2 tests/acceptance/

Manual Installation
-------------------

To manually install Avocado and its dependencies, run:

.. code::

  pip install --user avocado-framework

Alternatively, follow the instructions on this link:

  https://avocado-framework.readthedocs.io/en/latest/guides/user/chapters/installing.html

Overview
--------

The ``tests/acceptance/avocado_qemu`` directory provides the
``avocado_qemu`` Python module, containing the ``avocado_qemu.Test``
class.  Here's a simple usage example:

.. code::

  from avocado_qemu import Test


  class Version(Test):
      """
      :avocado: tags=quick
      """
      def test_qmp_human_info_version(self):
          self.vm.launch()
          res = self.vm.command('human-monitor-command',
                                command_line='info version')
          self.assertRegexpMatches(res, r'^(\d+\.\d+\.\d)')

To execute your test, run:

.. code::

  avocado run version.py

Tests may be classified according to a convention by using docstring
directives such as ``:avocado: tags=TAG1,TAG2``.  To run all tests
in the current directory, tagged as "quick", run:

.. code::

  avocado run -t quick .

The ``avocado_qemu.Test`` base test class
-----------------------------------------

The ``avocado_qemu.Test`` class has a number of characteristics that
are worth mentioning right away.

First of all, it attempts to give each test a ready to use QEMUMachine
instance, available at ``self.vm``.  Because many tests will tweak the
QEMU command line, launching the QEMUMachine (by using ``self.vm.launch()``)
is left to the test writer.

The base test class also has support for tests with more than one
QEMUMachine. The way to get machines is through the ``self.get_vm()``
method, which will return a QEMUMachine instance. The ``self.get_vm()``
method accepts arguments that will be passed to the QEMUMachine creation
and also an optional ``name`` attribute so you can identify a specific
machine and get it more than once through the test's methods. A simple
and hypothetical example follows:

.. code::

  from avocado_qemu import Test


  class MultipleMachines(Test):
      def test_multiple_machines(self):
          first_machine = self.get_vm()
          second_machine = self.get_vm()
          self.get_vm(name='third_machine').launch()

          first_machine.launch()
          second_machine.launch()

          first_res = first_machine.command(
              'human-monitor-command',
              command_line='info version')

          second_res = second_machine.command(
              'human-monitor-command',
              command_line='info version')

          third_res = self.get_vm(name='third_machine').command(
              'human-monitor-command',
              command_line='info version')

          self.assertEqual(first_res, second_res)
          self.assertEqual(second_res, third_res)

At test "tear down", ``avocado_qemu.Test`` handles the shutdown of all the
QEMUMachines.

QEMUMachine
~~~~~~~~~~~

The QEMUMachine API is already widely used in the Python iotests,
device-crash-test and other Python scripts.  It's a wrapper around the
execution of a QEMU binary, giving its users:

 * the ability to set command line arguments to be given to the QEMU
   binary

 * a ready to use QMP connection and interface, which can be used to
   send commands and inspect its results, as well as asynchronous
   events

 * convenience methods to set commonly used command line arguments in
   a more succinct and intuitive way

QEMU binary selection
~~~~~~~~~~~~~~~~~~~~~

The QEMU binary used for the ``self.vm`` QEMUMachine instance will
primarily depend on the value of the ``qemu_bin`` parameter.  If it's
not explicitly set, its default value will be the result of a dynamic
probe in the same source tree.  A suitable binary will be one that
targets the architecture matching the host machine.

Based on this description, test writers will usually rely on one of
the following approaches:

1) Set ``qemu_bin``, and use the given binary

2) Do not set ``qemu_bin``, and use a QEMU binary named like
   "qemu-system-${arch}", either in the current
   working directory, or in the current source tree.

The resulting ``qemu_bin`` value will be preserved in the
``avocado_qemu.Test`` as an attribute with the same name.

Attribute reference
-------------------

Besides the attributes and methods that are part of the base
``avocado.Test`` class, the following attributes are available on any
``avocado_qemu.Test`` instance.

vm
~~

A QEMUMachine instance, initially configured according to the given
``qemu_bin`` parameter.

arch
~~~~

The architecture can be used on different levels of the stack, e.g. by
the framework or by the test itself.  At the framework level, it will
currently influence the selection of a QEMU binary (when one is not
explicitly given).

Tests are also free to use this attribute value, for their own needs.
A test may, for instance, use the same value when selecting the
architecture of a kernel or disk image to boot a VM with.

The ``arch`` attribute will be set to the test parameter of the same
name.  If one is not given explicitly, it will either be set to
``None``, or, if the test is tagged with one (and only one)
``:avocado: tags=arch:VALUE`` tag, it will be set to ``VALUE``.

machine
~~~~~~~

The machine type that will be set on all QEMUMachine instances created
by the test.

The ``machine`` attribute will be set to the test parameter of the same
name.  If one is not given explicitly, it will either be set to
``None``, or, if the test is tagged with one (and only one)
``:avocado: tags=machine:VALUE`` tag, it will be set to ``VALUE``.

qemu_bin
~~~~~~~~

The preserved value of the ``qemu_bin`` parameter or the result of the
dynamic probe for a QEMU binary in the current working directory or
source tree.

Parameter reference
-------------------

To understand how Avocado parameters are accessed by tests, and how
they can be passed to tests, please refer to::

  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#accessing-test-parameters

Parameter values can be easily seen in the log files, and will look
like the following:

.. code::

  PARAMS (key=qemu_bin, path=*, default=./qemu-system-x86_64) => './qemu-system-x86_64'
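
Parameters can be passed on the Avocado command line; for instance (a
hypothetical invocation, using the runner from the virtual environment):

.. code::

  tests/venv/bin/avocado run -p qemu_bin=./qemu-system-aarch64 tests/acceptance/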

arch
~~~~

The architecture that will influence the selection of a QEMU binary
(when one is not explicitly given).

Tests are also free to use this parameter value, for their own needs.
A test may, for instance, use the same value when selecting the
architecture of a kernel or disk image to boot a VM with.

This parameter has a direct relation with the ``arch`` attribute.  If
not given, it will default to None.

machine
~~~~~~~

The machine type that will be set on all QEMUMachine instances created
by the test.


qemu_bin
~~~~~~~~

The exact QEMU binary to be used on QEMUMachine.

Skipping tests
--------------
The Avocado framework provides Python decorators which allow for easily
skipping tests under certain conditions, for example, when a binary is missing
on the test system or when the running environment is a CI system. For further
information about those decorators, please refer to::

  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#skipping-tests

While the conditions for skipping tests are often specific to each one, there
are recurring scenarios identified by the QEMU developers, and the use of
environment variables became a kind of standard way to enable/disable tests.

Here is a list of the most used variables:

AVOCADO_ALLOW_LARGE_STORAGE
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Tests which are going to fetch or produce assets considered *large* are not
going to run unless ``AVOCADO_ALLOW_LARGE_STORAGE=1`` is exported in
the environment.

The definition of *large* is a bit arbitrary here, but it usually means an
asset which occupies at least 1GB of size on disk when uncompressed.

AVOCADO_ALLOW_UNTRUSTED_CODE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are tests which will boot a kernel image or firmware that can be
considered not safe to run on the developer's workstation, thus they are
skipped by default. The definition of *not safe* is also arbitrary, but
usually it means a blob whose source or build process isn't publicly
available.

You should export ``AVOCADO_ALLOW_UNTRUSTED_CODE=1`` in the environment in
order to allow tests which make use of those kinds of assets.

AVOCADO_TIMEOUT_EXPECTED
~~~~~~~~~~~~~~~~~~~~~~~~
The Avocado framework has a timeout mechanism which interrupts tests to keep
the test suite from getting stuck. The timeout value can be set via a test
parameter or a property defined in the test class; for further details see::

  https://avocado-framework.readthedocs.io/en/latest/guides/writer/chapters/writing.html#setting-a-test-timeout

Even though the timeout can be set by the test developer, there are some tests
that may not have a well-defined limit of time to finish under certain
conditions. For example, tests that take longer to execute when QEMU is
compiled with debug flags. Therefore, the ``AVOCADO_TIMEOUT_EXPECTED`` variable
has been used to determine whether those tests should run or not.

GITLAB_CI
~~~~~~~~~
A number of tests are flagged to not run on the GitLab CI, usually because
they proved to be flaky or because there are constraints on the CI environment
which would make them fail. If you encounter a similar situation then use that
variable as shown in the code snippet below to skip the test:

.. code::

  @skipIf(os.getenv('GITLAB_CI'), 'Running on GitLab')
  def test(self):
      do_something()

Uninstalling Avocado
--------------------

If you've followed the manual installation instructions above, you can
easily uninstall Avocado.  Start by listing the packages you have
installed::

  pip list --user

And remove any package you want with::

  pip uninstall <package_name>

If you've used ``make check-acceptance``, the Python virtual environment where
Avocado is installed will be cleaned up as part of ``make check-clean``.

.. _checktcg-ref:

Testing with "make check-tcg"
=============================

The check-tcg tests are intended for simple smoke tests of both
linux-user and softmmu TCG functionality. However, to build test
programs for guest targets you need to have cross compilers available.
If your distribution supports cross compilers you can do something as
simple as::

  apt install gcc-aarch64-linux-gnu

The configure script will automatically pick up their presence.
Sometimes compilers have slightly odd names, so their availability
can be indicated by passing in the appropriate configure option
for the architecture in question, for example::

  $(configure) --cross-cc-aarch64=aarch64-cc

There is also a ``--cross-cc-flags-ARCH`` flag in case additional
compiler flags are needed to build for a given target.

If you have the ability to run containers as the user, the build system
will automatically use them where no system compiler is available. For
architectures where we also support building QEMU we will generally
use the same container to build tests. However, there are a number of
additional containers defined that have a minimal cross-build
environment that is only suitable for building test cases. Sometimes
we may use a bleeding edge distribution for compiler features needed
for test cases that aren't yet in the LTS distros we support for QEMU
itself.

See :ref:`container-ref` for more details.

Running a subset of tests
-------------------------

You can build the tests for one architecture::

  make build-tcg-tests-$TARGET

And run them with::

  make run-tcg-tests-$TARGET

Adding ``V=1`` to the invocation will show the details of how to
invoke QEMU for the test, which is useful for debugging tests.

TCG test dependencies
---------------------

The TCG tests are deliberately very light on dependencies and are
either totally bare with minimal gcc lib support (for softmmu tests)
or just glibc (for linux-user tests). This is because getting a cross
compiler to work with additional libraries can be challenging.

Other TCG Tests
---------------

There are a number of out-of-tree test suites that are used for more
extensive testing of processor features.

KVM Unit Tests
~~~~~~~~~~~~~~

The KVM unit tests are designed to run as a guest OS under KVM, but
there is no reason why they can't exercise the TCG as well. They
provide a minimal OS kernel with hooks for enabling the MMU as well
as reporting test results via a special device::

  https://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git

Linux Test Project
~~~~~~~~~~~~~~~~~~

The LTP is focused on exercising the syscall interface of a Linux
kernel. It checks that syscalls behave as documented and strives to
exercise as many corner cases as possible. It is a useful test suite
to run to exercise QEMU's linux-user code::

  https://linux-test-project.github.io/
1099