===============
Testing in QEMU
===============

This document describes the testing infrastructure in QEMU.

Testing with "make check"
=========================

The "make check" testing family includes most of the C based tests in QEMU. For
a quick help, run ``make check-help`` from the source tree.

The usual way to run these tests is:

.. code::

  make check

which includes QAPI schema tests, unit tests, QTests and some iotests.
Different sub-types of "make check" tests are explained below.

Before running tests, it is best to build the QEMU programs first. Some tests
expect the executables to exist and will fail with obscure messages if they
cannot find them.

Unit tests
----------

Unit tests, which can be invoked with ``make check-unit``, are simple C tests
that typically link to individual QEMU object files and exercise them by
calling exported functions.

If you are writing new code in QEMU, consider adding a unit test, especially
for utility modules that are relatively stateless or have few dependencies. To
add a new unit test:

1. Create a new source file. For example, ``tests/foo-test.c``.

2. Write the test. Normally you would include the header file which exports
   the module API, then verify the interface behaves as expected from your
   test. The test code should be organized with the glib testing framework.
   Copying and modifying an existing test is usually a good idea.

3. Add the test to ``tests/Makefile.include``. First, name the unit test
   program and add it to ``$(check-unit-y)``; then add a rule to build the
   executable.  For example:

.. code::

  check-unit-y += tests/foo-test$(EXESUF)
  tests/foo-test$(EXESUF): tests/foo-test.o $(test-util-obj-y)
  ...

Since unit tests don't require environment variables, the simplest way to debug
a unit test failure is often to invoke it directly, or even run it under
``gdb``. However, there can still be differences in behavior between ``make``
invocations and your manual run, due to the ``$MALLOC_PERTURB_`` environment
variable (which affects memory reclamation and catches invalid pointers better)
and gtester options. If necessary, you can run

.. code::

  make check-unit V=1

and copy the actual command line which executes the unit test, then run
it from the command line.

QTest
-----

QTest is a device emulation testing framework.  It can be very useful to test
device models; it can also control certain aspects of QEMU (such as virtual
clock stepping) with a special purpose "qtest" protocol.  Refer to the
documentation in ``qtest.c`` for more details of the protocol.
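
As a rough illustration, the qtest protocol consists of one-line, plain-text
commands sent over a socket. The helper below is a hypothetical sketch of that
wire format, not part of libqtest; the command names and exact syntax are
defined authoritatively in ``qtest.c``:

.. code:: python

  # Illustrative sketch of the qtest wire format: plain-text commands
  # such as I/O port or memory accesses, one per line. This helper is
  # hypothetical; see qtest.c for the authoritative protocol.
  def qtest_cmd(name, *args):
      """Format one qtest protocol command line."""
      return ' '.join([name] + ['0x%x' % a for a in args]) + '\n'

  # e.g. write the byte 0x61 to I/O port 0x3f8
  line = qtest_cmd('outb', 0x3f8, 0x61)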

QTest cases can be executed with

.. code::

   make check-qtest

The QTest library is implemented by ``tests/qtest/libqtest.c`` and the API is
defined in ``tests/qtest/libqtest.h``.

Consider adding a new QTest case when you are introducing a new piece of
virtual hardware, or extending an existing test when you are adding
functionality to an existing virtual device.

On top of libqtest, a higher level library, ``libqos``, was created to
encapsulate common tasks of device drivers, such as memory management and
communicating with system buses or devices. Many virtual device tests use
libqos instead of directly calling into libqtest.

Steps to add a new QTest case are:

1. Create a new source file for the test. (More than one file can be added as
   necessary.) For example, ``tests/qtest/foo-test.c``.

2. Write the test code with the glib and libqtest/libqos APIs. See also
   existing tests and the library headers for reference.

3. Register the new test in ``tests/qtest/Makefile.include``. Add the test
   executable name to an appropriate ``check-qtest-*-y`` variable. For example:

   ``check-qtest-generic-y = tests/qtest/foo-test$(EXESUF)``

4. Add object dependencies of the executable in the Makefile, including the
   test source file(s) and other interesting objects. For example:

   ``tests/qtest/foo-test$(EXESUF): tests/qtest/foo-test.o $(libqos-obj-y)``

Debugging a QTest failure is slightly harder than debugging a unit test,
because the tests look up QEMU program names in environment variables such as
``QTEST_QEMU_BINARY`` and ``QTEST_QEMU_IMG``, and because it is not easy to
attach gdb to the QEMU process spawned by the test. But invoking the test
manually and running it under gdb is still simple to do: find out the actual
command from the output of

.. code::

  make check-qtest V=1

which you can run manually.

QAPI schema tests
-----------------

The QAPI schema tests validate the QAPI parser used by QMP, by feeding
predefined input to the parser and comparing the result with the reference
output.

The input/output data is managed under the ``tests/qapi-schema`` directory.
Each test case includes four files that have a common base name:

  * ``${casename}.json`` - contains the JSON input for feeding the parser
  * ``${casename}.out`` - contains the expected stdout from the parser
  * ``${casename}.err`` - contains the expected stderr from the parser
  * ``${casename}.exit`` - contains the expected exit code

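The comparison logic driving these cases can be sketched as follows. This is a
hedged, self-contained illustration (``run_case`` is a hypothetical helper),
not the actual harness under ``tests/qapi-schema``:

.. code:: python

  # Sketch of golden-output comparison: run a parser command on
  # ${casename}.json and compare its stdout, stderr and exit code with
  # the reference files. run_case() is hypothetical, not the real harness.
  import pathlib
  import subprocess

  def run_case(cmd, casedir, casename):
      base = str(pathlib.Path(casedir) / casename)
      proc = subprocess.run(cmd + [base + '.json'],
                            capture_output=True, text=True)
      return (proc.stdout == pathlib.Path(base + '.out').read_text()
              and proc.stderr == pathlib.Path(base + '.err').read_text()
              and proc.returncode == int(pathlib.Path(base + '.exit').read_text()))
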
Consider adding a new QAPI schema test when you are making a change to the QAPI
parser (either fixing a bug or extending/modifying the syntax). To do this:

1. Add four files for the new case as explained above. For example:

  ``$EDITOR tests/qapi-schema/foo.{json,out,err,exit}``.

2. Add the new test in ``tests/Makefile.include``. For example:

  ``qapi-schema += foo.json``

check-block
-----------

``make check-block`` runs a subset of the block layer iotests (the tests that
are in the "auto" group in ``tests/qemu-iotests/group``).
See the "QEMU iotests" section below for more information.

GCC gcov support
----------------

``gcov`` is a GCC tool to analyze testing coverage by
instrumenting the tested code. To use it, configure QEMU with the
``--enable-gcov`` option and build. Then run ``make check`` as usual.

If you want to gather coverage information on a single test, the ``make
clean-coverage`` target can be used to delete any existing coverage
information before running that test.

You can generate an HTML coverage report by executing ``make
coverage-report``, which will create
``./reports/coverage/coverage-report.html``. If you want to create it
elsewhere, simply execute ``make /foo/bar/baz/coverage-report.html``.

Further analysis can be conducted by running the ``gcov`` command
directly on the various ``.gcda`` output files. Please read the ``gcov``
documentation for more information.

QEMU iotests
============

QEMU iotests, under the directory ``tests/qemu-iotests``, is the testing
framework widely used to test block layer related features. It is higher level
than the "make check" tests, and 99% of the code is written as bash or Python
scripts.  The success criterion is golden output comparison, and the
test files are named with numbers.

To run iotests, make sure QEMU is built successfully, then switch to the
``tests/qemu-iotests`` directory under the build directory, and run ``./check``
with the desired arguments from there.

By default, the "raw" format and the "file" protocol are used; all tests will
be executed, except the unsupported ones. You can override the format and
protocol with arguments:

.. code::

  # test with qcow2 format
  ./check -qcow2
  # or test a different protocol
  ./check -nbd

It's also possible to list test numbers explicitly:

.. code::

  # run selected cases with qcow2 format
  ./check -qcow2 001 030 153

The cache mode can be selected with the "-c" option, which may help reveal bugs
that are specific to certain cache modes.

More options are supported by the ``./check`` script; run ``./check -h`` for
help.

Writing a new test case
-----------------------

Consider writing a test case when you are making any change to the block
layer. An iotest case is usually the right choice for that. There are already
many test cases, so it is possible that extending one of them may achieve the
goal and save the boilerplate of creating a new one.  (Unfortunately, there
isn't a 100% reliable way to find a related one out of hundreds of tests.  One
approach is using ``git grep``.)

Usually an iotest case consists of two files. One is an executable that
produces output to stdout and stderr, the other is the expected reference
output. They are given the same number in their file names, e.g. test script
``055`` and reference output ``055.out``.

In rare cases, when outputs differ between cache mode ``none`` and others, a
``.out.nocache`` file is added. In other cases, when outputs differ between
image formats, multiple ``.out`` files are created, ending with the
respective format names, e.g. ``178.out.qcow2`` and ``178.out.raw``.

There isn't a hard rule about how to write a test script, but a new test is
usually a (copy and) modification of an existing case.  There are a few
commonly used ways to create a test:

* A Bash script. It will make use of several environment variables related
  to the testing procedure, and can source a group of ``common.*`` libraries
  for common helper routines.

* A Python unittest script. Import ``iotests`` and create a subclass of
  ``iotests.QMPTestCase``, then call the ``iotests.main`` method. The downside
  of this approach is that the output is sparse, and the script is considered
  harder to debug.

* A simple Python script without using the unittest module. This can also
  import ``iotests`` for launching QEMU and utilities etc., but it doesn't
  inherit from ``iotests.QMPTestCase`` and therefore doesn't use the Python
  unittest execution. This is a combination of the first two approaches.
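
As a shape reference for the third style, a sketch follows. Everything
QEMU-specific is stubbed out here; a real test would import ``iotests`` and
launch QEMU, and ``./check`` would diff the printed output against the
matching ``.out`` file. The log lines below are purely illustrative:

.. code:: python

  # Hedged sketch of a "simple Python script" iotest: the test prints
  # what it did, and success is the printed output matching NNN.out.
  # A real test would import iotests and drive QEMU; here it is stubbed.
  def log(msg):
      # stand-in for iotests.log(), which also filters volatile fields
      print(msg)

  def main():
      log('=== Creating test image ===')
      log('wrote 65536/65536 bytes at offset 0')
      log('*** done')

  if __name__ == '__main__':
      main()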

Pick the language per your preference since both Bash and Python have
comparable library support for invoking and interacting with QEMU programs. If
you opt for Python, it is strongly recommended to write Python 3 compatible
code.

Both the Python and Bash frameworks in iotests provide helpers to manage test
images. They can be used to create and clean up images under the test
directory. If no I/O or protocol specific feature is needed, it is often
more convenient to use the pseudo block driver, ``null-co://``, as the test
image, which doesn't require image creation or cleanup. Avoid system-wide
devices or files whenever possible, such as ``/dev/null`` or ``/dev/zero``.
Otherwise, image locking implications have to be considered.  For example,
another application on the host may have locked the file, possibly leading to a
test failure.  If using such devices is explicitly desired, consider adding the
``locking=off`` option to disable image locking.

.. _docker-ref:

Docker based tests
==================

Introduction
------------

The Docker testing framework in QEMU utilizes public Docker images to build and
test QEMU in predefined and widely accessible Linux environments.  This makes
it possible to expand the test coverage across distros, toolchain flavors and
library versions.

Prerequisites
-------------

Install "docker" with the system package manager and start the Docker service
on your development machine, then make sure you have the privilege to run
Docker commands. Typically this means setting up a passwordless ``sudo docker``
command or logging in as root. For example:

.. code::

  $ sudo yum install docker
  $ # or `apt-get install docker` for Ubuntu, etc.
  $ sudo systemctl start docker
  $ sudo docker ps

The last command should print an empty table, verifying the system is ready.

An alternative method to set up permissions is by adding the current user to
the "docker" group and making the docker daemon socket file (by default
``/var/run/docker.sock``) accessible to the group:

.. code::

  $ sudo groupadd docker
  $ sudo usermod $USER -a -G docker
  $ sudo chown :docker /var/run/docker.sock

Note that any one of the above configurations makes it possible for the user to
exploit the whole host with Docker bind mounting or other privileged
operations.  So only do it on development machines.

Quickstart
----------

From the source tree, type ``make docker`` to see the help. Testing can be
started without configuring or building QEMU (``configure`` and ``make`` are
done in the container, with parameters defined by the make target):

.. code::

  make docker-test-build@min-glib

This will create a container instance using the ``min-glib`` image (the image
is downloaded and initialized automatically), in which the ``test-build`` job
is executed.

Images
------

Along with many other images, the ``min-glib`` image is defined in a Dockerfile
in ``tests/docker/dockerfiles/``, called ``min-glib.docker``. The ``make
docker`` command will list all the available images.

To add a new image, simply create a new ``.docker`` file under the
``tests/docker/dockerfiles/`` directory.

A ``.pre`` script can be added beside the ``.docker`` file, which will be
executed before building the image under the build context directory. This is
mainly used to do necessary host side setup. One such setup is registering
``binfmt_misc`` handlers, for example, to make qemu-user powered cross-build
containers work.

Tests
-----

Different tests are added to cover various configurations to build and test
QEMU.  Docker tests are the executables under ``tests/docker`` named
``test-*``. They are typically shell scripts and are built on top of a shell
library, ``tests/docker/common.rc``, which provides helpers to find the QEMU
source and build it.

The full list of tests is printed in the ``make docker`` help.

Tools
-----

There are executables that are created to run in a specific Docker environment.
This makes it easy to write scripts that have heavy or special dependencies,
but are still very easy to use.

Currently the only tool is ``travis``, which mimics the Travis-CI tests in a
container. It runs in the ``travis`` image:

.. code::

  make docker-travis@travis

Debugging a Docker test failure
-------------------------------

When CI tasks, maintainers, or you yourself report a Docker test failure,
follow the steps below to debug it:

1. Locally reproduce the failure with the reported command line. E.g. run
   ``make docker-test-mingw@fedora J=8``.
2. Add "V=1" to the command line and try again to see the verbose output.
3. Further add "DEBUG=1" to the command line. This will pause at a shell prompt
   in the container right before testing starts. You can either manually
   build QEMU and run tests from there, or press Ctrl-D to let the Docker
   testing continue.
4. If you press Ctrl-D, the same build and test procedure will begin, and
   will hopefully run into the error again. After that, you will be dropped
   back to the prompt to debug.

Options
-------

Various options can be used to affect how Docker tests are done. The full
list is in the ``make docker`` help text. The frequently used ones are:

* ``V=1``: the same as in top level ``make``. It will be propagated to the
  container and enable verbose output.
* ``J=$N``: the number of parallel tasks in make commands in the container,
  similar to the ``-j $N`` option in top level ``make``. (The ``-j`` option in
  top level ``make`` will not be propagated into the container.)
* ``DEBUG=1``: enables debugging. See the previous "Debugging a Docker test
  failure" section.

Thread Sanitizer
================

Thread Sanitizer (TSan) is a tool which can detect data races.  QEMU supports
building and testing with this tool.

For more information on TSan:

https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual

Thread Sanitizer in Docker
---------------------------
TSan is currently supported in the ``ubuntu2004`` docker image.

The ``test-tsan`` test will build QEMU with TSan and then run ``make check``.

.. code::

  make docker-test-tsan@ubuntu2004

TSan warnings under docker are placed in files located at ``build/tsan/``.

We recommend using ``DEBUG=1`` to allow launching the test from inside the
docker container, and to allow review of the warnings generated by TSan.

Building and Testing with TSan
------------------------------

It is possible to build and test with TSan, with a few additional steps.
These steps are normally done automatically in the docker container.

At this time, a one-time patch is needed for clang-9 or clang-10:

.. code::

  sed -i 's/^const/static const/g' \
      /usr/lib/llvm-10/lib/clang/10.0.0/include/sanitizer/tsan_interface.h

To configure the build for TSan:

.. code::

  ../configure --enable-tsan --cc=clang-10 --cxx=clang++-10 \
               --disable-werror --extra-cflags="-O0"

The runtime behavior of TSan is controlled by the ``TSAN_OPTIONS`` environment
variable.

More information on ``TSAN_OPTIONS`` can be found here:

https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags

For example:

.. code::

  export TSAN_OPTIONS=suppressions=<path to qemu>/tests/tsan/suppressions.tsan \
                      detect_deadlocks=false history_size=7 exitcode=0 \
                      log_path=<build path>/tsan/tsan_warning

The above ``exitcode=0`` makes TSan continue without error if any warnings are
found. This allows for running the test and then checking the warnings
afterwards. If you want TSan to stop and exit with an error on warnings, use
``exitcode=66``.

TSan Suppressions
-----------------
Keep in mind that a data race warning from TSan does not necessarily indicate
an actual bug.  TSan provides several different mechanisms for suppressing
warnings.  In general it is recommended to fix the code, if possible, to
eliminate the data race rather than suppress the warning.

A few important files for suppressing warnings are:

``tests/tsan/suppressions.tsan`` - Has TSan warnings we wish to suppress at
runtime. The comment on each suppression will typically indicate why we are
suppressing it.  More information on the file format can be found here:

https://github.com/google/sanitizers/wiki/ThreadSanitizerSuppressions

``tests/tsan/blacklist.tsan`` - Has TSan warnings we wish to disable
at compile time for test or debug.
Add flags to configure to enable:

``--extra-cflags=-fsanitize-blacklist=<src path>/tests/tsan/blacklist.tsan``

More information on the file format can be found here under "Blacklist Format":

https://github.com/google/sanitizers/wiki/ThreadSanitizerFlags

TSan Annotations
----------------
``include/qemu/tsan.h`` defines annotations.  See this file for more
descriptions of the annotations themselves.  Annotations can be used to
suppress TSan warnings or give TSan more information so that it can detect
proper relationships between accesses of data.

Annotation examples can be found here:

https://github.com/llvm/llvm-project/tree/master/compiler-rt/test/tsan/

Good files to start with are: ``annotate_happens_before.cpp`` and
``ignore_race.cpp``.

The full set of annotations can be found here:

https://github.com/llvm/llvm-project/blob/master/compiler-rt/lib/tsan/rtl/tsan_interface_ann.cpp

VM testing
==========

This test suite contains scripts that bootstrap various guest images that have
the necessary packages to build QEMU. The basic usage is documented in the
``Makefile`` help, which is displayed with ``make vm-help``.

Quickstart
----------

Run ``make vm-help`` to list the available make targets. Invoke a specific make
command to run a build test in an image. For example, ``make vm-build-freebsd``
will build the source tree in the FreeBSD image. The command can be executed
from either the source tree or the build dir; if the former, ``./configure`` is
not needed. The command will then generate the test image in ``./tests/vm/``
under the working directory.

Note: images created by the scripts accept a well-known RSA key pair for SSH
access, so they SHOULD NOT be exposed to external interfaces if you are
concerned about attackers taking control of the guest and potentially
exploiting a QEMU security bug to compromise the host.

QEMU binaries
-------------

By default, ``qemu-system-x86_64`` is searched for in ``$PATH`` to run the
guest. If there isn't one, or if it is older than 2.10, the test won't work. In
this case, provide the QEMU binary in an environment variable:
``QEMU=/path/to/qemu-2.10+``.

Likewise, the path to qemu-img can be set in the ``QEMU_IMG`` environment
variable.

Make jobs
---------

The ``-j$X`` option in the make command line is not propagated into the VM;
specify ``J=$X`` to control the make jobs in the guest.

Debugging
---------

Add ``DEBUG=1`` and/or ``V=1`` to the make command to allow interactive
debugging and verbose output. If this is not enough, see the next section.
``V=1`` will be propagated down into the make jobs in the guest.

Manual invocation
-----------------

Each guest script is an executable script with the same command line options.
For example, to work with the netbsd guest, use ``$QEMU_SRC/tests/vm/netbsd``:

.. code::

    $ cd $QEMU_SRC/tests/vm

    # To bootstrap the image
    $ ./netbsd --build-image --image /var/tmp/netbsd.img
    <...>

    # To run an arbitrary command in the guest (the output will not be echoed
    # unless --debug is added)
    $ ./netbsd --debug --image /var/tmp/netbsd.img uname -a

    # To build QEMU in the guest
    $ ./netbsd --debug --image /var/tmp/netbsd.img --build-qemu $QEMU_SRC

    # To get to an interactive shell
    $ ./netbsd --interactive --image /var/tmp/netbsd.img sh

Adding new guests
-----------------

Please look at existing guest scripts for how to add new guests.

Most importantly, create a subclass of BaseVM, implement the ``build_image()``
method and define ``BUILD_SCRIPT``, then finally call ``basevm.main()`` from
the script's ``main()``.

* Usually in ``build_image()``, a template image is downloaded from a
  predefined URL. ``BaseVM._download_with_cache()`` takes care of the cache and
  the checksum, so consider using it.

* Once the image is downloaded, users, the SSH server and QEMU build deps
  should be set up:

  - Root password set to ``BaseVM.ROOT_PASS``
  - User ``BaseVM.GUEST_USER`` is created, and password set to
    ``BaseVM.GUEST_PASS``
  - SSH service is enabled and started on boot,
    ``$QEMU_SRC/tests/keys/id_rsa.pub`` is added to ssh's ``authorized_keys``
    file of both root and the normal user
  - DHCP client service is enabled and started on boot, so that it can
    automatically configure the virtio-net-pci NIC and communicate with QEMU
    user net (10.0.2.2)
  - Necessary packages are installed to untar the source tarball and build
    QEMU

* Write a proper ``BUILD_SCRIPT`` template, which should be a shell script that
  untars the QEMU source tarball from a raw virtio-blk block device, then
  configures and builds it. Running "make check" is also
  recommended.
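
The steps above can be sketched as follows. ``BaseVM`` here is a minimal
stand-in, not QEMU's actual ``tests/vm/basevm.py``; the guest name, device
path, and build script body are all hypothetical:

.. code:: python

  # Hedged sketch of a new guest script. BaseVM is a stand-in for
  # tests/vm/basevm.py; names and the build script body are illustrative.
  class BaseVM:  # stand-in base class
      def _download_with_cache(self, url):
          # the real helper downloads, caches and checksums the image
          raise NotImplementedError

  BUILD_SCRIPT = """
  set -e
  cd $(mktemp -d)
  tar -xf /dev/rld1a               # hypothetical raw block device holding the tarball
  cd qemu-test.* && ./configure
  make -j{jobs} && make check
  """

  class NewGuestVM(BaseVM):
      name = 'newguest'            # hypothetical guest name

      def build_image(self, img):
          # download a template image, then set up users, SSH keys,
          # DHCP and build dependencies as listed above
          pass

  # the real script would end with: sys.exit(basevm.main(NewGuestVM))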

Image fuzzer testing
====================

An image fuzzer was added to exercise format drivers. Currently only qcow2 is
supported. To start the fuzzer, run

.. code::

  tests/image-fuzzer/runner.py -c '[["qemu-img", "info", "$test_img"]]' /tmp/test qcow2

Alternatively, a command different from "qemu-img info" can be tested by
changing the ``-c`` option.

Acceptance tests using the Avocado Framework
============================================

The ``tests/acceptance`` directory hosts functional tests, also known
as acceptance level tests.  They're usually higher level tests, and
may interact with external resources and with various guest operating
systems.

These tests are written using the Avocado Testing Framework (which must
be installed separately) in conjunction with the ``avocado_qemu.Test``
class, implemented at ``tests/acceptance/avocado_qemu``.

Tests based on ``avocado_qemu.Test`` can easily:

 * Customize the command line arguments given to the convenience
   ``self.vm`` attribute (a QEMUMachine instance)

 * Interact with the QEMU monitor, send QMP commands and check
   their results

 * Interact with the guest OS, using the convenience console device
   (which may be useful to assert the effectiveness and correctness of
   command line arguments or QMP commands)

 * Interact with external data files that accompany the test itself
   (see ``self.get_data()``)

 * Download (and cache) remote data files, such as firmware and kernel
   images

 * Have access to a library of guest OS images (by means of the
   ``avocado.utils.vmimage`` library)

 * Make use of various other test related utilities available at the
   test class itself and at the utility library:

   - http://avocado-framework.readthedocs.io/en/latest/api/test/avocado.html#avocado.Test
   - http://avocado-framework.readthedocs.io/en/latest/api/utils/avocado.utils.html

Running tests
-------------

You can run the acceptance tests simply by executing:

.. code::

  make check-acceptance

This involves the automatic creation of a Python virtual environment
within the build tree (at ``tests/venv``) which will have all the
right dependencies, and will also save test results within the
build tree (at ``tests/results``).

Note: the build environment must be using a Python 3 stack, and have
the ``venv`` and ``pip`` packages installed.  If necessary, make sure
``configure`` is called with ``--python=`` and that those modules are
available.  On Debian and Ubuntu based systems, depending on the
specific version, they may be in packages named ``python3-venv`` and
``python3-pip``.

The scripts installed inside the virtual environment may be used
without an "activation".  For instance, the Avocado test runner
may be invoked by running:

.. code::

  tests/venv/bin/avocado run $OPTION1 $OPTION2 tests/acceptance/

Manual Installation
-------------------

To manually install Avocado and its dependencies, run:

.. code::

  pip install --user avocado-framework

Alternatively, follow the instructions on this link:

  http://avocado-framework.readthedocs.io/en/latest/GetStartedGuide.html#installing-avocado

Overview
--------

The ``tests/acceptance/avocado_qemu`` directory provides the
``avocado_qemu`` Python module, containing the ``avocado_qemu.Test``
class.  Here's a simple usage example:

.. code::

  from avocado_qemu import Test


  class Version(Test):
      """
      :avocado: tags=quick
      """
      def test_qmp_human_info_version(self):
          self.vm.launch()
          res = self.vm.command('human-monitor-command',
                                command_line='info version')
          self.assertRegexpMatches(res, r'^(\d+\.\d+\.\d)')

To execute your test, run:

.. code::

  avocado run version.py

Tests may be classified according to a convention by using docstring
directives such as ``:avocado: tags=TAG1,TAG2``.  To run all tests
in the current directory, tagged as "quick", run:

.. code::

  avocado run -t quick .

The ``avocado_qemu.Test`` base test class
-----------------------------------------

The ``avocado_qemu.Test`` class has a number of characteristics that
are worth mentioning right away.

First of all, it attempts to give each test a ready to use QEMUMachine
instance, available at ``self.vm``.  Because many tests will tweak the
QEMU command line, launching the QEMUMachine (by using ``self.vm.launch()``)
is left to the test writer.

The base test class also has support for tests with more than one
QEMUMachine. The way to get machines is through the ``self.get_vm()``
method, which will return a QEMUMachine instance. The ``self.get_vm()``
method accepts arguments that will be passed to the QEMUMachine creation,
and also an optional ``name`` attribute so you can identify a specific
machine and get it more than once through the test's methods. A simple
and hypothetical example follows:

.. code::

  from avocado_qemu import Test


  class MultipleMachines(Test):
      """
      :avocado: enable
      """
      def test_multiple_machines(self):
          first_machine = self.get_vm()
          second_machine = self.get_vm()
          self.get_vm(name='third_machine').launch()

          first_machine.launch()
          second_machine.launch()

          first_res = first_machine.command(
              'human-monitor-command',
              command_line='info version')

          second_res = second_machine.command(
              'human-monitor-command',
              command_line='info version')

          third_res = self.get_vm(name='third_machine').command(
              'human-monitor-command',
              command_line='info version')

          self.assertEqual(first_res, second_res)
          self.assertEqual(second_res, third_res)

At test "tear down", ``avocado_qemu.Test`` handles the shutdown of all the
QEMUMachines.
790
791QEMUMachine
792~~~~~~~~~~~
793
794The QEMUMachine API is already widely used in the Python iotests,
795device-crash-test and other Python scripts.  It's a wrapper around the
796execution of a QEMU binary, giving its users:
797
798 * the ability to set command line arguments to be given to the QEMU
799   binary
800
801 * a ready to use QMP connection and interface, which can be used to
802   send commands and inspect its results, as well as asynchronous
803   events
804
805 * convenience methods to set commonly used command line arguments in
806   a more succinct and intuitive way
807
808QEMU binary selection
809~~~~~~~~~~~~~~~~~~~~~
810
811The QEMU binary used for the ``self.vm`` QEMUMachine instance will
812primarily depend on the value of the ``qemu_bin`` parameter.  If it's
813not explicitly set, its default value will be the result of a dynamic
814probe in the same source tree.  A suitable binary will be one that
815targets the architecture matching host machine.
816
817Based on this description, test writers will usually rely on one of
818the following approaches:
819
8201) Set ``qemu_bin``, and use the given binary
821
8222) Do not set ``qemu_bin``, and use a QEMU binary named like
823   "${arch}-softmmu/qemu-system-${arch}", either in the current
824   working directory, or in the current source tree.
825
826The resulting ``qemu_bin`` value will be preserved in the
827``avocado_qemu.Test`` as an attribute with the same name.
828

Attribute reference
-------------------

Besides the attributes and methods that are part of the base
``avocado.Test`` class, the following attributes are available on any
``avocado_qemu.Test`` instance.

vm
~~

A QEMUMachine instance, initially configured according to the given
``qemu_bin`` parameter.

arch
~~~~

The architecture can be used on different levels of the stack, e.g. by
the framework or by the test itself.  At the framework level, it will
currently influence the selection of a QEMU binary (when one is not
explicitly given).

Tests are also free to use this attribute value, for their own needs.
A test may, for instance, use the same value when selecting the
architecture of a kernel or disk image to boot a VM with.

The ``arch`` attribute will be set to the test parameter of the same
name.  If one is not given explicitly, it will either be set to
``None``, or, if the test is tagged with one (and only one)
``:avocado: tags=arch:VALUE`` tag, it will be set to ``VALUE``.
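
The tag convention above can be illustrated with a toy parser.  This
is not Avocado's actual tag-handling code; it only demonstrates the
"one (and only one) value" rule for a given key:

```python
import re

def tag_values(docstring, key):
    # Collect every VALUE from ':avocado: tags=key:VALUE' lines in a
    # test's docstring (illustrative only).
    values = set()
    for line in docstring.splitlines():
        match = re.match(r'\s*:avocado:\s*tags=(\S+)', line)
        if not match:
            continue
        for tag in match.group(1).split(','):
            if tag.startswith(key + ':'):
                values.add(tag.split(':', 1)[1])
    return values

def single_tag_value(docstring, key):
    # Return the value only if exactly one was given, otherwise None.
    values = tag_values(docstring, key)
    return values.pop() if len(values) == 1 else None
```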

machine
~~~~~~~

The machine type that will be set to all QEMUMachine instances created
by the test.

The ``machine`` attribute will be set to the test parameter of the same
name.  If one is not given explicitly, it will either be set to
``None``, or, if the test is tagged with one (and only one)
``:avocado: tags=machine:VALUE`` tag, it will be set to ``VALUE``.

qemu_bin
~~~~~~~~

The preserved value of the ``qemu_bin`` parameter or the result of the
dynamic probe for a QEMU binary in the current working directory or
source tree.

Parameter reference
-------------------

To understand how Avocado parameters are accessed by tests, and how
they can be passed to tests, please refer to::

  http://avocado-framework.readthedocs.io/en/latest/WritingTests.html#accessing-test-parameters

Parameter values can be easily seen in the log files, and will look
like the following:

.. code::

  PARAMS (key=qemu_bin, path=*, default=x86_64-softmmu/qemu-system-x86_64) => 'x86_64-softmmu/qemu-system-x86_64'
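
If a helper script ever needs to pick such lines apart, a regular
expression along these lines will do; the function name and pattern
are illustrative, not part of Avocado:

```python
import re

PARAMS_RE = re.compile(
    r"PARAMS \(key=(?P<key>[^,]+), path=(?P<path>[^,]+), "
    r"default=(?P<default>[^)]*)\) => '(?P<value>[^']*)'")

def parse_params_line(line):
    # Return a dict with key/path/default/value, or None when the
    # line is not a PARAMS log entry.
    match = PARAMS_RE.search(line)
    return match.groupdict() if match else None
```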

arch
~~~~

The architecture that will influence the selection of a QEMU binary
(when one is not explicitly given).

Tests are also free to use this parameter value, for their own needs.
A test may, for instance, use the same value when selecting the
architecture of a kernel or disk image to boot a VM with.

This parameter has a direct relation with the ``arch`` attribute.  If
not given, it will default to ``None``.

machine
~~~~~~~

The machine type that will be set to all QEMUMachine instances created
by the test.

qemu_bin
~~~~~~~~

The exact QEMU binary to be used by QEMUMachine.

Uninstalling Avocado
--------------------

If you've followed the manual installation instructions above, you can
easily uninstall Avocado.  Start by listing the packages you have
installed::

  pip list --user

And remove any package you want with::

  pip uninstall <package_name>

If you've used ``make check-acceptance``, the Python virtual environment where
Avocado is installed will be cleaned up as part of ``make check-clean``.

Testing with "make check-tcg"
=============================

The check-tcg tests are intended for simple smoke tests of both
linux-user and softmmu TCG functionality. However, to build test
programs for guest targets you need to have cross compilers available.
If your distribution supports cross compilers you can do something as
simple as::

  apt install gcc-aarch64-linux-gnu

The configure script will automatically pick up their presence.
Sometimes compilers have slightly odd names, so you can point
configure at them explicitly by passing the appropriate option for
the architecture in question, for example::

  $(configure) --cross-cc-aarch64=aarch64-cc

There is also a ``--cross-cc-flags-ARCH`` flag in case additional
compiler flags are needed to build for a given target.

If you have the ability to run containers as the user you can also
take advantage of the build system's "Docker" support. It will then use
containers to build any test case for an enabled guest where there is
no system compiler available. See :ref:`docker-ref` for details.

Running subset of tests
-----------------------

You can build the tests for one architecture::

  make build-tcg-tests-$TARGET

And run with::

  make run-tcg-tests-$TARGET

Adding ``V=1`` to the invocation will show the details of how to
invoke QEMU for the test, which is useful for debugging tests.

TCG test dependencies
---------------------

The TCG tests are deliberately very light on dependencies and are
either totally bare with minimal gcc lib support (for softmmu tests)
or just glibc (for linux-user tests). This is because getting a cross
compiler to work with additional libraries can be challenging.

Other TCG Tests
---------------

There are a number of out-of-tree test suites that are used for more
extensive testing of processor features.

KVM Unit Tests
~~~~~~~~~~~~~~

The KVM unit tests are designed to run as a Guest OS under KVM but
there is no reason why they can't exercise the TCG as well. They
provide a minimal OS kernel with hooks for enabling the MMU as well
as reporting test results via a special device::

  https://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git


Linux Test Project
~~~~~~~~~~~~~~~~~~

The LTP is focused on exercising the syscall interface of a Linux
kernel. It checks that syscalls behave as documented and strives to
exercise as many corner cases as possible. It is a useful test suite
to run to exercise QEMU's linux-user code::

  https://linux-test-project.github.io/
