How to set up a (rh)el8 machine for running OST

Hi All,

there are multiple pieces of information floating around on how to set up a machine for running OST. Some of them are outdated (e.g. dealing with el7), and the more recent ones are still a bit messy.

Not long ago, in an email conversation, Milan presented an ansible playbook with the steps necessary to do that. We've picked up the playbook, tweaked it a bit, wrapped it in a convenience shell script, and pushed it into the OST project [1].

This script, along with the playbook, should be our single source of truth and one-stop solution for the job. It's been tested by a couple of people and proved able to set up everything on a bare (rh)el8 machine. If you encounter any problems with the script, please report them on the devel mailing list or directly to me, or simply file a patch. Let's keep it maintained.

Regards, Marcin

[1] https://gerrit.ovirt.org/#/c/111749/


On Tue, Nov 3, 2020 at 6:53 PM Nir Soffer <nsoffer@redhat.com> wrote:
On Tue, Nov 3, 2020 at 3:22 PM Marcin Sobczyk <msobczyk@redhat.com> wrote:
[...]
Awesome, thanks!
So setup_for_ost.sh finished successfully (after more than an hour), but now I see conflicting documentation and comments about how to run test suites and how to clean up after the run.

The docs say: https://ovirt-system-tests.readthedocs.io/en/latest/general/running_tests/in...

./run_suite.sh basic-suite-4.0

But I see other undocumented ways in recent threads:

run_tests

Not sure which tests will run, and:

run_tc basic-suite-master/test-scenarios/001_initialize_engine.py

which seems to run only one test module. This seems useful, but for one module I found this undocumented command:

python -B -m pytest -s -v -x --junit-xml=test.xml ${SUITE}/test-scenarios/name_test_pytest.py

This looks most promising, assuming that I can use -k test_name or -m marker to select only some tests for quick feedback. However, due to the way OST is built, mixing setup and test code so that later tests depend on earlier setup "tests", I don't see how this is going to work with the current suites.

What is the difference between these ways, and which one is the right way?

My plan is to add a new storage suite that will run after some basic setup is done - engine, hosts, and storage are ready. Which test scenarios are needed to reach this state?

Do we have any documentation on how to add a new suite, or is my only reference the network suite?

Nir
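The dependency problem described above can be sketched with a toy test module (the test names below are made up, not actual OST scenarios): tests in a scenario module share state that earlier tests create, so a later test selected in isolation fails.

```python
# Hypothetical scenario module illustrating OST-style ordered tests.
# A module-level dict stands in for the deployed oVirt environment.
state = {}

def test_000_initialize_engine():
    # A setup step disguised as a test: it mutates the shared state.
    state["engine"] = "up"

def test_010_add_host():
    # Depends on test_000 having run first; selecting this test alone
    # (e.g. with "pytest -k test_010") finds no engine and fails.
    assert state.get("engine") == "up"
    state["host"] = "added"
```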

On Tue, Nov 3, 2020 at 8:05 PM Nir Soffer <nsoffer@redhat.com> wrote:
On Tue, Nov 3, 2020 at 6:53 PM Nir Soffer <nsoffer@redhat.com> wrote:
[...]
But I see other undocumented ways in recent threads:
run_tests
Trying the run_test option, from a recent mail:

. lagofy.sh
lago_init /usr/share/ost-images/el8-engine-installed.qcow2 -k /usr/share/ost-images/el8_id_rsa

This fails:

$ . lagofy.sh
Suite basic-suite-master - lago_init /usr/share/ost-images/el8-engine-installed.qcow2 -k /usr/share/ost-images/el8_id_rsa
Add your group to qemu's group: "usermod -a -G qemu nsoffer"

setup_for_ost.sh should handle this, no?

[nsoffer@ost ovirt-system-tests]$ lago_init /usr/share/ost-images/el8-engine-installed.qcow2 -k /usr/share/ost-images/el8_id_rsa
Using images ost-images-el8-host-installed-1-202011021248.x86_64, ost-images-el8-engine-installed-1-202011021248.x86_64
containing ovirt-engine-4.4.4-0.0.master.20201031195930.git8f858d6c01d.el8.noarch vdsm-4.40.35.1-1.el8.x86_64
@ Initialize and populate prefix:
  # Initialize prefix:
    * Create prefix dirs: Success (in 0:00:00)
    * Generate prefix uuid: Success (in 0:00:00)
    * Copying ssh key: Success (in 0:00:00)
    * Tag prefix as initialized: Success (in 0:00:00)
  # Initialize prefix: Success (in 0:00:00)
  # Create disks for VM lago-basic-suite-master-engine:
    * Create disk root: Success (in 0:00:00)
    * Create disk nfs: Success (in 0:00:00)
    * Create disk iscsi: Success (in 0:00:00)
  # Create disks for VM lago-basic-suite-master-engine: Success (in 0:00:00)
  # Create disks for VM lago-basic-suite-master-host-0:
    * Create disk root: Success (in 0:00:00)
  # Create disks for VM lago-basic-suite-master-host-0: Success (in 0:00:00)
  # Create disks for VM lago-basic-suite-master-host-1:
    * Create disk root: Success (in 0:00:00)
  # Create disks for VM lago-basic-suite-master-host-1: Success (in 0:00:00)
  # Copying any deploy scripts: Success (in 0:00:00)
  # calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  # Missing current link, setting it to default
@ Initialize and populate prefix: ERROR (in 0:00:01)
Error occured, aborting
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/lago/cmd.py", line 987, in main
    cli_plugins[args.verb].do_run(args)
  File "/usr/lib/python3.6/site-packages/lago/plugins/cli.py", line 186, in do_run
    self._do_run(**vars(args))
  File "/usr/lib/python3.6/site-packages/lago/cmd.py", line 207, in do_init
    ssh_key=ssh_key,
  File "/usr/lib/python3.6/site-packages/lago/prefix.py", line 1143, in virt_conf_from_stream
    ssh_key=ssh_key
  File "/usr/lib/python3.6/site-packages/lago/prefix.py", line 1269, in virt_conf
    net_specs=conf['nets'],
  File "/usr/lib/python3.6/site-packages/lago/virt.py", line 101, in __init__
    self._nets[name] = self._create_net(spec, compat)
  File "/usr/lib/python3.6/site-packages/lago/virt.py", line 113, in _create_net
    return cls(self, net_spec, compat=compat)
  File "/usr/lib/python3.6/site-packages/lago/providers/libvirt/network.py", line 44, in __init__
    name=env.uuid,
  File "/usr/lib/python3.6/site-packages/lago/providers/libvirt/utils.py", line 96, in get_libvirt_connection
    return libvirt.openAuth(libvirt_url, auth)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 104, in openAuth
    if ret is None: raise libvirtError('virConnectOpenAuth() failed')
libvirt.libvirtError: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory

$ systemctl status libvirtd
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:libvirtd(8)
           https://libvirt.org

[nsoffer@ost ovirt-system-tests]$ systemctl status libvirtd.socket
● libvirtd.socket - Libvirt local socket
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.socket; enabled; vendor preset: disabled)
   Active: inactive (dead)
   Listen: /run/libvirt/libvirt-sock (Stream)

[nsoffer@ost ovirt-system-tests]$ systemctl status libvirtd-ro.socket
● libvirtd-ro.socket - Libvirt local read-only socket
   Loaded: loaded (/usr/lib/systemd/system/libvirtd-ro.socket; enabled; vendor preset: disabled)
   Active: inactive (dead)
   Listen: /run/libvirt/libvirt-sock-ro (Stream)

[nsoffer@ost ovirt-system-tests]$ systemctl status libvirtd-admin.socket
● libvirtd-admin.socket - Libvirt admin socket
   Loaded: loaded (/usr/lib/systemd/system/libvirtd-admin.socket; disabled; vendor preset: disabled)
   Active: inactive (dead)
   Listen: /run/libvirt/libvirt-admin-sock (Stream)

Another missing setup in setup_for_ost.sh?

After adding myself to the qemu group and starting the libvirtd sockets, lago_init seems to work.
time run_tests
[...]

On 11/3/20 7:21 PM, Nir Soffer wrote:
[...]

$ . lagofy.sh
Suite basic-suite-master - lago_init /usr/share/ost-images/el8-engine-installed.qcow2 -k /usr/share/ost-images/el8_id_rsa
Add your group to qemu's group: "usermod -a -G qemu nsoffer"
setup_for_ost.sh should handle this, no?

It does: https://github.com/oVirt/ovirt-system-tests/blob/e1c1873d1e7de3f136e46b6355b... Maybe you didn't relog, so the group inclusion isn't effective yet? But I agree there should be a message printed to the user if relogging is necessary - I will write a patch for it.
[...]
Another missing setup in setup_for_ost.sh?
Never encountered it myself, but I can also try enabling the 'libvirtd.socket' service in the playbook.
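For the record, the manual workaround amounts to something like the following (a sketch of what the playbook change could automate; the socket list is taken from the systemctl output above):

```shell
# Possible manual workaround - what the playbook could automate.
sudo systemctl enable --now libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket
# For the qemu group issue - remember to log out and back in afterwards:
sudo usermod -a -G qemu "$USER"
```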
After adding myself to qemu group and starting libvirtd sockets lago_init seems to work.
time run_tests

Not sure which tests will run, and:
run_tc basic-suite-master/test-scenarios/001_initialize_engine.py
Which seems to run only one test module.

There's also 'run_tests', which runs all the tests for you.
This seems useful but for one module I found this undocumented command:
python -B -m pytest -s -v -x --junit-xml=test.xml ${SUITE}/test-scenarios/name_test_pytest.py
This looks most promising, assuming that I can use -k test_name or -m marker to select only some tests for quick feedback. However due to the way OST is built, mixing setup and test code, when later tests depend on earlier setup "tests" I don't see how this is going to work with current suites.

That is true - the way OST works is that tests depend on previous tests. It's not nice, but this is how OST has worked from the beginning - it's not feasible to set up a whole oVirt deployment for each test case. Changing that would require a complete redesign of the project.

'run_tc' is still useful though. You can do 'run_tests' to run OST up to some point, i.e. force a failure with 'assert False', and work on your new test case with the whole setup hanging at some stage. There is no way currently to freeze and restore the state of the environment though, so if you apply some intrusive changes you may need to rerun the suite from the beginning.

What is the difference between the ways, and which one is the right way?

'run_suite.sh' is used by CI and it brings all the legacy and the burden possible (el7/py2/keeping all the suites happy etc.). 'lagofy.sh', OTOH, was written by Michal recently and it's more lightweight and user-friendly. There is no right way - work with the one that suits you better.

My plan is to add a new storage suite that will run after some basic setup was done - engine, hosts, and storage are ready. Which test scenarios are needed to reach this state?

Probably up until '002_bootstrap' finishes, but you have to find a point which satisfies you by yourself. If you think that there's too much stuff in that module, please make a suggestion to split it into separate ones.
Do we have any documentation on how to add a new suite? or my only reference is the network suite?
Unfortunately not, but feel free to reach me if you have any questions.
Nir

On Wed, Nov 4, 2020 at 12:18 PM Marcin Sobczyk <msobczyk@redhat.com> wrote:
[...]
This looks most promising, assuming that I can use -k test_name or -m marker to select only some tests for quick feedback. However due to the way OST is built, mixing setup and test code, when later tests depend on earlier setup "tests" I don't see how this is going to work with current suites.
Perhaps what you want, some day, is for the individual tests to have make-style dependencies? So you'd issue just a single test, and OST would only run the bare minimum needed for it. (Also, mind you, the integration team is still not perfect - we also have bugs - so "mixing setup and test code" is not clear-cut: setup code is also test code, as it runs code that we want to test, and which sometimes fails.)
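The make-style idea could look roughly like this (all names here are hypothetical - OST has no such mechanism today): each scenario declares its prerequisites, and a tiny resolver computes the minimal ordered chain to run.

```python
# Hypothetical dependency table: scenario -> list of prerequisite scenarios.
DEPS = {
    "initialize_engine": [],
    "bootstrap": ["initialize_engine"],
    "storage_tests": ["bootstrap"],
}

def resolve(target, deps, seen=None):
    """Return the minimal run order for `target`, dependencies first."""
    if seen is None:
        seen = []
    for dep in deps[target]:
        resolve(dep, deps, seen)
    if target not in seen:
        seen.append(target)
    return seen
```

Asking for the storage scenario would then run only its prerequisite chain instead of the whole suite.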
That is true - the way OST works is that tests are dependent on previous tests. It's not nice, but this is how OST worked from the beginning - it's not feasible to set up a whole oVirt solution for each test case. Changing that requires a complete redesign of the project.
'run_tc' is still useful though. You can do 'run_tests' to run OST up to some point, i.e. force a failure with 'assert False', and work on your new test case with the whole setup hanging at some stage. There is no way currently to freeze and restore the state of the environment though, so if you apply some intrusive changes you may need to rerun the suite from the beginning.
There used to be 'lago snapshot', no? I think I even used it at some point.
[...]
Do we have any documentation on how to add a new suite? or my only reference is the network suite?
Unfortunately not, but feel free to reach me if you have any questions.
Thanks! (Still on my TODO list to try this - hopefully soon.)

-- Didi

On Wed, Nov 4, 2020 at 12:18 PM Marcin Sobczyk <msobczyk@redhat.com> wrote:
On 11/3/20 7:21 PM, Nir Soffer wrote:
On Tue, Nov 3, 2020 at 8:05 PM Nir Soffer <nsoffer@redhat.com> wrote:
On Tue, Nov 3, 2020 at 6:53 PM Nir Soffer <nsoffer@redhat.com> wrote:
On Tue, Nov 3, 2020 at 3:22 PM Marcin Sobczyk <msobczyk@redhat.com> wrote:
Hi All,
there are multiple pieces of information floating around on how to set up a machine for running OST. Some of them outdated (like dealing with el7), some of them more recent, but still a bit messy.
Not long ago, in some email conversation, Milan presented an ansible playbook that provided the steps necessary to do that. We've picked up the playbook, tweaked it a bit, made a convenience shell script wrapper that runs it, and pushed that into OST project [1].
This script, along with the playbook, should be our single-source-of-truth, one-stop solution for the job. It's been tested by a couple of persons and proved to be able to set up everything on a bare (rh)el8 machine. If you encounter any problems with the script please either report it on the devel mailing list, directly to me, or simply file a patch. Let's keep it maintained. Awesome, thanks! So setup_for_ost.sh finished successfully (after more than an hour), but now I see conflicting documentation and comments about how to run test suites and how to cleanup after the run.
The docs say: https://ovirt-system-tests.readthedocs.io/en/latest/general/running_tests/in...
./run_suite.sh basic-suite-4.0
But I see other undocumented ways in recent threads:
run_tests Trying the run_test option, from recent Mail:
. lagofy.sh lago_init /usr/share/ost-images/el8-engine-installed.qcow2 -k /usr/share/ost-images/el8_id_rsa This fails:
$ . lagofy.sh Suite basic-suite-master - lago_init /usr/share/ost-images/el8-engine-installed.qcow2 -k /usr/share/ost-images/el8_id_rsa Add your group to qemu's group: "usermod -a -G qemu nsoffer"
setup_for_ost.sh should handle this, no? It does: https://github.com/oVirt/ovirt-system-tests/blob/e1c1873d1e7de3f136e46b6355b... Maybe you didn't relog so the group inclusion would be effective? But I agree there should be a message printed to the user if relogging is necessary - I will write a patch for it.
[nsoffer@ost ovirt-system-tests]$ lago_init /usr/share/ost-images/el8-engine-installed.qcow2 -k /usr/share/ost-images/el8_id_rsa Using images ost-images-el8-host-installed-1-202011021248.x86_64, ost-images-el8-engine-installed-1-202011021248.x86_64 containing ovirt-engine-4.4.4-0.0.master.20201031195930.git8f858d6c01d.el8.noarch vdsm-4.40.35.1-1.el8.x86_64 @ Initialize and populate prefix: # Initialize prefix: * Create prefix dirs: * Create prefix dirs: Success (in 0:00:00) * Generate prefix uuid: * Generate prefix uuid: Success (in 0:00:00) * Copying ssh key: * Copying ssh key: Success (in 0:00:00) * Tag prefix as initialized: * Tag prefix as initialized: Success (in 0:00:00) # Initialize prefix: Success (in 0:00:00) # Create disks for VM lago-basic-suite-master-engine: * Create disk root: * Create disk root: Success (in 0:00:00) * Create disk nfs: * Create disk nfs: Success (in 0:00:00) * Create disk iscsi: * Create disk iscsi: Success (in 0:00:00) # Create disks for VM lago-basic-suite-master-engine: Success (in 0:00:00) # Create disks for VM lago-basic-suite-master-host-0: * Create disk root: * Create disk root: Success (in 0:00:00) # Create disks for VM lago-basic-suite-master-host-0: Success (in 0:00:00) # Create disks for VM lago-basic-suite-master-host-1: * Create disk root: * Create disk root: Success (in 0:00:00) # Create disks for VM lago-basic-suite-master-host-1: Success (in 0:00:00) # Copying any deploy scripts: # Copying any deploy scripts: Success (in 0:00:00) # calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details. 
  # Missing current link, setting it to default
@ Initialize and populate prefix: ERROR (in 0:00:01)
Error occured, aborting
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/lago/cmd.py", line 987, in main
    cli_plugins[args.verb].do_run(args)
  File "/usr/lib/python3.6/site-packages/lago/plugins/cli.py", line 186, in do_run
    self._do_run(**vars(args))
  File "/usr/lib/python3.6/site-packages/lago/cmd.py", line 207, in do_init
    ssh_key=ssh_key,
  File "/usr/lib/python3.6/site-packages/lago/prefix.py", line 1143, in virt_conf_from_stream
    ssh_key=ssh_key
  File "/usr/lib/python3.6/site-packages/lago/prefix.py", line 1269, in virt_conf
    net_specs=conf['nets'],
  File "/usr/lib/python3.6/site-packages/lago/virt.py", line 101, in __init__
    self._nets[name] = self._create_net(spec, compat)
  File "/usr/lib/python3.6/site-packages/lago/virt.py", line 113, in _create_net
    return cls(self, net_spec, compat=compat)
  File "/usr/lib/python3.6/site-packages/lago/providers/libvirt/network.py", line 44, in __init__
    name=env.uuid,
  File "/usr/lib/python3.6/site-packages/lago/providers/libvirt/utils.py", line 96, in get_libvirt_connection
    return libvirt.openAuth(libvirt_url, auth)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 104, in openAuth
    if ret is None:raise libvirtError('virConnectOpenAuth() failed')
libvirt.libvirtError: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
$ systemctl status libvirtd
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:libvirtd(8)
           https://libvirt.org
[nsoffer@ost ovirt-system-tests]$ systemctl status libvirtd.socket
● libvirtd.socket - Libvirt local socket
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.socket; enabled; vendor preset: disabled)
   Active: inactive (dead)
   Listen: /run/libvirt/libvirt-sock (Stream)
[nsoffer@ost ovirt-system-tests]$ systemctl status libvirtd-ro.socket
● libvirtd-ro.socket - Libvirt local read-only socket
   Loaded: loaded (/usr/lib/systemd/system/libvirtd-ro.socket; enabled; vendor preset: disabled)
   Active: inactive (dead)
   Listen: /run/libvirt/libvirt-sock-ro (Stream)
[nsoffer@ost ovirt-system-tests]$ systemctl status libvirtd-admin.socket
● libvirtd-admin.socket - Libvirt admin socket
   Loaded: loaded (/usr/lib/systemd/system/libvirtd-admin.socket; disabled; vendor preset: disabled)
   Active: inactive (dead)
   Listen: /run/libvirt/libvirt-admin-sock (Stream)
Another missing setup in setup_for_ost.sh?

Never encountered it myself, but I can also try enabling the 'libvirtd.socket' service in the playbook.
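If the playbook grows that step, it would presumably boil down to something like this (assuming the socket-activated libvirtd split present on el8; the unit names are taken from the systemctl output above):

```shell
# Enable socket activation for libvirtd so the first client connection
# (e.g. lago_init opening /var/run/libvirt/libvirt-sock) starts the daemon
# on demand, instead of failing with "No such file or directory".
sudo systemctl enable --now libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket
```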
After adding myself to the qemu group and starting the libvirtd sockets, lago_init seems to work.
$ time run_tests

Not sure which tests will run, and:
run_tc basic-suite-master/test-scenarios/001_initialize_engine.py
Which seems to run only one test module.

There's also 'run_tests', which runs all the tests for you.
This seems useful but for one module I found this undocumented command:
python -B -m pytest -s -v -x --junit-xml=test.xml ${SUITE}/test-scenarios/name_test_pytest.py
This looks most promising, assuming that I can use -k test_name or -m marker to select only some tests for quick feedback. However, due to the way OST is built - mixing setup and test code, with later tests depending on earlier setup "tests" - I don't see how this is going to work with the current suites.
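For illustration, name-based selection would look like this (the module path matches ones mentioned in this thread, but the -k pattern is made up, and the inter-test dependencies mean a test selected this way may fail without the earlier setup steps):

```shell
# Illustrative only: run a subset of one OST test module by keyword.
# The -k expression below is a hypothetical test name, not a real OST test.
python -B -m pytest -s -v -x --junit-xml=test.xml \
    -k "storage" \
    basic-suite-master/test-scenarios/002_bootstrap.py
```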
On 11/4/20 11:29 AM, Yedidyah Bar David wrote:
Perhaps what you want, some day, is for the individual tests to have make-style dependencies? So you'll issue just a single test, and OST will only run the bare minimum for running it.

Yeah, I had the same idea. It's not easy to implement it though. 'pytest' has a "tests are independent" design, so we would need to build something on top of that (or try to invent our own test framework, which is a very bad idea). But even with a dependency-resolving solution, there are tests that set something up just to bring it down in a moment (by design), so we'd probably need some kind of "provides" and "tears down" markers. Then you have the fact that some things take a lot of time and we do other stuff in between, while waiting - dependency resolving could force things to happen linearly and the run times could skyrocket... It's a complex subject that requires a serious think-through.
(Also, mind you, the integration team is still not perfect - we also have bugs - so "mixing setup and test code" is not a clear cut - setup code is also test code, as it runs code that we want to test, and which sometimes fails).
Exactly - setting up oVirt is a test itself for some parties.
That is true - the way OST works is that tests depend on previous tests. It's not nice, but this is how OST has worked from the beginning - it's not feasible to set up a whole oVirt deployment for each test case. Changing that requires a complete redesign of the project.
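As an illustration of how the "provides"/"requires" markers mentioned above might be resolved - purely a sketch, with made-up test and capability names; nothing like this exists in OST or pytest today:

```python
# Sketch of a make-style dependency resolver for OST-like test modules.
# All test names and "capabilities" below are hypothetical.
from graphlib import TopologicalSorter  # Python 3.9+

TESTS = {
    # name: (requires, provides)
    "initialize_engine": (set(), {"engine"}),
    "add_hosts": ({"engine"}, {"hosts"}),
    "add_storage": ({"engine", "hosts"}, {"storage"}),
    "hotplug_disk": ({"storage"}, set()),
}

def plan(target):
    """Return the minimal ordered list of tests needed to run `target`."""
    # map each capability to the test that provides it
    providers = {cap: name for name, (_, provs) in TESTS.items() for cap in provs}
    graph, todo = {}, [target]
    while todo:
        name = todo.pop()
        if name in graph:
            continue
        deps = {providers[cap] for cap in TESTS[name][0]}
        graph[name] = deps
        todo.extend(deps)
    return list(TopologicalSorter(graph).static_order())

print(plan("hotplug_disk"))
# -> ['initialize_engine', 'add_hosts', 'add_storage', 'hotplug_disk']
```

This only covers the "bare minimum to run one test" part; the "tears down" markers and timing concerns discussed in the thread would need much more machinery.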
'run_tc' is still useful though. You can do 'run_tests' to run OST up to some point, i.e. force a failure with 'assert False', and work on your new test case with the whole setup hanging at some stage. There is currently no way to freeze and restore the state of the environment though, so if you apply some intrusive changes you may need to rerun the suite from the beginning.

There used to be 'lago snapshot', no? I think I even used it at some point.
Never heard of it, but indeed there is something like that! I'll try it out.
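If it still works, the flow would presumably look like this - hedged sketch only; the verb names are as in older lago releases, so check `lago --help` on your version before relying on them:

```shell
# Assumed lago snapshot/revert usage; "my-checkpoint" is an arbitrary name.
cd "$PREFIX_PATH"              # the lago prefix created by lago_init (assumed variable)
lago snapshot my-checkpoint    # freeze the current state of all suite VMs
# ...apply intrusive changes, run experimental tests...
lago revert my-checkpoint      # roll the environment back instead of rerunning the suite
```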
What is the difference between the ways, and which one is the right way?

'run_suite.sh' is used by CI and carries all the possible legacy and burden (el7/py2/keeping all the suites happy, etc.). 'lagofy.sh', OTOH, was written by Michal recently and is more lightweight and user-friendly. There is no right way - work with whichever suits you better.
My plan is to add a new storage suite that will run after some basic setup was done - engine, hosts, and storage are ready. Which test scenarios are needed to reach this state?

Probably up until '002_bootstrap' finishes, but you have to find the exact point that satisfies your needs yourself. If you think there's too much stuff in that module, please make a suggestion to split it into separate ones.
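Putting the pieces from this thread together, reaching that state with lagofy would look roughly like this (image paths and module names as used earlier in the thread; where exactly to stop is a judgment call):

```shell
# Bring the environment up to "engine + hosts + storage ready", then stop.
. lagofy.sh
lago_init /usr/share/ost-images/el8-engine-installed.qcow2 -k /usr/share/ost-images/el8_id_rsa
run_tc basic-suite-master/test-scenarios/001_initialize_engine.py
run_tc basic-suite-master/test-scenarios/002_bootstrap.py
# environment is now left running; new storage tests can be developed against it
```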
Do we have any documentation on how to add a new suite? Or is my only reference the network suite?

Unfortunately not, but feel free to reach out to me if you have any questions.
Thanks!
(Still in my TODO list to try this - hopefully soon).

Marcin Sobczyk <msobczyk@redhat.com> writes:
[...] It's a complex subject that requires a serious think-through.
Actually I was once thinking about introducing test dependencies in order to run independent tests in parallel and to speed up OST runs this way. The idea was that OST just waits on something at many places and it could run other tests in the meantime (we do some test interleaving in extreme cases but it's suboptimal and difficult to maintain). When arranging some things manually, I could achieve a significant speedup. But the problem is, of course, how to make an automated dependency management and handle all the possible situations and corner cases. It would be quite a lot of work, I think.

On 11/5/20 11:30 AM, Milan Zamazal wrote:
Actually I was once thinking about introducing test dependencies in order to run independent tests in parallel and to speed up OST runs this way. The idea was that OST just waits on something at many places and it could run other tests in the meantime (we do some test interleaving in extreme cases but it's suboptimal and difficult to maintain).

Yeah, I think I remember you did that during one of OST's hackathons.

When arranging some things manually, I could achieve a significant speedup. But the problem is, of course, how to make an automated dependency management and handle all the possible situations and corner cases. It would be quite a lot of work, I think.
Exactly. I.e. I can see there's [1], but of course that will work only on py3. The dependency management is something we'd have to implement and maintain on our own probably. Then of course we'd be introducing the test repeatability problem, since ordering of things for different runs might be different, which in current state of OST is something I'd like to avoid. [1] https://pypi.org/project/pytest-asyncio/

Marcin Sobczyk <msobczyk@redhat.com> writes:
When arranging some things manually, I could achieve a significant speedup. But the problem is, of course, how to make an automated dependency management and handle all the possible situations and corner cases. It would be quite a lot of work, I think.
Exactly. I.e. I can see there's [1], but of course that will work only on py3.
py3 is the least problem.
The dependency management is something we'd have to implement and maintain on our own probably.
Yes, this is the hard part.
Then of course we'd be introducing the test repeatability problem, since ordering of things for different runs might be different, which in current state of OST is something I'd like to avoid.
It should be easy to have a switch between deterministic and non-deterministic ordering. Then one can use the fast, dynamic ordering for running tests more quickly and the suboptimal but deterministic ordering can be used for repeatability (on CI etc.). So this is not a real problem.
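A toy version of that switch - pure illustration, not OST code; the graph and names are made up:

```python
# Schedule a {test: set(of prerequisite tests)} graph either deterministically
# (ready tests sorted by name, reproducible for CI) or greedily (whatever is
# ready runs in arrival order - a stand-in for the fast, dynamic ordering).
def schedule(graph, deterministic=True):
    done, order = set(), []
    pending = dict(graph)
    while pending:
        ready = [t for t, deps in pending.items() if deps <= done]
        if not ready:
            raise ValueError("dependency cycle")
        if deterministic:
            ready.sort()  # stable, repeatable order
        # a real runner would launch all of `ready` in parallel here;
        # this sketch just records them batch by batch
        for t in ready:
            order.append(t)
            done.add(t)
            del pending[t]
    return order
```

The point is that both modes consume the same dependency graph, so repeatable CI runs and fast local runs need not exclude each other.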

On 6 Nov 2020, at 11:29, Milan Zamazal <mzamazal@redhat.com> wrote:
It should be easy to have a switch between deterministic and non-deterministic ordering. Then one can use the fast, dynamic ordering for running tests more quickly and the suboptimal but deterministic ordering can be used for repeatability (on CI etc.). So this is not a real problem.
Why do you think it's going to be significantly faster? I do not see much space for increasing it, at least not with the current set of tests. Actual tests that can run in parallel take cca 10 minutes. There's the initial install, backup/restore when you can't run anything, storage operations (if you try to parallelize them they only run slower (and fail)). You're not going to gain much...
Devel mailing list -- devel@ovirt.org To unsubscribe send an email to devel-leave@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/message/LOQG7PATUE2OXC...

Michal Skrivanek <michal.skrivanek@redhat.com> writes:
Why do you think it’s going to be significantly faster? I do not see much space for increasing it, at least not with the current set of tests.
IIRC at the time I performed my experiments I could gain about 4 minutes from about 20-25 minutes of the non-repetitive part just by manually running a couple of things in parallel. And there were other obvious tasks that were waiting unnecessarily for other operations to finish. But this was a long time ago and things could indeed change significantly since then.
Actual tests that can run in parallel take cca 10 minutes. There's the initial install, backup/restore when you can't run anything, storage operations (if you try to parallelize them they only run slower (and fail)). You're not going to gain much...
If all we can get is a couple of minutes at best from 30-60 minutes then it's irrelevant. At the same time, you provoke my curiosity about what OST is actually doing all the time. ;-)

Nir Soffer <nsoffer@redhat.com> writes:
So setup_for_ost.sh finished successfully (after more than an hour),
The long part is probably downloading OST images if you do it over a slow line. I don't think there is anything else there that takes a very long time to execute.

Would you consider presenting the flow of setting up the whole thing and running a test, starting with a minimal CentOS 8 host? I would be happy to get it premiered on the oVirt YouTube channel and kept there for future reference. This will enable more people willing to contribute to test their changes in local OST.

--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA
sbonazzo@redhat.com
Red Hat respects your work life balance. Therefore there is no need to answer this email out of your office hours.

Hi,

On 11/5/20 6:16 PM, Sandro Bonazzola wrote:
Would you consider presenting the flow of setting up the whole thing and running a test starting with a minimal CentOS 8 host? I would be happy to get it premiered on the oVirt YouTube channel and kept there for future reference. This will enable more people willing to contribute to test their changes in local OST.

I'd be very happy to, but I don't think it's the right time yet. While the setup process should be smooth now, we have two competing ways of running suites. One is the 'run_suite.sh' script, which is bloated and which we're trying to move away from somehow; the other is 'lagofy.sh', which is much more modern and lightweight, but still needs some polishing. I would like to spend some more time on 'lagofy.sh' to make the user experience better and only then make the tutorial.
Regards, Marcin
participants (6)
- Marcin Sobczyk
- Michal Skrivanek
- Milan Zamazal
- Nir Soffer
- Sandro Bonazzola
- Yedidyah Bar David