On Thu, Nov 5, 2020 at 12:39 PM Marcin Sobczyk <msobczyk(a)redhat.com> wrote:
On 11/5/20 9:09 AM, Yedidyah Bar David wrote:
> On Wed, Nov 4, 2020 at 9:49 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
>> I want to share useful info from the OST hackathon we had this week.
>>
>> Image transfer must work with real hostnames to allow server
>> certificate verification.
>> Inside the OST environment, the engine and host names are resolvable, but on
>> the host (or VM) running OST, the names are not available.
Do we really need this? Can't we execute those image transfers on the
host VMs instead?
>>
>> This can be fixed by adding the engine and hosts to /etc/hosts like this:
>>
>> $ cat /etc/hosts
>> 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
>> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
>>
>> 192.168.200.2 engine
>> 192.168.200.3 lago-basic-suite-master-host-0
>> 192.168.200.4 lago-basic-suite-master-host-1
Modifying '/etc/hosts' requires root privileges - it will work in mock,
but nowhere else, and IMO it is a bad idea.
> Are these addresses guaranteed to be static?
>
> Where are they defined?
No, they're not, and I think that's a good thing - if we end up assigning them
statically, sooner or later we will stumble upon "this always worked because we
always used the same IP addresses" bugs.
Libvirt runs a 'dnsmasq' instance that the VMs use. The XML definition for DNS
is done by lago [1].
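To make this concrete, the DNS entries can be inspected on the libvirt side;
a rough sketch (the network name here is a guess - check 'virsh net-list' or
'lago status' for the real one - and the addresses are just the ones from the
example above):

$ sudo virsh net-dumpxml lago-basic-suite-master-net
<network>
  ...
  <dns>
    <host ip='192.168.200.2'>
      <hostname>engine</hostname>
    </host>
    <host ip='192.168.200.3'>
      <hostname>lago-basic-suite-master-host-0</hostname>
    </host>
  </dns>
  ...
</network>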
>
>> It would be nice if this was automated by OST. You can get the details using:
>>
>> $ cd src/ovirt-system-tests/deployment-xxx
>> $ lago status
> It would have been even nicer if it was possible/easy to have this working
> dynamically without user intervention.
>
> I thought about and searched for ways to achieve this, but failed to find
> something simple.
>
> Closest options I found, in case someone feels like playing with this:
>
> 1. Use HOSTALIASES. 'man 7 hostname' for details, or e.g.:
>
>
> https://blog.tremily.us/posts/HOSTALIASES/
>
> With this, if the addresses are indeed static but you do not want to have
> them hardcoded in /etc/hosts (say, because you want different ones for
> different runs/needs/whatever), you can add them there hardcoded under some
> longer name, and have a process-specific HOSTALIASES file mapping e.g.
> 'engine' to the engine of this specific run.
'HOSTALIASES' is really awesome, but it doesn't always work.
I found that for machines connected to VPNs, the DNS servers pushed by the VPN
take priority in name resolution, and the 'HOSTALIASES' definitions have no
effect.
Really? Weird. I had never tried HOSTALIASES before (I just tested it
before sending the above, and it worked, also with a VPN up), but I agree
it's not the way to go, especially since I agree with you that we
should not rely on static addresses.
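For reference, the scheme would look roughly like this (the alias file path
and the "longer name" are made up):

$ grep ost-engine-static /etc/hosts
192.168.200.2 ost-engine-static

$ cat ~/.ost-hostaliases
engine ost-engine-static

$ HOSTALIASES=~/.ost-hostaliases python3 \
      /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py -c engine ...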
>
> 2. https://github.com/fritzw/ld-preload-open
>
> With this, you can have a process-specific /etc/resolv.conf, pointing
> this specific process to the internal nameserver inside lago/OST.
> This requires building this small C library. I didn't try it or check
> its code, and I can't find it pre-built in copr (or anywhere).
>
> (
> Along the way, if you like such tricks, found this:
>
>
> https://github.com/gaul/awesome-ld-preload
> )
This sounds really complex.
I do not think it's that complex, but I agree that I'd prefer not to introduce
C sources into OST, if possible. I just can't see an easy way around it.
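Just to make the idea concrete: the preloaded library would redirect
open("/etc/resolv.conf") to a per-run file pointing at the dnsmasq on the OST
network - something like this (the file path and nameserver address are
assumptions, the real address can be taken from 'lago status'; how the
old->new path mapping is configured is described in the project's README):

$ cat ~/ost-resolv.conf
nameserver 192.168.200.1

$ LD_PRELOAD=/path/to/the/built/library.so python3 \
      /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py -c engine ...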
As mentioned before, I would prefer it if things could be done on the host VMs
instead.
Of course they can, but it's not convenient.
You do not have your normal environment there - your IDE, configuration, etc.
You can't run a browser there.
Etc.
IMO OST should be made easy to interact with from your main development machine.
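(For completeness, doing it from a host VM would look roughly like this - a
sketch, assuming 'lago shell' from the deployment directory and that the SDK
and imageio client packages are installable on the host VM; ~/.config/ovirt.conf
and ca.pem would have to be copied there as well:)

$ cd src/ovirt-system-tests/deployment-xxx
$ lago shell lago-basic-suite-master-host-0
# dnf install python3-ovirt-engine-sdk4 ovirt-imageio-client
# python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py \
      -c engine --sd-name nfs --disk-sparse --disk-format raw test.qcow2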
>> OST keeps the deployment directory in the source directory. Be careful if you
>> like to "git clean -dxf' since it will delete all the deployment and
>> you will have to
>> kill the vms manually later.
This is true, but there are reasons behind that - the way mock works
and the libvirt permissions that are needed to operate on the VM images.
>>
>> The next thing we need is the engine CA cert. It can be fetched like this:
>>
>> $ curl -k 'https://engine/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' > ca.pem
>> I would expect OST to do this and put the file in the deployment directory.
We have that already [2].
>>
>> To upload or download images, back up VMs, or use other modern examples from
>> the SDK, you need to have a configuration file like this:
>>
>> $ cat ~/.config/ovirt.conf
>> [engine]
>> engine_url = https://engine
>> username = admin@internal
>> password = 123
>> cafile = ca.pem
>>
>> With this, uploading from the same directory where ca.pem is located will
>> work. If you want it to work from any directory, use an absolute path to the
>> file.
>>
>> I created a test image using qemu-img and qemu-io:
>>
>> $ qemu-img create -f qcow2 test.qcow2 1g
>>
>> To write some data to the test image we can use qemu-io. This writes 64k of
>> data (b"\xf0" * 64 * 1024) to offset 1 MiB.
>>
>> $ qemu-io -f qcow2 -c "write -P 240 1m 64k" test.qcow2
> Never heard about qemu-io. Nice to know. Seems like it does not have a manpage
> in el8, although I can find one elsewhere on the net.
>
>> Since this image contains only 64k of data, uploading it should be instant.
>>
>> The last part we need is the imageio client package:
>>
>> $ dnf install ovirt-imageio-client
>>
>> To upload the image, we need at least one host up and storage domains
>> created. I did not find a way to prepare OST just up to that point, so I
>> simply ran this after run_tests completed. It took about an hour.
>>
>> To upload the image to raw sparse disk we can use:
>>
>> $ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
>> -c engine --sd-name nfs --disk-sparse --disk-format raw test.qcow2
>> [ 0.0 ] Checking image...
>> [ 0.0 ] Image format: qcow2
>> [ 0.0 ] Disk format: raw
>> [ 0.0 ] Disk content type: data
>> [ 0.0 ] Disk provisioned size: 1073741824
>> [ 0.0 ] Disk initial size: 1073741824
>> [ 0.0 ] Disk name: test.raw
>> [ 0.0 ] Disk backup: False
>> [ 0.0 ] Connecting...
>> [ 0.0 ] Creating disk...
>> [ 36.3 ] Disk ID: 26df08cf-3dec-47b9-b776-0e2bc564b6d5
>> [ 36.3 ] Creating image transfer...
>> [ 38.2 ] Transfer ID: de8cfac9-ead2-4304-b18b-a1779d647716
>> [ 38.2 ] Transfer host name: lago-basic-suite-master-host-1
>> [ 38.2 ] Uploading image...
>> [ 100.00% ] 1.00 GiB, 1.79 seconds, 571.50 MiB/s
>> [ 40.0 ] Finalizing image transfer...
>> [ 44.1 ] Upload completed successfully
>>
>> I uploaded this before I added the hosts to /etc/hosts, so the upload
>> was done via the proxy.
>>
>> Yes, it took 36 seconds to create the disk.
>>
>> To download the disk use:
>>
>> $ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
>> -c engine 5ac63c72-6296-46b1-a068-b1039c8ecbd1 download.qcow2
>> [ 0.0 ] Connecting...
>> [ 0.2 ] Creating image transfer...
>> [ 1.6 ] Transfer ID: a99e2a43-8360-4661-81dc-02828a88d586
>> [ 1.6 ] Transfer host name: lago-basic-suite-master-host-1
>> [ 1.6 ] Downloading image...
>> [ 100.00% ] 1.00 GiB, 0.32 seconds, 3.10 GiB/s
>> [ 1.9 ] Finalizing image transfer...
>>
>> We can verify the transfers using checksums. Here we create a checksum
>> of the remote
>> disk:
>>
>> $ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/checksum_disk.py
>> -c engine 26df08cf-3dec-47b9-b776-0e2bc564b6d5
>> {
>> "algorithm": "blake2b",
>> "block_size": 4194304,
>> "checksum":
>> "a79a1efae73484e0218403e6eb715cdf109c8e99c2200265b779369339cf347b"
>> }
>>
>> And checksum of the downloaded image - they should match:
>>
>> $ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/checksum_image.py
>> download.qcow2
>> {
>> "algorithm": "blake2b",
>> "block_size": 4194304,
>> "checksum":
"a79a1efae73484e0218403e6eb715cdf109c8e99c2200265b779369339cf347b"
>> }
>>
>> Same upload to iscsi domain, using qcow2 format:
>>
>> $ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
>> -c engine --sd-name iscsi --disk-sparse --disk-format qcow2 test.qcow2
>> [ 0.0 ] Checking image...
>> [ 0.0 ] Image format: qcow2
>> [ 0.0 ] Disk format: cow
>> [ 0.0 ] Disk content type: data
>> [ 0.0 ] Disk provisioned size: 1073741824
>> [ 0.0 ] Disk initial size: 458752
>> [ 0.0 ] Disk name: test.qcow2
>> [ 0.0 ] Disk backup: False
>> [ 0.0 ] Connecting...
>> [ 0.0 ] Creating disk...
>> [ 27.8 ] Disk ID: e7ef253e-7baa-4d4a-a9b2-1a6b7db13f41
>> [ 27.8 ] Creating image transfer...
>> [ 30.0 ] Transfer ID: 88328857-ac99-4ee1-9618-6b3cd14a7db8
>> [ 30.0 ] Transfer host name: lago-basic-suite-master-host-0
>> [ 30.0 ] Uploading image...
>> [ 100.00% ] 1.00 GiB, 0.31 seconds, 3.28 GiB/s
>> [ 30.3 ] Finalizing image transfer...
>> [ 35.4 ] Upload completed successfully
>>
>> Again, creating the disk is very slow, not sure why. Probably having a storage
>> server on a nested VM is not a good idea.
>>
>> We can compare the checksum with the source image, since checksums are computed
>> from the guest content:
>>
>> [nsoffer@ost ~]$ python3
>> /usr/share/doc/python3-ovirt-engine-sdk4/examples/checksum_disk.py -c
>> engine e7ef253e-7baa-4d4a-a9b2-1a6b7db13f41
>> {
>> "algorithm": "blake2b",
>> "block_size": 4194304,
>> "checksum":
>> "a79a1efae73484e0218403e6eb715cdf109c8e99c2200265b779369339cf347b"
>> }
>>
>> Finally, we can try real images using virt-builder:
>>
>> $ virt-builder fedora-32
>>
>> This will create a new Fedora 32 server image in the current directory. See
>> --help for many useful options to create a different format, set the root
>> password, or install packages.
>>
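For example, something along these lines should work (the package, password
and output name are just placeholders):

$ virt-builder fedora-32 \
      --format qcow2 \
      --root-password password:123456 \
      --install qemu-guest-agent \
      -o fedora-32.qcow2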
>> Uploading this image is much slower:
>>
>> $ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
>> -c engine --sd-name nfs --disk-sparse --disk-format raw fedora-32.img
>> [ 0.0 ] Checking image...
>> [ 0.0 ] Image format: raw
>> [ 0.0 ] Disk format: raw
>> [ 0.0 ] Disk content type: data
>> [ 0.0 ] Disk provisioned size: 6442450944
>> [ 0.0 ] Disk initial size: 6442450944
>> [ 0.0 ] Disk name: fedora-32.raw
>> [ 0.0 ] Disk backup: False
>> [ 0.0 ] Connecting...
>> [ 0.0 ] Creating disk...
>> [ 36.8 ] Disk ID: b17126f3-fa03-4c22-8f59-ef599b64a42e
>> [ 36.8 ] Creating image transfer...
>> [ 38.5 ] Transfer ID: fe82fb86-b87a-4e49-b9cd-f1f4334e7852
>> [ 38.5 ] Transfer host name: lago-basic-suite-master-host-0
>> [ 38.5 ] Uploading image...
>> [ 100.00% ] 6.00 GiB, 99.71 seconds, 61.62 MiB/s
>> [ 138.2 ] Finalizing image transfer...
>> [ 147.8 ] Upload completed successfully
>>
>> At the current state of OST, we should avoid such long tests.
> Did you try to check if it's indeed actually plain IO that's
> slowing disk creation? Perhaps it's something else?
>
>> Using backup_vm.py and other examples should work in the same way.
>>
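(From memory, a full backup with that example is run roughly like this - the
subcommand name and options should be double-checked with --help, and the VM ID
is a placeholder:)

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/backup_vm.py \
      -c engine full <vm-id>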
>> I posted this patch to improve NFS performance, please review:
>>
>> https://gerrit.ovirt.org/c/112067/
I noticed that many of you run OST in a VM, ending up with three layers of VMs.
I know it works, but I got multiple reports of assertion timeouts, and TBH I
just don't see this as a viable way to work with OST - you need a bare-metal
machine for that.
Why?
After all, we also work on a virtualization product/project. If it's
not good enough for ourselves, how do we expect others to use it? :-)
Also, using bare-metal isn't always that easy/comfortable either, even
if you have the hardware.
CI also uses VMs for this, IIUC. Or did we move to containers there?
Perhaps we should invest in making this work well inside a container.
On my bare-metal server an OST basic run takes 30 minutes to complete. This is
something one can work with, but we can do even better.
Thank you for your input, and I hope that we can have more people involved in
OST on a regular basis, not just at once-per-year hackathons. This is a complex
project, but it's really useful.
+1!
Thanks and best regards,
--
Didi