upload_disk.py - CLI Upload Disk
by Jorge Visentini
Hi All.
I'm using version 4.4.4 (latest stable version -
ovirt-node-ng-installer-4.4.4-2020122111.el8.iso)
I tried to use the upload_disk.py script, but I don't think I'm using it
correctly.
When I try to use it, these errors occur:
python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py disk01-SO.qcow2 --disk-format qcow2 --sd-name ISOs
usage: upload_disk.py [-h] -c CONFIG [--debug] [--logfile LOGFILE]
                      [--disk-format {raw,qcow2}] [--disk-sparse]
                      [--enable-backup] --sd-name SD_NAME [--use-proxy]
                      [--max-workers MAX_WORKERS] [--buffer-size BUFFER_SIZE]
                      filename
upload_disk.py: error: the following arguments are required: -c/--config
Using the upload_disk.py help:
python3 upload_disk.py --help
  -c CONFIG, --config CONFIG
                        Use engine connection details from [CONFIG] section in
                        ~/.config/ovirt.conf.
Is this CONFIG the API access configuration used for authentication?
Analyzing the script, I did not find this information.
Does this new version work differently, or am I doing something wrong?
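For reference, the help text above points at an INI-style file read from ~/.config/ovirt.conf. A minimal sketch, assuming a section named [myengine] (the section name, credentials, and CA path below are placeholders, not values from this thread):

    [myengine]
    engine_url = https://engine.example.com
    username = admin@internal
    password = secret
    cafile = /etc/pki/ovirt-engine/ca.pem

With such a section in place, its name would be passed via -c, e.g.:

    python3 upload_disk.py -c myengine --sd-name ISOs --disk-format qcow2 disk01-SO.qcow2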
In the SDK 4.3 version of upload_disk.py I had to edit the script to add
the access information, but it worked:
[root@engineteste01 ~]# python3 upload_disk.py disk01-SO.qcow2
Checking image...
Disk format: qcow2
Disk content type: data
Connecting...
Creating disk...
Creating transfer session...
Uploading image...
Uploaded 20.42%
Uploaded 45.07%
Uploaded 68.89%
Uploaded 94.45%
Uploaded 2.99g in 42.17 seconds (72.61m/s)
Finalizing transfer session...
Upload completed successfully
[root@engineteste01 ~]#
Thank you all!!
--
Regards,
Jorge Visentini
+55 55 98432-9868
Windows 7 vm lost network connection under heavy network load
by Joey Ma
Hi folks,
Happy holidays.
I'm having an urgent problem :smile:.
I've installed oVirt 4.4.2 on CentOS 8.2 and created several Windows 7
VMs for stress testing. I found that heavy network load leaves the
e1000 NIC unable to receive packets; it appears totally blocked. In the
meantime, packet sending works fine.
Only re-enabling the NIC restores the network. Has anyone else had this
problem? Looking forward to your insights. Much appreciated.
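For reference, the re-enable step inside the guest can be scripted from an elevated prompt; a minimal sketch, assuming the default Windows 7 adapter name "Local Area Connection":

    netsh interface set interface "Local Area Connection" admin=disabled
    netsh interface set interface "Local Area Connection" admin=enabled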
Best regards,
Joey
Cannot upgrade cluster to v4.5 (All hosts are CentOS 8.3.2011)
by Gilboa Davara
Hello all,
I'm more or less finished building a new oVirt-on-GlusterFS cluster
with three fairly beefy servers.
The nodes were fully upgraded to CentOS Linux release 8.3.2011 before
they joined the cluster.
Looking at the cluster view in the WebUI, I get an exclamation mark with
the following message: "Upgrade cluster compatibility level".
When I try to upgrade the cluster, 2 of the 3 hosts go into maintenance and
reboot, but once the procedure is complete, the cluster version remains the
same.
Looking at the host vdsm logs, I see that once the engine refreshes their
capabilities, all hosts return 4.2-4.4 and not 4.5.
E.g.
'supportedENGINEs': ['4.2', '4.3', '4.4'], 'clusterLevels': ['4.2', '4.3',
'4.4']
I assume I should be seeing 4.5 after the upgrade, no?
Am I missing something?
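For reference, the values the engine sees can be queried directly on each host; a minimal sketch, assuming the vdsm-client package is installed:

    vdsm-client Host getCapabilities | grep -E 'clusterLevels|supportedENGINEs'

The engine will only offer a cluster level that every host reports in clusterLevels, so if 4.5 is missing there, the host-side vdsm is the limiting factor.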
Thanks,
- Gilboa
Best Practice? Affinity Rules Enforcement Manager or High Availability?
by souvaliotimaria@mail.com
Hello everyone,
Not sure if I should ask this here, as it seems to be a pretty obvious question, but here it is.
What is the best solution for making your VMs boot up automatically on another working host when something goes wrong (a Gluster problem, a non-responsive host, etc.)? Would you enable the Affinity Rules Enforcement Manager and enforce some policies, or would you set the VMs you want as Highly Available?
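For what it's worth, marking a VM highly available can also be done with the Python SDK; a minimal sketch, where the engine URL, credentials, and VM name are placeholders:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Connect to the engine API (placeholder URL and credentials).
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )

    # Find the VM by name (assumes exactly one match) and enable HA on it.
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=myvm')[0]
    vms_service.vm_service(vm.id).update(
        types.Vm(high_availability=types.HighAvailability(enabled=True))
    )

    connection.close()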
Thank you very much for your time!
Best regards,
Maria Souvalioti
Re: Breaking up a oVirt cluster on storage domain boundary.
by Strahil Nikolov
> Can I migrate storage domains, and thus all the VMs within that
> storage domain?
>
> Or will I need to build a new cluster, with new storage domains, and
> migrate the VMs?
Actually, you can create a new cluster and ensure that the storage
domains are accessible by that new cluster.
Then, to migrate, you just need to power off the VM, edit it (change
cluster, network, etc.), and power it up.
It will start on the hosts in the new cluster, and then you just need
to verify that the application is working properly.
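A minimal sketch of those steps with the Python SDK, where the engine URL, credentials, and the VM and cluster names are placeholders:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )

    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=myvm')[0]
    vm_service = vms_service.vm_service(vm.id)

    # Power off, reassign the cluster, then start again. A real script
    # should poll until the VM reports status DOWN before updating.
    vm_service.stop()
    vm_service.update(types.Vm(cluster=types.Cluster(name='newcluster')))
    vm_service.start()

    connection.close()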
Best Regards,
Strahil Nikolov
Breaking up a oVirt cluster on storage domain boundary.
by Matthew.Stier@fujitsu.com
Is it possible to break up an oVirt cluster into multiple clusters?
I have an Oracle Linux Virtualization Manager 4.3.6 cluster (think oVirt 4.3.6) that is hosting four different classes of VMs.
I have acquired some additional hosts, and instead of adding them to my "default" cluster, I want to create three new clusters, and migrate each class of VM to its own cluster.
Each class has its own network, and storage domains.
Can I migrate storage domains, and thus all the VMs within that storage domain?
Or will I need to build a new cluster, with new storage domains, and migrate the VMs?
Most of my VMs are built from templates. I assume that those cloned from templates will not be an issue, but some of my classes are thin provisioned, and I suspect I will have an issue migrating those at the VM level, which is why I want to migrate them at the storage domain level.
Shrink iSCSI Domain
by Vinícius Ferrão
Hello,
Is there any way to reduce the size of an iSCSI storage domain? I can't seem to figure this out myself. It's probably unsupported, and the path would be to create a new iSCSI storage domain with the reduced size, move the virtual disks there, and then delete the old one.
But I would like to confirm whether this is the only way to do it…
In the past I had a requirement, so I created the VM domains with 10 TB; now it's just too much, and I need to use that space on the storage for other activities.
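If it comes to that, the disk moves can be scripted; a minimal sketch with the Python SDK, where the engine URL, credentials, and domain names are placeholders (and which assumes the disks are not attached to running VMs):

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )

    system_service = connection.system_service()

    # Find the old storage domain and list the disks it holds.
    sds_service = system_service.storage_domains_service()
    old_sd = sds_service.list(search='name=old_domain')[0]
    sd_disks = sds_service.storage_domain_service(old_sd.id).disks_service()

    # Move each disk to the new, smaller storage domain.
    disks_service = system_service.disks_service()
    for disk in sd_disks.list():
        disks_service.disk_service(disk.id).move(
            storage_domain=types.StorageDomain(name='new_domain'),
        )

    connection.close()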
Thanks all and happy new year.
"POSIX storage based vm live migration failed"
by Tarun Kushwaha
After upgrading the cluster version to 4.5, VM live migration fails with a storage I/O error and the VM status changes from Running to Paused. With the lower cluster version, the same migration works.