Error Java SDK Issue??
by Geschwentner, Patrick
Dear Ladies and Gentlemen!
I am currently working with the java-sdk and I encountered a problem.
When I try to retrieve the disk details, I get an error on the following line:
Disk currDisk = ovirtConnection.followLink(diskAttachment.disk());
The getResponse looks quite OK (I inspected it and it looks fine).
Error:
wrong number of arguments
The code is quite similar to what you published on GitHub (https://github.com/oVirt/ovirt-engine-sdk-java/blob/master/sdk/src/test/j... ).
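For context, the flow I am trying is the standard one - sketched here with the Python SDK's equivalent follow_link (an untested sketch with placeholder connection details), since the pattern is the same in both SDKs:

import ovirtsdk4 as sdk

# Placeholder connection details - adjust for your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]

# follow_link fetches the object behind a link returned by the server,
# like followLink in the Java SDK.
for attachment in connection.follow_link(vm.disk_attachments):
    disk = connection.follow_link(attachment.disk)
    print(disk.name, disk.provisioned_size)

connection.close()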
Can you confirm the defect?
Best regards
Patrick
3 weeks
Re: Problem with mixing all the changes
by Nir Soffer
On Wed, May 13, 2020 at 12:19 PM FMGarcia <francisco.garcia(a)wbsgo.com> wrote:
Hi Fran, I'm moving the discussion to the devel mailing list, where it belongs.
> In https://gerrit.ovirt.org/#/c/107082/ we have "several problems" in deciding on this patch:
>
> As things stand (current version on GitHub), the two scripts ('download_disk_snapshot.py' and 'upload_disk_snapshot.py') do not work together:
>
> 'download_disk_snapshot.py' only downloads the volumes of a disk.
> 'upload_disk_snapshot.py' requires: the virtual machine configuration ('.ovf'), a single disk to upload in the path './disks/xxxx', and a manual action to attach the disk to the VM.
>
> So, if you want the two scripts to work together, I think we should change 'download_disk_snapshot.py' before 'upload_disk_snapshot.py'. If not, you should edit 'upload_disk_snapshot.py' to add a variable 'vm_id' (like the variable 'sd_name' in that script) and attach the uploaded disk.
I agree. It would be nice if we could do:
$ mkdir -p backups/vm-id
$ download_disk_snapshots.py --backup-dir backups/vm-id ...
$ upload_disk_snapshots.py --backup-dir backups/vm-id ...
download_disk_snapshots.py would download the VM OVF and all disks.
upload_disk_snapshots.py would take the output of download_disk_snapshots.py and create a new VM.
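(The shared command-line interface could be as simple as this untested argparse sketch - the names just follow the proposal above:)

import argparse

parser = argparse.ArgumentParser(description='Snapshot-based VM backup/restore')
parser.add_argument('--backup-dir', required=True,
                    help='Directory holding the VM OVF and the disk images')
parser.add_argument('vm_id', help='ID of the VM to back up or restore')
args = parser.parse_args()

# Both scripts would then read/write everything under args.backup_dir.
print(args.backup_dir, args.vm_id)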
> I suppose the best thing is to abandon the Gerrit change, first propose what you want for 'download_disk_snapshot.py' and 'upload_disk_snapshot.py', and then act accordingly (several patches). Do you agree?
This is a bigger change that can take more time. I think we had better fix the issues in the current scripts - the first one is the missing attach-disk step that you fix in your patch.
Since you posted this fix together with a lot of other unrelated fixes (some wrong or unneeded), we cannot merge it. This is another reason to post minimal patches that each do one fix.
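For reference, the attach step itself is small with the Python SDK. Something like this untested sketch should do (vm_id and disk_id are placeholders for the real ids):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

vm_id = 'VM-UUID'      # placeholder: the VM to attach to
disk_id = 'DISK-UUID'  # placeholder: the uploaded disk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

# Attach the existing disk to the VM and activate it.
attachments_service = (
    connection.system_service()
    .vms_service()
    .vm_service(vm_id)
    .disk_attachments_service()
)
attachments_service.add(
    types.DiskAttachment(
        disk=types.Disk(id=disk_id),
        interface=types.DiskInterface.VIRTIO_SCSI,
        bootable=False,
        active=True,
    )
)

connection.close()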
> I'm only truly interested in the open bug with block domains and volumes larger than 1 GB: https://bugzilla.redhat.com/show_bug.cgi?id=1707707. I made these changes to help a little, since you would be helping me by solving the bug. I don't code in Python, I code in Java using the Java SDK, and the bug is a major limitation in my software, so I want this bug (1 year old) resolved. =( I hope you understand. :)
Sure, I understand.
If you don't have time to work on this, some other developer can take over the patch.
The bug should be fixed by:
https://gerrit.ovirt.org/c/108991/
It would be nice if you could test this. I started a build here:
https://jenkins.ovirt.org/job/ovirt-engine_standard-check-patch/5867/
When the build is ready, you will be able to install the engine from this build by adding a yum repo with the baseurl:
https://jenkins.ovirt.org/job/ovirt-engine_standard-check-patch/5867/arti...
Note that this requires CentOS 8.1. If you want to test on CentOS 7, you need to wait until the fix is backported to 4.3 - or, since you like Java, maybe port it yourself?
Note also that we now have much more advanced backup and restore options:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/backup...
Here is an example run I did yesterday. I started with a full backup of a running VM:
$ ./backup_vm.py full \
    --engine-url https://engine3/ \
    --username admin@internal \
    --password-file /home/nsoffer/.config/ovirt/engine3/password \
    --cafile /home/nsoffer/Downloads/certs/engine3.pem \
    --backup-dir /home/nsoffer/tmp/backups/test-nfs \
    b5732b5c-37ee-4c66-b77e-bda5d37a10fe
[ 0.0 ] Starting full backup for VM b5732b5c-37ee-4c66-b77e-bda5d37a10fe
[ 1.5 ] Waiting until backup f73541c6-88d1-4dac-a551-da922cdb3f55 is ready
[ 4.6 ] Created checkpoint '4754dc34-da4b-4e62-84ea-164c413b003c'
(to use in --from-checkpoint-uuid for the next incremental backup)
[ 4.6 ] Creating image transfer for disk 566e6aa6-575b-4f83-88c9-e5e5b54d9649
[ 5.9 ] Waiting until transfer 98e5aabc-fedb-4d2c-81c5-eed1a8b07790 will be ready
[ 5.9 ] Image transfer 98e5aabc-fedb-4d2c-81c5-eed1a8b07790 is ready
[ 5.9 ] Transfer url: https://host4:54322/images/13a0a396-5070-4b0f-a5cd-e2506c5abf0f
Formatting '/home/nsoffer/tmp/backups/test-nfs/566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132336.full.qcow2', fmt=qcow2 size=6442450944 cluster_size=65536 lazy_refcounts=off refcount_bits=16
[ 100.00% ] 6.00 GiB, 18.34 seconds, 334.95 MiB/s
[ 24.3 ] Finalizing transfer 98e5aabc-fedb-4d2c-81c5-eed1a8b07790
[ 24.5 ] Full backup completed successfully
This downloads all the VM's disks to ~/tmp/backups/test-nfs/, creating
566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132336.full.qcow2
This file includes the entire disk content at the time the backup was started, including data from all snapshots.
Then I ran an incremental backup of the same VM, recording the data changed since the full backup:
$ ./backup_vm.py incremental \
    --engine-url https://engine3/ \
    --username admin@internal \
    --password-file /home/nsoffer/.config/ovirt/engine3/password \
    --cafile /home/nsoffer/Downloads/certs/engine3.pem \
    --backup-dir /home/nsoffer/tmp/backups/test-nfs \
    --from-checkpoint-uuid 4754dc34-da4b-4e62-84ea-164c413b003c \
    b5732b5c-37ee-4c66-b77e-bda5d37a10fe
[ 0.0 ] Starting incremental backup for VM b5732b5c-37ee-4c66-b77e-bda5d37a10fe
[ 1.3 ] Waiting until backup 01a88749-06eb-431a-81f2-b03db24b878e is ready
[ 2.3 ] Created checkpoint '6f80d3c5-5b81-42ae-9700-2ccab37ad93b'
(to use in --from-checkpoint-uuid for the next incremental backup)
[ 2.3 ] Creating image transfer for disk 566e6aa6-575b-4f83-88c9-e5e5b54d9649
[ 3.4 ] Waiting until transfer 16c90052-9411-46f6-8dc6-b2f260206708 will be ready
[ 3.4 ] Image transfer 16c90052-9411-46f6-8dc6-b2f260206708 is ready
[ 3.4 ] Transfer url: https://host4:54322/images/b9a44902-46f1-43b3-a9ad-9d72735c53ad
Formatting '/home/nsoffer/tmp/backups/test-nfs/566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132347.incremental.qcow2', fmt=qcow2 size=6442450944 cluster_size=65536 lazy_refcounts=off refcount_bits=16
[ 100.00% ] 6.00 GiB, 0.63 seconds, 9.52 GiB/s
[ 4.0 ] Finalizing transfer 16c90052-9411-46f6-8dc6-b2f260206708
[ 4.1 ] Incremental backup completed successfully
This backup is tiny, since the only changes were a new directory created on the VM and some system logs modified since the full backup.
Then I rebased the incremental backup on top of the full backup:
$ cd /home/nsoffer/tmp/backups/test-nfs
$ qemu-img rebase -u \
    -b 566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132336.full.qcow2 -F qcow2 \
    566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132347.incremental.qcow2
(The -u flag performs an unsafe rebase: it only updates the backing-file metadata without copying any data, which is what we want here because the full backup really is the image the incremental was made against.)
These images now form a valid qcow2 chain that can be uploaded using upload_disk.py:
$ python3 upload_disk.py \
    --engine-url https://engine3/ \
    --username admin@internal \
    --password-file /home/nsoffer/.config/ovirt/engine3/password \
    --cafile /home/nsoffer/Downloads/certs/engine3.pem \
    --disk-format qcow2 --disk-sparse --sd-name iscsi2-1 \
    /home/nsoffer/tmp/backups/test-nfs/566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132347.incremental.qcow2
Checking image...
Image format: qcow2
Disk format: cow
Disk content type: data
Disk provisioned size: 6442450944
Disk initial size: 2755264512
Disk name: 566e6aa6-575b-4f83-88c9-e5e5b54d9649.202005132347.incremental.qcow2
Connecting...
Creating disk...
Disk id: a9785777-8aac-4515-a47a-2f5126e3af73
Creating image transfer...
Transfer ID: 6e0384b6-730b-4416-a954-bf45e627d5cf
Transfer host: host4
Uploading image...
[ 100.00% ] 6.00 GiB, 20.50 seconds, 299.70 MiB/s
Finalizing image transfer...
Upload completed successfully
The result is a single qcow2 disk on the domain iscsi2-1.
I created a new VM from this disk.
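Creating that VM can be scripted too. A minimal untested sketch with the Python SDK (the VM name and cluster are placeholders; the disk id is the one printed by the upload above):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()

# Create an empty VM from the Blank template...
vm = vms_service.add(
    types.Vm(
        name='restored-vm',                     # placeholder
        cluster=types.Cluster(name='Default'),  # placeholder
        template=types.Template(name='Blank'),
    )
)

# ...then attach the uploaded disk as a bootable system disk.
vms_service.vm_service(vm.id).disk_attachments_service().add(
    types.DiskAttachment(
        disk=types.Disk(id='a9785777-8aac-4515-a47a-2f5126e3af73'),
        interface=types.DiskInterface.VIRTIO_SCSI,
        bootable=True,
        active=True,
    )
)

connection.close()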
This backup script is not complete yet: we don't download the VM OVF in each backup, and we don't create the VM from the OVF. These features should be added later.
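Until then, a backup could at least persist a few VM properties next to the disks, so a restore has something to recreate the VM from. A minimal untested sketch (this is just a stand-in, not the OVF handling mentioned above; the VM id is the one from the runs above, the output path is a placeholder):

import json
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

vm = connection.system_service().vms_service().vm_service(
    'b5732b5c-37ee-4c66-b77e-bda5d37a10fe').get()

# Save a few properties; a real restore needs the full OVF.
config = {
    'name': vm.name,
    'memory': vm.memory,
    'cpu_sockets': vm.cpu.topology.sockets,
    'cpu_cores': vm.cpu.topology.cores,
}
with open('backups/test-nfs/vm.json', 'w') as f:  # placeholder path
    json.dump(config, f, indent=2)

connection.close()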
You may want to start testing and integrating this code instead of the snapshot-based download.
See https://www.ovirt.org/develop/release-management/features/storage/increme...
Nir
11 months
Can't get Sysprep to insert product key in Win2016 guest on RHV 4.3
by Lynn Dixon
I am trying to create a basic Windows Server 2016 template in RHV 4.3.
I've installed the OS from the Windows DVD and installed all the RHV tools and drivers. Then I sysprep my Windows VM and make a template from it.
I'd like to inject the product key when the template is cloned. I thought that if we define the product key in /etc/ovirt-engine/osinfo.conf.d/10-productkeys.properties with something like this:
# Windows2016x64(29, OsType.Windows, true);
os.windows_2016x64.id.value = 29
os.windows_2016x64.name.value = Windows 2016 x64
os.windows_2016x64.derivedFrom.value = windows_2012R2x64
os.windows_2016x64.cpu.unsupported.value = conroe, penryn, nehalem, opteron_g1
os.windows_2016x64.sysprepPath.value = ${ENGINE_USR}/conf/sysprep/sysprep.2k16x64
os.windows_2016x64.productKey.value = MY-WIN-PRODUCT-KEY
then it would insert the product key when we tick the "Cloud-Init / Sysprep" box in the Run Once dialog. Now, I know this is semi-working, because all the other things I define in the Run Once dialog under Sysprep do work: admin password, hostname, etc. But NOT the product key!
I have even tried hard-coding the product key in the sysprep.2k16x64 file, with no luck.
What am I doing wrong here?
*Lynn Dixon* | Red Hat Certified Architect #100-006-188
*Solutions Architect* | NA Commercial
Google Voice: 423-618-1414
Cell/Text: 423-774-3188
Click here to view my Certification Portfolio <http://red.ht/1XMX2Mi>
11 months
Re: package conflicts when executing dnf update
by Nir Soffer
On Fri, May 22, 2020 at 4:39 PM Stefan Wichmann <stefan(a)wichmann.ch> wrote:
> I have exactly the same problem. Fresh install of CentOS 8.1, then installed the oVirt 4.4 release (added a new host). And now I have this conflict…
Which conflict?
This looks like a reply, but there is no context from the original message.
11 months
Re: package conflicts when executing dnf update
by Stefan Wichmann
Hi,
I have exactly the same problem. Fresh install of CentOS 8.1, then installed the oVirt 4.4 release (added a new host). And now I have this conflict…
Is there any solution to this?
Kind regards,
Stefan
11 months
oVirt 4.4.0 Release is now generally available
by Sandro Bonazzola
oVirt 4.4.0 Release is now generally available
The oVirt Project is excited to announce the general availability of the oVirt 4.4.0 Release, as of May 20th, 2020.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics, as compared to oVirt 4.3.
Important notes before you install / upgrade
Some of the features included in the oVirt 4.4.0 release require content
that will be available in CentOS Linux 8.2 but cannot be tested on RHEL 8.2
yet due to some incompatibility in the openvswitch package that is shipped
in CentOS Virt SIG, which requires rebuilding openvswitch on top of CentOS
8.2. The cluster switch type OVS is not implemented for CentOS 8 hosts.
Please note that oVirt 4.4 only supports clusters and datacenters with
compatibility version 4.2 and above. If clusters or datacenters are running
with an older compatibility version, you need to upgrade them to at least
4.2 (4.3 is recommended).
Please note that in RHEL 8 / CentOS 8 several devices that worked on EL7
are no longer supported.
For example, megaraid_sas driver is removed. If you use Enterprise Linux 8
hosts you can try to provide the necessary drivers for the deprecated
hardware using the DUD method (See users mailing list thread on this at
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NDSVUZSESOXE...
)
Installation instructions
For the engine: either use the oVirt appliance or install CentOS Linux 8
minimal by following these steps:
- Install the CentOS Linux 8 image from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...
- dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
- dnf update (reboot if needed)
- dnf module enable -y javapackages-tools pki-deps postgresql:12
- dnf install ovirt-engine
- engine-setup
For the nodes:
Either use oVirt Node ISO or:
- Install CentOS Linux 8 from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...,
selecting the minimal installation.
- dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
- dnf update (reboot if needed)
- Attach the host to the engine and let it be deployed.
Update instructions
Update from oVirt 4.4 Release Candidate
On the engine side and on CentOS hosts, you’ll need to switch from
ovirt44-pre to ovirt44 repositories.
In order to do so, you need to:
1. dnf remove ovirt-release44-pre
2. rm -f /etc/yum.repos.d/ovirt-4.4-pre-dependencies.repo
3. rm -f /etc/yum.repos.d/ovirt-4.4-pre.repo
4. dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
5. dnf update
On the engine side you’ll need to run engine-setup only if you were not
already on the latest release candidate.
On oVirt Node, you’ll need to upgrade with:
1. Move the node to maintenance
2. dnf install https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ovirt-node-ng-im...
3. Reboot
4. Activate the host
Update from oVirt 4.3
oVirt 4.4 is available only for CentOS 8. In-place upgrades from previous installations, based on CentOS 7, are not possible. For the engine, take a backup and restore it into a new engine. Nodes will need to be reinstalled.
A 4.4 engine can still manage existing 4.3 hosts, but you can’t add new
ones.
For a standalone engine, please refer to the upgrade procedure at
https://ovirt.org/documentation/upgrade_guide/#Upgrading_from_4-3
If needed, run ovirt-engine-rename (see engine rename tool documentation at
https://www.ovirt.org/documentation/admin-guide/chap-Utilities.html )
When upgrading hosts, you need to upgrade one host at a time:
1. Move the host to maintenance. Virtual machines on that host should migrate automatically to a different host.
2. Remove it from the engine.
3. Re-install it with el8 or oVirt Node as per the installation instructions.
4. Re-add the host to the engine.
Please note that you may see some issues live migrating VMs from el7 to el8. If you hit such a case, please turn off the VM on the el7 host and start it on the new el8 host, so that you can move the next el7 host to maintenance.
What’s new in oVirt 4.4.0 Release?
- Hypervisors based on CentOS Linux 8 (rebuilt from award-winning RHEL8), for both oVirt Node and standalone CentOS Linux hosts.
- Easier network management and configuration flexibility with NetworkManager.
- VMs based on a more modern Q35 chipset with legacy SeaBIOS and UEFI firmware.
- Support for direct passthrough of local host disks to VMs.
- Live migration improvements for High Performance guests.
- New Windows guest tools installer based on the WiX framework, now moved to the VirtioWin project.
- Dropped support for cluster levels prior to 4.2.
- Dropped API/SDK v3 support, deprecated in past versions.
- 4K block disk support only for file-based storage; iSCSI/FC storage does not support 4K disks yet.
- You can export a VM to a data domain.
- You can edit floating disks.
- Ansible Runner (ansible-runner) is integrated within the engine, enabling more detailed monitoring of playbooks executed from the engine.
- Adding and reinstalling hosts is now completely based on Ansible, replacing ovirt-host-deploy, which is no longer used.
- The OpenStack Neutron Agent can no longer be configured by oVirt; it should be configured by TripleO instead.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
* oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
If you manage more than one oVirt instance, OKD, or RDO, we also recommend trying ManageIQ <http://manageiq.org/>. In that case, please be sure to take the qc2 image and not the ova image.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.0 release highlights:
http://www.ovirt.org/release/4.4.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.0/
[2] http://resources.ovirt.org/pub/ovirt-4.4/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
11 months, 1 week