oVirt Engine UI error: changing Storage Domain mount options
by Jp
Hi,
I want to change the mount options of a Storage Domain that is already in use, but I get an error from the oVirt Engine's UI, and the mount option change isn't applied.
What I tried:
1. Shut down all VMs using disks on the volume
2. Stopped the storage volume
3. Put the Storage Domain into Maintenance (Storage -> Data Center -> select Domain -> Maintenance button)
4. Entered the mount options via Storage -> Domain -> select Domain -> Manage Domain -> "Mount Options" field
5. A pop-up in the UI showed the exact same error as a bug that had already been fixed (https://bugzilla.redhat.com/show_bug.cgi?id=1273941)
6. Confirmed the "Mount Options" field had _actually_ kept my new mount option!
7. Took the Domain out of Maintenance
8. Started the volume
9. Checked the oVirt nodes' mount points on the CLI, but my new option isn't listed
Is there a way I can change the mount options via the CLI, to work around this UI bug?
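(For what it's worth, a minimal sketch of doing this through the REST API instead of the UI, assuming the storageconnections endpoint accepts mount_options on an update and the domain is still in maintenance; the engine FQDN, password, connection ID and the "noatime" option below are only placeholders:)

# list storage connections to find the ID of the connection behind the domain
curl -k -u admin@internal:PASSWORD \
    https://engine.example.com/ovirt-engine/api/storageconnections

# update the mount options on that connection while the domain is in maintenance
curl -k -u admin@internal:PASSWORD -X PUT \
    -H 'Content-Type: application/xml' \
    -d '<storage_connection><mount_options>noatime</mount_options></storage_connection>' \
    https://engine.example.com/ovirt-engine/api/storageconnections/CONNECTION_ID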
Re: ETL service aggregation error
by Ayansh Rocks
Please find the attached error logs.
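(Since Paul asked for the log as text rather than images, a minimal way to pull the recent errors out of the DWH log; the grep pattern is just a guess at what is worth filtering on:)

grep -iE 'error|exception' /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log | tail -n 50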
On Thu, Jun 4, 2020 at 8:17 PM Staniforth, Paul <
P.Staniforth(a)leedsbeckett.ac.uk> wrote:
> Hello Shashank,
> I can't see any of your images and also it
> would be better to have the log file as text.
>
> Regards,
> Paul S.
>
>
>
> ------------------------------
> *From:* Ayansh Rocks <shashank123rastogi(a)gmail.com>
> *Sent:* 04 June 2020 15:16
> *To:* users <users(a)ovirt.org>
> *Subject:* [ovirt-users] Re: ETL service aggregation error
>
>
> Any update on this ?
>
> On Tue, May 26, 2020 at 1:41 PM Ayansh Rocks <shashank123rastogi(a)gmail.com>
> wrote:
>
> Hi,
>
> I am using a 4.3.7 self-hosted engine. For the last few days I have been
> getting the error messages below regularly:
> [image: image.png]
>
> Logs in /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log
> [image: image.png]
>
> What could be the reason for this?
>
> Thanks
> Shashank
>
>
>
import from qemu+tcp still working?
by Nardus Geldenhuys
Hi oVirt Land
Hope you are all well and that you can help me.
I built a CentOS 8.2 VM on my local Fedora 32 qemu/kvm laptop. Then I ssh to
my oVirt host in the following manner: ssh ovirthost -R 16666:localhost:16509.
I can then see the VMs on my local laptop using the following connection
string: qemu+tcp://localhost:16666/system.
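(To spell the tunnel out — 16509 is libvirtd's default TCP listen port; the virsh line is just a quick way to confirm the remote side is reachable, hostnames as above:)

# on the laptop: reverse-forward local libvirtd (TCP 16509) to port 16666 on the oVirt host
ssh ovirthost -R 16666:localhost:16509

# on the oVirt host: confirm the laptop's libvirt answers through the tunnel
virsh -c qemu+tcp://localhost:16666/system list --all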
Everything goes according to plan until I hit the import button. It fails
almost immediately. I even tried it on the command line and this is what I
get; it also errors immediately:
virt-v2v -ic qemu+tcp://localhost:16666/system -o libvirt -os SOME_STORAGE
centos8_ovirt
virt-v2v: warning: no support for remote libvirt connections to '-ic
qemu+tcp://localhost:16666/system'. The conversion may fail when it tries
to read the source disks.
[ 0.0] Opening the source -i libvirt -ic
qemu+tcp://localhost:16666/system centos8_ovirt
[ 0.2] Creating an overlay to protect the source from being modified
qemu-img: /tmp/v2vovldd7ebf.qcow2: Could not open
'/home/libvirt/disks/centos8_ovirt.qcow2': No such file or directory
Could not open backing image to determine size.
virt-v2v: error: qemu-img command failed, see earlier errors
If reporting bugs, run virt-v2v with debugging enabled and include the
complete output:
virt-v2v -v -x [...]
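(The backing path qemu-img complains about presumably only exists on the laptop, not on the oVirt host where the overlay is created, which is exactly the failure mode the remote-connection warning describes. A quick way to confirm, with the path taken from the error above:)

# on the oVirt host running virt-v2v: the source disk path from the domain XML is not present
ls -l /home/libvirt/disks/centos8_ovirt.qcow2

# on the laptop, where the disk actually lives:
qemu-img info /home/libvirt/disks/centos8_ovirt.qcow2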
This solution worked for me last year, but now I can't get it going. Is there
something obvious I am missing, or does this just not work anymore?
Version information:
Fedora 32 with libvirtd:
libvirt-daemon-kvm-6.1.0-4.fc32.x86_64
qemu-kvm-4.2.0-7.fc32.x86_64
qemu-kvm-core-4.2.0-7.fc32.x86_64
oVirt 4.3.8.2-1.el7
virt-v2v-1.40.2-9.el7.x86_64
Thanks
Nardus
Fwd: Fwd: Issues with Gluster Domain
by C Williams
Resending to eliminate email issues
---------- Forwarded message ---------
From: C Williams <cwilliams3320(a)gmail.com>
Date: Thu, Jun 18, 2020 at 4:01 PM
Subject: Re: [ovirt-users] Fwd: Issues with Gluster Domain
To: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Here is output from mount
192.168.24.12:/stor/import0 on
/rhev/data-center/mnt/192.168.24.12:_stor_import0
type nfs4
(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.12)
192.168.24.13:/stor/import1 on
/rhev/data-center/mnt/192.168.24.13:_stor_import1
type nfs4
(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
192.168.24.13:/stor/iso1 on /rhev/data-center/mnt/192.168.24.13:_stor_iso1
type nfs4
(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
192.168.24.13:/stor/export0 on
/rhev/data-center/mnt/192.168.24.13:_stor_export0
type nfs4
(rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=192.168.24.18,local_lock=none,addr=192.168.24.13)
192.168.24.15:/images on /rhev/data-center/mnt/glusterSD/192.168.24.15:_images
type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
192.168.24.18:/images3 on
/rhev/data-center/mnt/glusterSD/192.168.24.18:_images3
type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
tmpfs on /run/user/0 type tmpfs
(rw,nosuid,nodev,relatime,seclabel,size=13198392k,mode=700)
[root@ov06 glusterfs]#
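(A quick sanity check on the gluster FUSE mounts listed above — a mount that has lost its bricks typically hangs here or returns the same "Transport endpoint is not connected" error quoted further down:)

for m in /rhev/data-center/mnt/glusterSD/*; do
    echo "== $m"
    timeout 10 ls "$m" > /dev/null && echo OK || echo FAILED
done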
Also here is a screenshot of the console
[image: image.png]
The other domains are up
Import0 and Import1 are NFS; GLCL0 is Gluster. They are all running VMs.
Thank You For Your Help !
On Thu, Jun 18, 2020 at 3:51 PM Strahil Nikolov <hunter86_bg(a)yahoo.com>
wrote:
> I don't see '/rhev/data-center/mnt/192.168.24.13:_stor_import1' mounted
> at all .
> What is the status of all storage domains ?
>
> Best Regards,
> Strahil Nikolov
>
> On 18 June 2020 at 21:43:44 GMT+03:00, C Williams <cwilliams3320(a)gmail.com>
> wrote:
> > Resending to deal with possible email issues
> >
> >---------- Forwarded message ---------
> >From: C Williams <cwilliams3320(a)gmail.com>
> >Date: Thu, Jun 18, 2020 at 2:07 PM
> >Subject: Re: [ovirt-users] Issues with Gluster Domain
> >To: Strahil Nikolov <hunter86_bg(a)yahoo.com>
> >
> >
> >More
> >
> >[root@ov06 ~]# for i in $(gluster volume list); do echo $i;echo;
> >gluster
> >volume info $i; echo;echo;gluster volume status $i;echo;echo;echo;done
> >images3
> >
> >
> >Volume Name: images3
> >Type: Replicate
> >Volume ID: 0243d439-1b29-47d0-ab39-d61c2f15ae8b
> >Status: Started
> >Snapshot Count: 0
> >Number of Bricks: 1 x 3 = 3
> >Transport-type: tcp
> >Bricks:
> >Brick1: 192.168.24.18:/bricks/brick04/images3
> >Brick2: 192.168.24.19:/bricks/brick05/images3
> >Brick3: 192.168.24.20:/bricks/brick06/images3
> >Options Reconfigured:
> >performance.client-io-threads: on
> >nfs.disable: on
> >transport.address-family: inet
> >user.cifs: off
> >auth.allow: *
> >performance.quick-read: off
> >performance.read-ahead: off
> >performance.io-cache: off
> >performance.low-prio-threads: 32
> >network.remote-dio: off
> >cluster.eager-lock: enable
> >cluster.quorum-type: auto
> >cluster.server-quorum-type: server
> >cluster.data-self-heal-algorithm: full
> >cluster.locking-scheme: granular
> >cluster.shd-max-threads: 8
> >cluster.shd-wait-qlength: 10000
> >features.shard: on
> >cluster.choose-local: off
> >client.event-threads: 4
> >server.event-threads: 4
> >storage.owner-uid: 36
> >storage.owner-gid: 36
> >performance.strict-o-direct: on
> >network.ping-timeout: 30
> >cluster.granular-entry-heal: enable
> >
> >
> >Status of volume: images3
> >Gluster process TCP Port RDMA Port Online
> > Pid
>
> >------------------------------------------------------------------------------
> >Brick 192.168.24.18:/bricks/brick04/images3 49152 0 Y
> >6666
> >Brick 192.168.24.19:/bricks/brick05/images3 49152 0 Y
> >6779
> >Brick 192.168.24.20:/bricks/brick06/images3 49152 0 Y
> >7227
> >Self-heal Daemon on localhost N/A N/A Y
> >6689
> >Self-heal Daemon on ov07.ntc.srcle.com N/A N/A Y
> >6802
> >Self-heal Daemon on ov08.ntc.srcle.com N/A N/A Y
> >7250
> >
> >Task Status of Volume images3
>
> >------------------------------------------------------------------------------
> >There are no active volume tasks
> >
> >
> >
> >
> >[root@ov06 ~]# ls -l /rhev/data-center/mnt/glusterSD/
> >total 16
> >drwxr-xr-x. 5 vdsm kvm 8192 Jun 18 14:04 192.168.24.15:_images
> >drwxr-xr-x. 5 vdsm kvm 8192 Jun 18 14:05 192.168.24.18:_images3
> >[root@ov06 ~]#
> >
> >On Thu, Jun 18, 2020 at 2:03 PM C Williams <cwilliams3320(a)gmail.com>
> >wrote:
> >
> >> Strahil,
> >>
> >> Here you go -- Thank You For Your Help !
> >>
> >> BTW -- I can write a test file to gluster and it replicates properly.
> >> I'm thinking it's something about the oVirt Storage Domain?
> >>
> >> [root@ov08 ~]# gluster pool list
> >> UUID Hostname State
> >> 5b40c659-d9ab-43c3-9af8-18b074ea0b83 ov06
> >Connected
> >> 36ce5a00-6f65-4926-8438-696944ebadb5 ov07.ntc.srcle.com
> >Connected
> >> c7e7abdb-a8f4-4842-924c-e227f0db1b29 localhost
> >Connected
> >> [root@ov08 ~]# gluster volume list
> >> images3
> >>
> >> On Thu, Jun 18, 2020 at 1:13 PM Strahil Nikolov
> ><hunter86_bg(a)yahoo.com>
> >> wrote:
> >>
> >>> Log in to the oVirt cluster and provide the output of:
> >>> gluster pool list
> >>> gluster volume list
> >>> for i in $(gluster volume list); do echo $i;echo; gluster volume
> >info
> >>> $i; echo;echo;gluster volume status $i;echo;echo;echo;done
> >>>
> >>> ls -l /rhev/data-center/mnt/glusterSD/
> >>>
> >>> Best Regards,
> >>> Strahil Nikolov
> >>>
> >>>
> >>> On 18 June 2020 at 19:17:46 GMT+03:00, C Williams
> ><cwilliams3320(a)gmail.com>
> >>> wrote:
> >>> >Hello,
> >>> >
> >>> >I recently added 6 hosts to an existing oVirt compute/gluster
> >cluster.
> >>> >
> >>> >Prior to this attempted addition, my cluster had 3 Hypervisor hosts
> >and
> >>> >3
> >>> >gluster bricks which made up a single gluster volume (replica 3
> >volume)
> >>> >. I
> >>> >added the additional hosts and made a brick on 3 of the new hosts
> >and
> >>> >attempted to make a new replica 3 volume. I had difficulty
> >creating
> >>> >the
> >>> >new volume. So, I decided that I would make a new compute/gluster
> >>> >cluster
> >>> >for each set of 3 new hosts.
> >>> >
> >>> >I removed the 6 new hosts from the existing oVirt Compute/Gluster
> >>> >Cluster
> >>> >leaving the 3 original hosts in place with their bricks. At that
> >point
> >>> >my
> >>> >original bricks went down and came back up . The volume showed
> >entries
> >>> >that
> >>> >needed healing. At that point I ran gluster volume heal images3
> >full,
> >>> >etc.
> >>> >The volume shows no unhealed entries. I also corrected some peer
> >>> >errors.
> >>> >
> >>> >However, I am unable to copy disks, move disks to another domain,
> >>> >export
> >>> >disks, etc. It appears that the engine cannot locate disks properly
> >and
> >>> >I
> >>> >get storage I/O errors.
> >>> >
> >>> >I have detached and removed the oVirt Storage Domain. I reimported
> >the
> >>> >domain and imported 2 VMs, but the VM disks exhibit the same
> >behaviour
> >>> >and
> >>> >won't run from the hard disk.
> >>> >
> >>> >
> >>> >I get errors such as this
> >>> >
> >>> >VDSM ov05 command HSMGetAllTasksStatusesVDS failed: low level Image
> >>> >copy
> >>> >failed: ("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t',
> >'none',
> >>> >'-T', 'none', '-f', 'raw',
> >>> >u'/rhev/data-center/mnt/glusterSD/192.168.24.18:
> >>>
>
> >_images3/5fe3ad3f-2d21-404c-832e-4dc7318ca10d/images/3ea5afbd-0fe0-4c09-8d39-e556c66a8b3d/fe6eab63-3b22-4815-bfe6-4a0ade292510',
> >>> >'-O', 'raw',
> >>> >u'/rhev/data-center/mnt/192.168.24.13:
> >>>
>
> >_stor_import1/1ab89386-a2ba-448b-90ab-bc816f55a328/images/f707a218-9db7-4e23-8bbd-9b12972012b6/d6591ec5-3ede-443d-bd40-93119ca7c7d5']
> >>> >failed with rc=1 out='' err=bytearray(b'qemu-img: error while
> >reading
> >>> >sector 135168: Transport endpoint is not connected\\nqemu-img:
> >error
> >>> >while
> >>> >reading sector 131072: Transport endpoint is not
> >connected\\nqemu-img:
> >>> >error while reading sector 139264: Transport endpoint is not
> >>> >connected\\nqemu-img: error while reading sector 143360: Transport
> >>> >endpoint
> >>> >is not connected\\nqemu-img: error while reading sector 147456:
> >>> >Transport
> >>> >endpoint is not connected\\nqemu-img: error while reading sector
> >>> >155648:
> >>> >Transport endpoint is not connected\\nqemu-img: error while reading
> >>> >sector
> >>> >151552: Transport endpoint is not connected\\nqemu-img: error while
> >>> >reading
> >>> >sector 159744: Transport endpoint is not connected\\n')",)
> >>> >
> >>> >oVirt version is 4.3.82-1.el7
> >>> >OS CentOS Linux release 7.7.1908 (Core)
> >>> >
> >>> >The Gluster Cluster has been working very well until this incident.
> >>> >
> >>> >Please help.
> >>> >
> >>> >Thank You
> >>> >
> >>> >Charles Williams
> >>>
> >>
>
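(Given the full heal mentioned in the quoted thread above, a minimal way to re-check the heal state and which clients each brick still sees — both are standard gluster subcommands:)

gluster volume heal images3 info
gluster volume status images3 clients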
HostedEngine install failure (oVirt 4.4 & oVirt Node OS)
by Ian Easter
Hello folks,
Hoping I can trace this down here, but this is kind of an "out of the box"
error.
Steps:
- Install oVirt Node OS
- Manual steps using ovirt-hosted-engine-setup
It might be a step I glanced over, so I'm alright with a finger point and an
RTFM statement. ;-)
The process fails out:
[ INFO ] TASK [ovirt.hosted_engine_setup : Obtain SSO token using
username/password credentials]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
false, "ovirt_hosts": [{"address": "mtl-hv-14.teve.inc", "affinity
_labels": [], "auto_numa_status": "unknown", "certificate":
{"organization": "teve.inc", "subject": "O=teve.inc,CN=mtl-hv-14.teve.inc"},
"cluster": {"href":
"/ovirt-engine/api/clusters/ba6daa62-b1a5-11ea-a207-00163e79d98c", "id":
"ba6daa62-b1a5-11ea-a207-00163e79d98c"}, "
comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough":
{"enabled": false}, "devices": [], "external_network_provider
_configurations": [], "external_status": "ok", "hardware_information":
{"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engin
e/api/hosts/e1399963-f520-4bdc-8ef0-832dc3d99ece", "id":
"e1399963-f520-4bdc-8ef0-832dc3d99ece", "katello_errata": [],
"kdump_status": "
unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory":
0, "name": "mtl-hv-14.teve.inc", "network_attachments": [], "
nics": [], "numa_nodes": [], "numa_supported": false, "os":
{"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321,
"power_management": {"automatic_pm_enabled": true, "enabled": false,
"kdump_detection": true, "pm_proxies": []}, "protocol": "stomp",
"se_linux": $}, "spm": {"priority": 5, "status": "none"}, "ssh":
{"fingerprint": "SHA256:rfVGiGz8dQU7Hr5irbd8N+xBkj94qWThArTokcSqGV8",
"port": 22}, $statistics": [], "status": "install_failed",
"storage_connection_extensions": [], "summary": {"total": 0}, "tags": [],
"transparent_hug$_pages": {"enabled": false}, "type": "rhel",
"unmanaged_networks": [], "update_available": false, "vgpu_placement":
"consolidated"}]}
...
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
system may not be provisioned according to the playbook results$ please
check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing
ansible-playbook
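(The host ends up in "install_failed", so the detail is usually in the per-host deploy log on the engine VM rather than in the setup log itself. A sketch, assuming the local engine VM came up far enough to be reachable; the exact filename pattern varies between versions:)

# on the (temporary) engine VM:
ls -lt /var/log/ovirt-engine/host-deploy/
less /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-*.log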
I've attached the ovirt-hosted-engine-setup log.
*Thank you,*
*Ian Easter*
Hosted engine deployment fails consistently when trying to download files.
by Gilboa Davara
Hello,
I'm trying to deploy a hosted engine on one of my test setups.
No matter how I try to deploy the hosted engine, either via the command line
or via the "Hosted Engine" deployment from the cockpit web console, it always
fails with the same error message. [1]
Manually downloading RPMs via dnf from the host works just fine.
Firewall log files are clean.
Any idea what's going on?
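(Note the '_ansible_delegated_vars' in the log below: the failing dnf runs on the engine appliance, not on the host, so the repo check has to happen from inside that VM, e.g. via hosted-engine --console or SSH to its temporary address. A minimal reproduction, assuming that access:)

# inside the engine VM, not on the host:
dnf clean all
dnf -v makecache
# if this fails, check DNS resolution and outbound access (proxy, default route) from the VM itself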
[1] 2020-06-12 06:09:38,609-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:103 {'msg': "Failed to download metadata for
repo 'AppStream'", 'results': [], 'rc': 1, 'invocation': {'module_args':
{'name': ['ovirt-engine'], 'state': 'present', 'allow_downgrade': False,
'autoremove': False, 'bugfix': False, 'disable_gpg_check': False,
'disable_plugin': [], 'disablerepo': [], 'down load_only': False,
'enable_plugin': [], 'enablerepo': [], 'exclude': [], 'installroot': '/',
'install_repoquery': True, 'install_weak_deps': True, 'security': False,
'skip_broken': False, 'update_cache': False, 'update_only': False,
'validate_certs': True, 'lock_timeout': 30, 'conf_file': None,
'disable_excludes': None, 'download_dir': None, 'list': None, 'releasever':
None}}, '_ansible_no_log': False, 'changed ': False,
'_ansible_delegated_vars': {'ansible_host': 'test-vmengine.localdomain'}}
2020-06-12 06:09:38,709-0400 ERROR
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:107 fatal: [localhost ->
gilboa-wx-vmovirt.localdomain]: FAILED! => {"changed": false, "msg":
"Failed to download metadata for repo 'AppStream'", "rc": 1, "results": []}
2020-06-12 06:09:39,711-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:103 PLAY RECAP [localhost] : ok: 183 changed:
57 unreachable: 0 skipped: 77 failed: 1
2020-06-12 06:09:39,812-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:215
ansible-playbook rc: 2
2020-06-12 06:09:39,812-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:222
ansible-playbook stdout:
2020-06-12 06:09:39,812-0400 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:225
ansible-playbook stderr:
2020-06-12 06:09:39,812-0400 DEBUG otopi.context
context._executeMethod:145 method exception
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in
_executeMethod
method['method']()
File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py",
line 403, in _closeup
r = ah.run()
File
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/ansible_utils.py",
line 229, in run
raise RuntimeError(_('Failed executing ansible-playbook'))
- Gilboa