[ANN] oVirt 4.3.3 Fourth Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.3 Fourth Release Candidate, as of April 12th, 2019.
This update is a release candidate of the third in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.
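For a quick test on CentOS/RHEL 7.6, the pre-release repository is normally
enabled with a single release package; the URL below follows the usual oVirt
pre-release naming and should be verified against the release notes [1]:
# yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43-pre.rpm
# yum install ovirt-engine                 # standalone engine
# yum install ovirt-hosted-engine-setup    # self-hosted engine deployment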
Notes:
- oVirt Appliance will be available soon
- oVirt Node will be available soon[2]
Additional Resources:
* Read more about the oVirt 4.3.3 release highlights:
http://www.ovirt.org/release/4.3.3/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.3/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
Re: [Gluster-users] Gluster snapshot fails
by Strahil Nikolov
Hello All,
I have tried to enable debug and see the reason for the issue. Here is the relevant glusterd.log:
[2019-04-12 07:56:54.526508] E [MSGID: 106077] [glusterd-snapshot.c:1882:glusterd_is_thinp_brick] 0-management: Failed to get pool name for device systemd-1
[2019-04-12 07:56:54.527509] E [MSGID: 106121] [glusterd-snapshot.c:2523:glusterd_snapshot_create_prevalidate] 0-management: Failed to pre validate
[2019-04-12 07:56:54.527525] E [MSGID: 106024] [glusterd-snapshot.c:2547:glusterd_snapshot_create_prevalidate] 0-management: Snapshot is supported only for thin provisioned LV. Ensure that all bricks of isos are thinly provisioned LV.
[2019-04-12 07:56:54.527539] W [MSGID: 106029] [glusterd-snapshot.c:8613:glusterd_snapshot_prevalidate] 0-management: Snapshot create pre-validation failed
[2019-04-12 07:56:54.527552] W [MSGID: 106121] [glusterd-mgmt.c:147:gd_mgmt_v3_pre_validate_fn] 0-management: Snapshot Prevalidate Failed
[2019-04-12 07:56:54.527568] E [MSGID: 106121] [glusterd-mgmt.c:1015:glusterd_mgmt_v3_pre_validate] 0-management: Pre Validation failed for operation Snapshot on local node
[2019-04-12 07:56:54.527583] E [MSGID: 106121] [glusterd-mgmt.c:2377:glusterd_mgmt_v3_initiate_snap_phases] 0-management: Pre Validation Failed
Here is the output of lvscan & lvs:
[root@ovirt1 ~]# lvscan
ACTIVE '/dev/gluster_vg_md0/my_vdo_thinpool' [9.86 TiB] inherit
ACTIVE '/dev/gluster_vg_md0/gluster_lv_data' [500.00 GiB] inherit
ACTIVE '/dev/gluster_vg_md0/gluster_lv_isos' [50.00 GiB] inherit
ACTIVE '/dev/gluster_vg_ssd/my_ssd_thinpool' [168.59 GiB] inherit
ACTIVE '/dev/gluster_vg_ssd/gluster_lv_engine' [40.00 GiB] inherit
ACTIVE '/dev/centos_ovirt1/swap' [6.70 GiB] inherit
ACTIVE '/dev/centos_ovirt1/home' [1.00 GiB] inherit
ACTIVE '/dev/centos_ovirt1/root' [60.00 GiB] inherit
[root@ovirt1 ~]# lvs --noheadings -o pool_lv
my_vdo_thinpool
my_vdo_thinpool
my_ssd_thinpool
[root@ovirt1 ~]# ssh ovirt2 "lvscan;lvs --noheadings -o pool_lv"
ACTIVE '/dev/gluster_vg_md0/my_vdo_thinpool' [<9.77 TiB] inherit
ACTIVE '/dev/gluster_vg_md0/gluster_lv_data' [500.00 GiB] inherit
ACTIVE '/dev/gluster_vg_md0/gluster_lv_isos' [50.00 GiB] inherit
ACTIVE '/dev/gluster_vg_ssd/my_ssd_thinpool' [<161.40 GiB] inherit
ACTIVE '/dev/gluster_vg_ssd/gluster_lv_engine' [40.00 GiB] inherit
ACTIVE '/dev/centos_ovirt2/root' [15.00 GiB] inherit
ACTIVE '/dev/centos_ovirt2/home' [1.00 GiB] inherit
ACTIVE '/dev/centos_ovirt2/swap' [16.00 GiB] inherit
my_vdo_thinpool
my_vdo_thinpool
my_ssd_thinpool
[root@ovirt1 ~]# ssh ovirt3 "lvscan;lvs --noheadings -o pool_lv"
ACTIVE '/dev/gluster_vg_sda3/gluster_thinpool_sda3' [41.00 GiB] inherit
ACTIVE '/dev/gluster_vg_sda3/gluster_lv_data' [15.00 GiB] inherit
ACTIVE '/dev/gluster_vg_sda3/gluster_lv_isos' [15.00 GiB] inherit
ACTIVE '/dev/gluster_vg_sda3/gluster_lv_engine' [15.00 GiB] inherit
ACTIVE '/dev/centos_ovirt3/root' [20.00 GiB] inherit
ACTIVE '/dev/centos_ovirt3/home' [1.00 GiB] inherit
ACTIVE '/dev/centos_ovirt3/swap' [8.00 GiB] inherit
gluster_thinpool_sda3
gluster_thinpool_sda3
gluster_thinpool_sda3
I am mounting my bricks via systemd, as I have issues with bricks being started before VDO.
[root@ovirt1 ~]# findmnt /gluster_bricks/isos
TARGET SOURCE FSTYPE OPTIONS
/gluster_bricks/isos systemd-1 autofs rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21843
/gluster_bricks/isos /dev/mapper/gluster_vg_md0-gluster_lv_isos xfs rw,noatime,nodiratime,seclabel,attr2,inode64,noquota
[root@ovirt1 ~]# ssh ovirt2 "findmnt /gluster_bricks/isos "
TARGET SOURCE FSTYPE OPTIONS
/gluster_bricks/isos systemd-1 autofs rw,relatime,fd=26,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14279
/gluster_bricks/isos /dev/mapper/gluster_vg_md0-gluster_lv_isos xfs rw,noatime,nodiratime,seclabel,attr2,inode64,noquota
[root@ovirt1 ~]# ssh ovirt3 "findmnt /gluster_bricks/isos "
TARGET SOURCE FSTYPE OPTIONS
/gluster_bricks/isos systemd-1 autofs rw,relatime,fd=35,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=17770
/gluster_bricks/isos /dev/mapper/gluster_vg_sda3-gluster_lv_isos xfs rw,noatime,nodiratime,seclabel,attr2,inode64,logbsize=256k,sunit=512,swidth=1024,noquota
[root@ovirt1 ~]# grep "gluster_bricks" /proc/mounts
systemd-1 /gluster_bricks/data autofs rw,relatime,fd=22,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21513 0 0
systemd-1 /gluster_bricks/engine autofs rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21735 0 0
systemd-1 /gluster_bricks/isos autofs rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21843 0 0
/dev/mapper/gluster_vg_ssd-gluster_lv_engine /gluster_bricks/engine xfs rw,seclabel,noatime,nodiratime,attr2,inode64,sunit=256,swidth=256,noquota 0 0
/dev/mapper/gluster_vg_md0-gluster_lv_isos /gluster_bricks/isos xfs rw,seclabel,noatime,nodiratime,attr2,inode64,noquota 0 0
/dev/mapper/gluster_vg_md0-gluster_lv_data /gluster_bricks/data xfs rw,seclabel,noatime,nodiratime,attr2,inode64,noquota 0 0
Obviously, gluster is catching "systemd-1" as a device and trying to check whether it's a thin LV. Where should I open a bug for that?
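For anyone hitting the same layering issue: a plain systemd mount unit ordered after the VDO service (instead of an automount) should keep the real LV as the only mount source for the brick, so the snapshot pre-validation sees the device instead of "systemd-1". A minimal sketch, with illustrative unit and device names:
# /etc/systemd/system/gluster_bricks-isos.mount
[Unit]
Description=Gluster brick for the isos volume
Requires=vdo.service
After=vdo.service
[Mount]
What=/dev/mapper/gluster_vg_md0-gluster_lv_isos
Where=/gluster_bricks/isos
Type=xfs
Options=noatime,nodiratime,inode64
[Install]
WantedBy=multi-user.target
The same ordering can also be expressed in /etc/fstab via the x-systemd.requires=vdo.service mount option.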
P.S.: Adding oVirt User list.
Best Regards,
Strahil Nikolov
On Thursday, April 11, 2019, 04:00:31 GMT-4, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
Hi Rafi,
thanks for your update.
I have tested again with another gluster volume.
[root@ovirt1 glusterfs]# gluster volume info isos
Volume Name: isos
Type: Replicate
Volume ID: 9b92b5bd-79f5-427b-bd8d-af28b038ed2a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1:/gluster_bricks/isos/isos
Brick2: ovirt2:/gluster_bricks/isos/isos
Brick3: ovirt3.localdomain:/gluster_bricks/isos/isos (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable
Command run:
logrotate -f glusterfs ; logrotate -f glusterfs-georep; gluster snapshot create isos-snap-2019-04-11 isos description TEST
Logs:
[root@ovirt1 glusterfs]# cat cli.log
[2019-04-11 07:51:02.367453] I [cli.c:769:main] 0-cli: Started running gluster with version 5.5
[2019-04-11 07:51:02.486863] I [MSGID: 101190] [event-epoll.c:621:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2019-04-11 07:51:02.556813] E [cli-rpc-ops.c:11293:gf_cli_snapshot] 0-cli: cli_to_glusterd for snapshot failed
[2019-04-11 07:51:02.556880] I [input.c:31:cli_batch] 0-: Exiting with: -1
[root@ovirt1 glusterfs]# cat glusterd.log
[2019-04-11 07:51:02.553357] E [MSGID: 106024] [glusterd-snapshot.c:2547:glusterd_snapshot_create_prevalidate] 0-management: Snapshot is supported only for thin provisioned LV. Ensure that all bricks of isos are thinly provisioned LV.
[2019-04-11 07:51:02.553365] W [MSGID: 106029] [glusterd-snapshot.c:8613:glusterd_snapshot_prevalidate] 0-management: Snapshot create pre-validation failed
[2019-04-11 07:51:02.553703] W [MSGID: 106121] [glusterd-mgmt.c:147:gd_mgmt_v3_pre_validate_fn] 0-management: Snapshot Prevalidate Failed
[2019-04-11 07:51:02.553719] E [MSGID: 106121] [glusterd-mgmt.c:1015:glusterd_mgmt_v3_pre_validate] 0-management: Pre Validation failed for operation Snapshot on local node
My LVs hosting the bricks are:
[root@ovirt1 ~]# lvs gluster_vg_md0
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
gluster_lv_data gluster_vg_md0 Vwi-aot--- 500.00g my_vdo_thinpool 35.97
gluster_lv_isos gluster_vg_md0 Vwi-aot--- 50.00g my_vdo_thinpool 52.11
my_vdo_thinpool gluster_vg_md0 twi-aot--- 9.86t 2.04 11.45
[root@ovirt1 ~]# ssh ovirt2 "lvs gluster_vg_md0"
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
gluster_lv_data gluster_vg_md0 Vwi-aot--- 500.00g my_vdo_thinpool 35.98
gluster_lv_isos gluster_vg_md0 Vwi-aot--- 50.00g my_vdo_thinpool 25.94
my_vdo_thinpool gluster_vg_md0 twi-aot--- <9.77t 1.93 11.39
[root@ovirt1 ~]# ssh ovirt3 "lvs gluster_vg_sda3"
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
gluster_lv_data gluster_vg_sda3 Vwi-aotz-- 15.00g gluster_thinpool_sda3 0.17
gluster_lv_engine gluster_vg_sda3 Vwi-aotz-- 15.00g gluster_thinpool_sda3 0.16
gluster_lv_isos gluster_vg_sda3 Vwi-aotz-- 15.00g gluster_thinpool_sda3 0.12
gluster_thinpool_sda3 gluster_vg_sda3 twi-aotz-- 41.00g 0.16 1.58
As you can see, all bricks are thin LVs and space is not the issue.
Can someone hint me on how to enable debug logging, so the gluster logs can show the reason for that pre-check failure?
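For reference, a rough sketch of raising the log verbosity (option names per glusterd --help and the volume set help; the sysconfig path is the usual EL7 location, so treat it as an assumption):
systemctl stop glusterd
/usr/sbin/glusterd --log-level DEBUG    # or set LOG_LEVEL=DEBUG in /etc/sysconfig/glusterd and restart the service
gluster volume set isos diagnostics.brick-log-level DEBUG
gluster volume set isos diagnostics.client-log-level DEBUG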
Best Regards,
Strahil Nikolov
On Wednesday, April 10, 2019, 09:05:15 GMT-4, Rafi Kavungal Chundattu Parambil <rkavunga(a)redhat.com> wrote:
Hi Strahil,
The name of the device is not a problem here at all. Can you please check the glusterd log and see if there is any useful information about the failure? Also, please provide the output of `lvscan` and `lvs --noheadings -o pool_lv` from all nodes.
Regards
Rafi KC
----- Original Message -----
From: "Strahil Nikolov" <hunter86_bg(a)yahoo.com>
To: gluster-users(a)gluster.org
Sent: Wednesday, April 10, 2019 2:36:39 AM
Subject: [Gluster-users] Gluster snapshot fails
Hello Community,
I have a problem running a snapshot of a replica 3 arbiter 1 volume.
Error:
[root@ovirt2 ~]# gluster snapshot create before-423 engine description "Before upgrade of engine from 4.2.2 to 4.2.3"
snapshot create: failed: Snapshot is supported only for thin provisioned LV. Ensure that all bricks of engine are thinly provisioned LV.
Snapshot command failed
Volume info:
Volume Name: engine
Type: Replicate
Volume ID: 30ca1cc2-f2f7-4749-9e2e-cee9d7099ded
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1:/gluster_bricks/engine/engine
Brick2: ovirt2:/gluster_bricks/engine/engine
Brick3: ovirt3:/gluster_bricks/engine/engine (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable
All bricks are on thin LVM with plenty of space; the only thing that could be causing it is that ovirt1 & ovirt2 are on /dev/gluster_vg_ssd/gluster_lv_engine, while the arbiter is on /dev/gluster_vg_sda3/gluster_lv_engine.
Is that the issue? Should I rename my brick's VG?
If so, why is there no mention of it in the documentation?
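A quick per-node sanity check (sketch) is to compare what /proc/mounts reports for the brick mount point with what LVM reports for the underlying LV; a non-empty pool_lv means the LV really is thin:
grep gluster_bricks/engine /proc/mounts
lvs --noheadings -o pool_lv /dev/mapper/gluster_vg_ssd-gluster_lv_engine    # adjust the device name per node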
Best Regards,
Strahil Nikolov
_______________________________________________
Gluster-users mailing list
Gluster-users(a)gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
Hosted-engine inaccessible
by Tau Makgaile
Hi,
I am currently experiencing a problem with my hosted engine. The hosted
engine disconnected after increasing the / partition. The increase went
well, but after some time the hosted-engine VM disconnected and has since
been giving alerts such as *re-initializing FSM*.
Though the VMs underneath are running, hosted-engine --vm-status reports:
*"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail"*
There is no backup to restore at the moment. I am looking for a way to
bring it up without redeploying the hosted engine.
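Typical first steps in this situation (a sketch, all run on one of the
hosted-engine hosts) would be:
hosted-engine --vm-status                      # HA agent view of the engine VM
hosted-engine --set-maintenance --mode=global  # keep the agents from restarting it while you work
hosted-engine --console                        # attach to the engine VM console to see why the engine is unhealthy
hosted-engine --vm-shutdown
hosted-engine --vm-start                       # clean restart of the engine VM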
Thanks in advance for your help.
Kind regards,
Tau
oVirt Node 4.3 Master on fc28
by femi adegoke
4.3 Master is looking good!!
A few questions:
- In Step #2: Fqdns: for a 3 host cluster, are we to add the 2nd & 3rd
hosts?
- In Step #5: Bricks:
What effect does the RAID setting have?
Is "Enable Dedupe & Compression" new?
Configure LV Cache: what do we enter for the SSD? What happens if my disks
are already all SSDs? Do I still benefit from using this?
Install of oVirt.hosted-engine-setup from galaxy fails
by Vrgotic, Marko
Dear oVirt team,
Install of role oVirt.hosted-engine-setup from Galaxy repo fails:
- downloading role 'hosted-engine-setup', owned by oVirt
[WARNING]: - oVirt.hosted-engine-setup was NOT installed successfully: - sorry, oVirt.hosted-engine-setup was not found on https://galaxy.ansible.com.
Apparently it fails due to the name of the role in Galaxy, which uses "_" instead of the expected "-".
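A likely workaround (sketch, assuming the role is published on Galaxy as oVirt.hosted_engine_setup) is to install it under that name, or to pin the expected local name through a requirements file:
ansible-galaxy install oVirt.hosted_engine_setup
# requirements.yml
- src: oVirt.hosted_engine_setup
  name: oVirt.hosted-engine-setup
ansible-galaxy install -r requirements.yml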
Marko Vrgotic
Re: HostedEngine cleaned up
by Strahil
The real image is defined within the XML stanza in the vdsm.log when the VM was last started.
So if you remember when the last time the HostedEngine was rebooted, you can check the vdsm.log on the host.
From there - check the cluster for the file.
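A rough way to dig the XML out (sketch; rotated vdsm logs may be compressed, so use zgrep/xzgrep where needed, and the output file name is just an example):
grep -l "HostedEngine" /var/log/vdsm/vdsm.log*
sed -n '/<domain /,/<\/domain>/p' /var/log/vdsm/vdsm.log.1 > /root/HostedEngine.xml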
If it's missing, deploy the HostedEngine again (on the previous HostedEngine cluster volume or on a completely new one).
Then try to import all storage domains, and then you will be able to import the existing VMs.
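One more note on the images: oVirt gluster volumes are usually created with features.shard enabled, and if that is the case for this vmstore volume, a file copied straight off a brick only contains the first shard, which would explain copies that are far smaller than the real images and refuse to boot. A safer way (sketch; server and volume names are taken from the qcow2 path quoted later in this message) is to mount the volume through gluster and copy from there:
mkdir -p /mnt/vmstore
mount -t glusterfs glustermount.goku:/vmstore /mnt/vmstore
ls -lh /mnt/vmstore/9f8ef3f6-53f2-4b02-8a6b-e171b000b420/images/b2b872cd-b468-4f14-ae20-555ed823e84b/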
Best Regards,
Strahil Nikolov
On Apr 11, 2019 20:12, Sakhi Hadebe <sakhi(a)sanren.ac.za> wrote:
>
> What happened is the engine's root filesystem had filled up. My colleague tried to resize the root LVM. The engine then did not come back. In trying to resolve that, he cleaned up the engine and tried to re-install it, with no luck.
>
> That brought down all the VMs. All VMs are down. We are trying to move them onto one of the standalone KVM hosts. We have been trying to locate the VM disk images, with no luck.
>
> According to the VM's XML configuration file, the disk file is /rhev/data-center/mnt/glusterSD/glustermount.goku:_vmstore/9f8ef3f6-53f2-4b02-8a6b-e171b000b420/images/b2b872cd-b468-4f14-ae20-555ed823e84b/76ed4113-51b6-44fd-a3cd-3bd64bf93685
>
> Unfortunately we can't find it, and the solution on the forum states that it can only be found in the associated logical volume, but I think only when the VM is running.
>
> The disk images we have been trying to boot from are the ones we got from the gluster bricks, but they are far smaller than the real images and can't boot.
>
>
> On Thu, Apr 11, 2019 at 6:13 PM Simone Tiraboschi <stirabos(a)redhat.com> wrote:
>>
>>
>>
>> On Thu, Apr 11, 2019 at 9:46 AM Sakhi Hadebe <sakhi(a)sanren.ac.za> wrote:
>>>
>>> Hi,
>>>
>>> We have a situation where the HostedEngine was cleaned up and the VMs are no longer running. Looking at the logs we can see the drive files as:
>>
>>
>> Do you have any guess on what really happened?
>> Are you sure that the disks really disappeared?
>>
>> Please notice that the symlinks under rhev/data-center/mnt/glusterSD/glustermount... are created on the fly only when needed.
>>
>> Are you sure that your host is correctly connecting the gluster storage domain?
>>
>>>
>>>
>>> 2019-03-26T07:42:46.915838Z qemu-kvm: -drive file=/rhev/data-center/mnt/glusterSD/glustermount.goku:_vmstore/9f8ef3f6-53f2-4b02-8a6b-e171b000b420/images/b2b872cd-b468-4f14-ae20-555ed823e84b/76ed4113-51b6-44fd-a3cd-3bd64bf93685,format=qcow2,if=none,id=drive-ua-b2b872cd-b468-4f14-ae20-555ed823e84b,serial=b2b872cd-b468-4f14-ae20-555ed823e84b,werror=stop,rerror=stop,cache=none,aio=native: 'serial' is deprecated, please use the corresponding option of '-device' instead
>>>
>>> I assume this is the disk it was writing to before it went down.
Re: Hosted-engine disconnected
by Strahil
Did you import your storage domains again after the redeploy?
If yes, there is a tab that allows you to import your templates and VMs.
Most probably you will be able to get all of them almost without issues (in my case all VMs got under the default cluster).
You only need new glusterfs volume for the redeploy of the engine.
Still, have you booted your Engine and connected via VNC to check what happened?
Also, keep in mind that the vdsm.log contains the XML of each VM started on the host.
That was how I booted my HostedEngine when the configuration was broken.
Create the following alias:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
With it you will be able to use:
virsh define <xml-file>
virsh start HostedEngine
and so on...
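Put together, a minimal sequence might look like this (the XML file name is whatever you extracted from vdsm.log):
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
virsh define /root/HostedEngine.xml
virsh start HostedEngine
virsh list --all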
If you have an already running HostedEngine (new deploy), try first importing existing storage domains and their VMs.
If not, try to power on your HostedEngine with virsh and rescue DVD (for example CentOS 7 install DVD is a good way) and use the troubleshoot menu to fix it.
By the way, how did you extend the disks?
Best Regards,
Strahil Nikolov
On Apr 11, 2019 18:33, Tau Makgaile <tau(a)sanren.ac.za> wrote:
>
> Hi,
>
> I have been experiencing a problem with my hosted-engine after increasing the / partition. The increase went well for a few minutes and then it disconnected. It started by showing bad health and has since been unreachable/un-ping-able. It was confusing because the VMs were still running. I decided to redeploy the engine with the hope that it would pull the same VMs into the dashboard. That did not give any positive results until I opted for a redeployment. Things went well until I realized it would need new gluster mounts in order to go through, which also means it risks losing information about my VMs, which I have since stopped in order to allow the redeployment to carry on.
>
> I now need help to export all of the information about my VMs, most importantly the disk images. I have been trying to get some disk images from /gluster_bricks/data/data/ with no luck in booting them up after conversion. I am thinking there might be information missing from these images since they boot into rescue mode.
>
> Please share more insight on how one can locate the entire disk information or the database where the VMs were last running.
>
> I checked the log files and tried to match the images with VM names, but I have been unsuccessful.
>
> Thanks in advance for you reply,
>
> Kind regards,
> Tau
>
>
Hosted-engine disconnected
by Tau Makgaile
Hi,
I have been experiencing a problem with my hosted-engine after increasing
the / partition. The increase went well for a few minutes and then it
disconnected. It started by showing bad health and has since been
unreachable/un-ping-able. It was confusing because the VMs were still
running. I decided to redeploy the engine with the hope that it would pull
the same VMs into the dashboard. That did not give any positive results
until I opted for a redeployment. Things went well until I realized it
would need new gluster mounts in order to go through, which also means it
risks losing information about my VMs, which I have since stopped in order
to allow the redeployment to carry on.
I now need help to export all of the information about my VMs, most
importantly the disk images. I have been trying to get some disk images
from */gluster_bricks/data/data/* with no luck in booting them up after
conversion. I am thinking there might be information missing from these
images since they boot into *rescue mode*.
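If the data volume has sharding enabled (the default in hyperconverged oVirt
setups), that would explain the missing information: a file taken from the
brick path only holds the first shard of a large image. A safer approach
(sketch; <server> is a placeholder for one of the gluster hosts, and the
volume name is assumed to be "data" from the brick path) is to mount the
volume itself and copy the images from there:
mkdir -p /mnt/data
mount -t glusterfs <server>:/data /mnt/data
ls /mnt/data/<storage-domain-uuid>/images/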
Please share more insight on how one can locate the entire disk information
or the database where the VMs were last running.
I checked the log files and tried to match the images with VM names, but I
have been unsuccessful.
Thanks in advance for your reply,
Kind regards,
Tau