Re: [Gluster-users] ACL issue v6.6, v6.7, v7.1, v7.2
by Paolo Margara
Hi,
this is interesting. Does this always happen with gluster 6.6, or only
in certain cases?
I ask because I have two oVirt clusters with gluster, both on v6.6; on
one of them I upgraded from 6.5 to 6.6 just as Strahil did, and I
haven't hit this bug.
When upgrading my clusters I followed exactly the steps reported by
Strahil in the bug report, so something else must be different.
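If it helps to compare the two setups, here is a minimal sketch of what
I would check on both clusters (VOLNAME is a placeholder):

  # packages actually installed on servers and clients
  gluster --version
  rpm -qa | grep -i glusterfs

  # cluster operating version after the upgrade
  gluster volume get all cluster.op-version
  gluster volume get all cluster.max-op-version

  # which client versions are still connected to the bricks
  gluster volume status VOLNAME clients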
Greetings,
Paolo
On 06/02/20 23:30, Christian Reiss wrote:
> Hey,
>
> I hit this bug, too. With disastrous results.
> I second this post.
>
>
> On 06.02.2020 19:59, Strahil Nikolov wrote:
>> Hello List,
>>
>> Recently I upgraded my oVirt + Gluster (v6.5 -> v6.6) and I hit an
>> ACL bug, which forced me to upgrade to v7.0.
>>
>> Another oVirt user also hit the bug when upgrading to v6.7 and he
>> had to rebuild his gluster cluster.
>>
>> Sadly, the fun doesn't stop here. Last week I tried upgrading to
>> v7.2 and the ACL bug hit me again. Downgrading to v7.1 doesn't
>> help, so I downgraded to v7.0 and everything is operational.
>>
>> The bug report for the last issue:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1797099
>>
>> I have 2 questions :
>> 1. What exactly is gluster's ACL check evaluating? Is there an
>> option to force gluster not to support ACLs at all?
>>
>> 2. Are you aware of the issue? That bug was supposed to be fixed a
>> long time ago, yet the facts say otherwise.
>>
>> Thanks for reading this long post.
>>
>> Best Regards,
>> Strahil Nikolov
OVN issues: no connectivity between VMs on same host
by thkam@hua.gr
Hi
I am having some problems setting up an internal network for our oVirt VMs using the OVN external provider.
The ovn-provider was set up during the oVirt engine installation (version 4.3.7.2-1.el7). I then created a new network with the OVN external provider and tested the connectivity using the corresponding button in the oVirt UI. Two of my VMs are Ubuntu 18.04 servers with IPs 10.0.0.101 and 10.0.0.102, using 10.0.0.102 and 10.0.0.101 respectively as gateways, so under normal circumstances they should be able to ping each other.
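For reference, a minimal sketch of the sanity checks I have in mind (this assumes ovn-nbctl is available on the engine, which hosts the OVN central databases, and that the two guests are the ones described above):

  # on the engine: logical switches and their ports, as OVN sees them
  ovn-nbctl show

  # inside one of the guests: plain connectivity test
  ping -c 3 10.0.0.102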
I tried disabling the firewalld service on both the host and the oVirt engine, but nothing changed. However, I noticed something odd in the firewalld status on the host. With firewalld enabled, I get:
[root@ovirt7 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since Πεμ 2020-02-06 06:10:38 UTC; 14s ago
Docs: man:firewalld(1)
Main PID: 4436 (firewalld)
Tasks: 2
CGroup: /system.slice/firewalld.service
└─4436 /usr/bin/python2 -Es /usr/sbin/firewalld --nofork --nopid
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet1 -j libvirt-O-vnet1' failed: Illegal target name 'libvirt-O-vnet1'.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-I-vnet1' failed: Chain 'libvirt-I-vnet1' doesn't exist.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-I-vnet1' failed: Chain 'libvirt-I-vnet1' doesn't exist.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet1' failed: Chain 'libvirt-O-vnet1' doesn't exist.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-O-vnet1' failed: Chain 'libvirt-O-vnet1' doesn't exist.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -E libvirt-P-vnet1 libvirt-O-vnet1' failed: Chain 'libvirt-P-vnet1' doesn't exist.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F I-vnet1-mac' failed: Chain 'I-vnet1-mac' doesn't exist.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X I-vnet1-mac' failed: Chain 'I-vnet1-mac' doesn't exist.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F I-vnet1-arp-mac' failed: Chain 'I-vnet1-arp-mac' doesn't exist.
Φεβ 06 06:10:39 ovirt7.hua.gr firewalld[4436]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X I-vnet1-arp-mac' failed: Chain 'I-vnet1-arp-mac' doesn't exist.
On the ovirt engine I have:
[root@ovirtengine ~]# ovn-sbctl show
Chassis "0e225912-6318-4bc3-9c94-8f2ab937876d"
hostname: "ovirt3.hua.gr"
Encap geneve
ip: "10.100.59.53"
options: {csum="true"}
Chassis "47ff3058-effb-43c2-b9e0-9eaf6c72b1c2"
hostname: "ovirt5.hua.gr"
Encap geneve
ip: "10.100.59.51"
options: {csum="true"}
Chassis "1fcca34f-6017-4882-8c13-4835dad03387"
hostname: localhost
Encap geneve
ip: "10.100.59.55"
options: {csum="true"}
Chassis "25d35968-6c3c-4040-ab85-d881d3d524e4"
hostname: "ovirt4.hua.gr"
Encap geneve
ip: "10.100.59.52"
options: {csum="true"}
Chassis "32697bcc-cc6a-4a59-8424-887d20df2d10"
hostname: "ovirt7.hua.gr"
Encap geneve
ip: "10.100.59.49"
options: {csum="true"}
Port_Binding "ff31a88f-23f0-48fe-a657-3cd24d51f69e"
Port_Binding "01da6ee3-abff-4423-954a-c4abf350e390"
Port_Binding "caa870e1-8b6c-48bd-ac61-bfeda8befd10"
Chassis "f60100e6-0ee5-4472-8095-cc48b5160f50"
hostname: "ovirt6.hua.gr"
Encap geneve
ip: "10.100.59.50"
options: {csum="true"}
Chassis "fa7e6cbe-fa7f-46bc-9760-f581725f60a8"
hostname: "ovirt2.hua.gr"
Encap geneve
ip: "10.100.59.54"
options: {csum="true"}
Chassis "58a9412f-bfad-4c98-9882-d7d006588e0b"
hostname: "ovirt9.hua.gr"
Encap geneve
ip: "10.100.59.47"
options: {csum="true"}
Chassis "09dc1148-8ce5-425d-8f93-8dbf43fd7828"
hostname: "ovirt8.hua.gr"
Encap geneve
ip: "10.100.59.48"
options: {csum="true"}
so I guess tunneling is set up correctly. On the host I see:
[root@ovirt7 ~]# ovs-vsctl show
886fe6e5-13ea-4889-ba35-1ac0a422ca23
Bridge br-int
fail_mode: secure
Port "ovn-1fcca3-0"
Interface "ovn-1fcca3-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.100.59.55"}
Port "ovn-0e2259-0"
Interface "ovn-0e2259-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.100.59.53"}
Port "ovn-fa7e6c-0"
Interface "ovn-fa7e6c-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.100.59.54"}
Port "vnet3"
Interface "vnet3"
Port "vnet0"
Interface "vnet0"
Port "ovn-25d359-0"
Interface "ovn-25d359-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.100.59.52"}
Port br-int
Interface br-int
type: internal
Port "ovn-f60100-0"
Interface "ovn-f60100-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.100.59.50"}
Port "ovn-47ff30-0"
Interface "ovn-47ff30-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.100.59.51"}
Port "ovn-58a941-0"
Interface "ovn-58a941-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.100.59.47"}
Port "vnet2"
Interface "vnet2"
Port "ovn-09dc11-0"
Interface "ovn-09dc11-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.100.59.48"}
ovs_version: "2.11.0"
and:
[root@ovirt7 ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master trunk state UP mode DEFAULT group default qlen 1000
link/ether 00:17:a4:77:00:30 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 00:17:a4:77:00:32 brd ff:ff:ff:ff:ff:ff
19: eno1.59@eno1: <BROADCAST,MULTICAST,ALLMULTI,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovirtmgmt state UP mode DEFAULT group default qlen 1000
link/ether 00:17:a4:77:00:30 brd ff:ff:ff:ff:ff:ff
20: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 2a:92:56:8b:36:b8 brd ff:ff:ff:ff:ff:ff
21: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether c6:70:c8:34:86:85 brd ff:ff:ff:ff:ff:ff
23: br-int: <BROADCAST,MULTICAST> mtu 1442 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether da:49:e1:b0:b2:4e brd ff:ff:ff:ff:ff:ff
25: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:17:a4:77:00:30 brd ff:ff:ff:ff:ff:ff
26: eno1.60@eno1: <BROADCAST,MULTICAST,ALLMULTI,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue master vlan60 state UP mode DEFAULT group default qlen 1000
link/ether 00:17:a4:77:00:30 brd ff:ff:ff:ff:ff:ff
27: vlan60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:17:a4:77:00:30 brd ff:ff:ff:ff:ff:ff
28: eno1.61@eno1: <BROADCAST,MULTICAST,ALLMULTI,PROMISC,UP,LOWER_UP> mtu 1500 qdisc noqueue master vlan61 state UP mode DEFAULT group default qlen 1000
link/ether 00:17:a4:77:00:30 brd ff:ff:ff:ff:ff:ff
29: vlan61: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:17:a4:77:00:30 brd ff:ff:ff:ff:ff:ff
30: trunk: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:17:a4:77:00:30 brd ff:ff:ff:ff:ff:ff
39: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc mq master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
link/ether fe:6f:2b:f7:00:17 brd ff:ff:ff:ff:ff:ff
40: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc mq master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
link/ether fe:6f:2b:f7:00:18 brd ff:ff:ff:ff:ff:ff
41: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovirtmgmt state UNKNOWN mode DEFAULT group default qlen 1000
link/ether fe:6f:2b:f7:00:0f brd ff:ff:ff:ff:ff:ff
42: vnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc mq master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
link/ether fe:6f:2b:f7:00:16 brd ff:ff:ff:ff:ff:ff
44: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 06:6c:84:05:bf:6b brd ff:ff:ff:ff:ff:ff
I have not found anything useful in the log files.
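In case it helps, here is a rough sketch of the additional checks I plan to try on the host (assuming the usual OVS/OVN tooling; vnet0 is one of the OVN-attached interfaces from the output above, and the log path may differ on other builds):

  # is the vNIC registered with the OVN port id oVirt created?
  ovs-vsctl get Interface vnet0 external_ids

  # are OpenFlow rules actually programmed on the integration bridge?
  ovs-ofctl -O OpenFlow13 dump-flows br-int | head -n 20

  # ovn-controller log on the host
  tail -n 50 /var/log/openvswitch/ovn-controller.log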
Does anyone have any thoughts?
Thanks!
Thomas
Websocket-proxy not working after upgrade to 4.3
by nicolas@devels.es
Hi,
We recently upgraded to 4.3.8 and everything is working fine except the
VNC Console (Browser).
Once I click on "VNC Console (Browser)" on any machine from the VM
Portal, I get a message like this:
Disconnected from Console
Cannot connect to websocket proxy server. Please check your websocket
proxy certificate or ask your administrator for help. For further
information please refer to the console manual.
Press the 'Connect' button to reconnect the console.
The thing is that everything seems OK to me, and I cannot find any
further error logging about it.
/etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf content is:
PROXY_PORT=6100
SSL_CERTIFICATE=/etc/ssl/certs/fqdn.combined.cert
SSL_KEY=/etc/ssl/private/fqdn.key
FORCE_DATA_VERIFICATION=False
CERT_FOR_DATA_VERIFICATION=/etc/pki/ovirt-engine/certs/engine.cer
SSL_ONLY=True
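For what it's worth, a minimal sketch of how the certificate actually served on the proxy port can be checked from a client machine (fqdn is a placeholder for the engine host name):

  # connect to the websocket proxy port and print the presented certificate
  echo | openssl s_client -connect fqdn:6100 -servername fqdn 2>/dev/null \
    | openssl x509 -noout -subject -issuer -dates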
On a "status" command on ovirt-websocket-proxy I just see:
feb 05 12:23:22 fqdn systemd[1]: Starting oVirt Engine websockets
proxy...
feb 05 12:23:22 fqdn systemd[1]: Started oVirt Engine websockets
proxy.
feb 05 12:23:22 fqdn ovirt-websocket-proxy.py[3314]:
ovirt-websocket-proxy[3314] INFO daemonContext:434 Using the following
ciphers: HIGH:!aNULL
feb 05 12:23:22 fqdn ovirt-websocket-proxy.py[3314]:
ovirt-websocket-proxy[3314] INFO daemonContext:438 Minimum SSL version
requested: TLSv1.2
feb 05 12:23:22 fqdn ovirt-websocket-proxy.py[3314]:
ovirt-websocket-proxy[3314] INFO msg:887 WebSocket server settings:
feb 05 12:23:22 fqdn ovirt-websocket-proxy.py[3314]:
ovirt-websocket-proxy[3314] INFO msg:887 - Listen on *:6100
feb 05 12:23:22 fqdn ovirt-websocket-proxy.py[3314]:
ovirt-websocket-proxy[3314] INFO msg:887 - Flash security policy
server
feb 05 12:23:22 fqdn ovirt-websocket-proxy.py[3314]:
ovirt-websocket-proxy[3314] INFO msg:887 - SSL/TLS support
feb 05 12:23:22 fqdn ovirt-websocket-proxy.py[3314]:
ovirt-websocket-proxy[3314] INFO msg:887 - Deny non-SSL/TLS
connections
feb 05 12:23:22 fqdn ovirt-websocket-proxy.py[3314]:
ovirt-websocket-proxy[3314] INFO msg:887 - proxying from *:6100 to
targets generated by str
In ovirt-engine.log, I only see this information:
2020-02-05 12:29:10,085Z INFO
[org.ovirt.engine.core.bll.SetVmTicketCommand] (default task-110)
[68218d5b] Running command: SetVmTicketCommand internal: false. Entities
affected : ID: 5bf9a0bb-da18-4d07-87da-759c0b045e28 Type: VMAction
group CONNECT_TO_VM with role type USER
2020-02-05 12:29:10,095Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
(default task-110) [68218d5b] START, SetVmTicketVDSCommand(HostName =
kvmr01.fqdn,
SetVmTicketVDSCommandParameters:{hostId='1828d0dc-e953-4d6a-8a95-528bb7aa849a',
vmId='5bf9a0bb-da18-4d07-87da-759c0b045e28', protocol='VNC',
ticket='oVoKEtgmDKnM', validTime='120', userName='user',
userId='66a7a37f-d804-4192-9734-93f01a95dd98',
disconnectAction='LOCK_SCREEN'}), log id: 596fbfb9
2020-02-05 12:29:10,167Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
(default task-110) [68218d5b] FINISH, SetVmTicketVDSCommand, return: ,
log id: 596fbfb9
2020-02-05 12:29:10,195Z INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-110) [68218d5b] EVENT_ID: VM_SET_TICKET(164), User
user@domain-authz initiated console session for VM user.fqdn
2020-02-05 12:29:10,308Z INFO
[org.ovirt.engine.core.bll.SetVmTicketCommand] (default task-110)
[097f6518-5f87-4947-aee6-e76c9b740bcd] Running command:
SetVmTicketCommand internal: false. Entities affected : ID:
5bf9a0bb-da18-4d07-87da-759c0b045e28 Type: VMAction group CONNECT_TO_VM
with role type USER
2020-02-05 12:29:10,316Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
(default task-110) [097f6518-5f87-4947-aee6-e76c9b740bcd] START,
SetVmTicketVDSCommand(HostName = kvmr01.fqdn,
SetVmTicketVDSCommandParameters:{hostId='1828d0dc-e953-4d6a-8a95-528bb7aa849a',
vmId='5bf9a0bb-da18-4d07-87da-759c0b045e28', protocol='VNC',
ticket='A7PQWaXupvbZ', validTime='7200', userName='user',
userId='66a7a37f-d804-4192-9734-93f01a95dd98',
disconnectAction='LOCK_SCREEN'}), log id: 71e5165c
2020-02-05 12:29:10,387Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
(default task-110) [097f6518-5f87-4947-aee6-e76c9b740bcd] FINISH,
SetVmTicketVDSCommand, return: , log id: 71e5165c
2020-02-05 12:29:10,408Z INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-110) [097f6518-5f87-4947-aee6-e76c9b740bcd] EVENT_ID:
VM_SET_TICKET(164), User user@domain-authz initiated console session for
VM user.fqdn
Please, any tip on how to debug this? I cannot seem to find the reason
for this.
Thanks.
Backup Solution
by Christian Reiss
Hey folks,
Running a 3-way HCI (again (sigh)) on gluster. The _inside_ of the VMs
is backed up separately using Bareos on an hourly basis, so files are
recoverable with a worst case of 59 minutes of data loss.
Now, on the outside I thought of doing gluster snapshots and then
syncing those .snap dirs away to a remote 10gig-connected machine on a
weekly-or-so basis. As the contents of the snaps are the oVirt images
(the entire DC), I could set gluster up again, copy those files back
into gluster, and be done with it.
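For reference, the rough flow I have in mind looks like this (a sketch
only; VOLNAME, the snapshot name, the mount point and the remote host
are placeholders, and gluster volume snapshots require the bricks to
sit on thin-provisioned LVM):

  # make activated snapshots visible under .snaps on the fuse mount
  gluster volume set VOLNAME features.uss enable

  # create and activate a snapshot of the volume
  gluster snapshot create weekly-backup VOLNAME no-timestamp
  gluster snapshot activate weekly-backup

  # sync the snapshot contents to the remote 10gig box
  rsync -aHAX /mnt/VOLNAME/.snaps/weekly-backup/ backuphost:/backup/VOLNAME/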
Now some questions, if I may:
- If the hosts remain intact but gluster dies, I simply set up gluster,
stop the oVirt engine (separate standalone hardware), copy everything
back and start the oVirt engine again. All disks are accessible again
(tested). The bricks are marked as down (new bricks, same name). There
is a "reset brick" button that made the bricks come back online again.
What _exactly_ does it do? Does it only reset the brick info in oVirt,
or does it copy all the data over from another node and really, really
reset the brick?
- If the hosts remain intact, but the engine dies: Can I re-attach the
engine to the running cluster?
- If the hosts and the engine die and everything needs to be set up
again: would it be possible to run the setup wizard(s) again up to a
working state, then copy the disk images into the new gluster DC data
dir? Would oVirt rescan the dir for newly found VMs?
- If _one_ host dies, but the other 2 and the engine remain online:
What's the oVirt way of setting the failed one up again? Reinstalling
the node and then what? Of all the cases above this is the most likely
one.
Having had to reinstall the entire cluster three times already scares
me. It was always gluster related.
Again thank you community for your great efforts!
--
with kind regards,
mit freundlichen Gruessen,
Christian Reiss
Emergency :/ No VMs starting
by Christian Reiss
Hey folks,
oh Jesus. 3-Way HCI. Gluster w/o any issues:
[root@node01:/var/log/glusterfs] # gluster vol info ssd_storage
Volume Name: ssd_storage
Type: Replicate
Volume ID: d84ec99a-5db9-49c6-aab4-c7481a1dc57b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node01.company.com:/gluster_bricks/ssd_storage/ssd_storage
Brick2: node02.company.com:/gluster_bricks/ssd_storage/ssd_storage
Brick3: node03.company.com:/gluster_bricks/ssd_storage/ssd_storage
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.strict-o-direct: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: off
client.event-threads: 4
server.event-threads: 4
network.ping-timeout: 30
storage.owner-uid: 36
storage.owner-gid: 36
cluster.granular-entry-heal: enab
[root@node01:/var/log/glusterfs] # gluster vol status ssd_storage
Status of volume: ssd_storage
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node01.company.com:/gluster_br
icks/ssd_storage/ssd_storage 49152 0 Y
63488
Brick node02.company.com:/gluster_br
icks/ssd_storage/ssd_storage 49152 0 Y
18860
Brick node03.company.com:/gluster_br
icks/ssd_storage/ssd_storage 49152 0 Y
15262
Self-heal Daemon on localhost N/A N/A Y
63511
Self-heal Daemon on node03.dc-dus.dalason.n
et N/A N/A Y
15285
Self-heal Daemon on 10.100.200.12 N/A N/A Y
18883
Task Status of Volume ssd_storage
------------------------------------------------------------------------------
There are no active volume tasks
[root@node01:/var/log/glusterfs] # gluster vol heal ssd_storage info
Brick node01.company.com:/gluster_bricks/ssd_storage/ssd_storage
Status: Connected
Number of entries: 0
Brick node02.company.com:/gluster_bricks/ssd_storage/ssd_storage
Status: Connected
Number of entries: 0
Brick node03.company.com:/gluster_bricks/ssd_storage/ssd_storage
Status: Connected
Number of entries: 0
And everything is mounted where it's supposed to be. But no VMs start,
due to an IO error. I checked the md5 of a gluster-based file (a CentOS
ISO) against a local copy, and it matches. One VM at one point managed
to start, but failed on subsequent starts. The data/disks seem okay.
/var/log/glusterfs/"rhev-data-center-mnt-glusterSD-node01.company.com:_ssd__storage.log-20200202"
has entries like:
[2020-02-01 23:15:15.449902] W [MSGID: 114031]
[client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-ssd_storage-client-1:
remote operation failed. Path:
/.shard/86da0289-f74f-4200-9284-678e7bd76195.1405
(00000000-0000-0000-0000-000000000000) [Permission denied]
[2020-02-01 23:15:15.484363] W [MSGID: 114031]
[client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-ssd_storage-client-1:
remote operation failed. Path:
/.shard/86da0289-f74f-4200-9284-678e7bd76195.1400
(00000000-0000-0000-0000-000000000000) [Permission denied]
Before this happened we had put one host into maintenance mode; it all
started during the migration.
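In case it is useful, here is the minimal set of checks we can run
directly on the bricks (a sketch only; the shard name is taken from the
client log above, and per the storage.owner-uid/gid options the files
should be owned by 36:36):

  # ownership and mode of the shards named in the client log, on each brick
  ls -ln /gluster_bricks/ssd_storage/ssd_storage/.shard/86da0289-f74f-4200-9284-678e7bd76195.* | head

  # extended attributes of one shard (run on every brick and compare)
  getfattr -d -m . -e hex /gluster_bricks/ssd_storage/ssd_storage/.shard/86da0289-f74f-4200-9284-678e7bd76195.1405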
Any help? We're sweating blood here.
--
with kind regards,
mit freundlichen Gruessen,
Christian Reiss
Huge increase of CPU load on hosted engine VM after upgrade to 4.3.8 and cluster compatibility version
by Florian Schmid
Hi,
Yesterday I upgraded one of our oVirt environments from 4.2.8 to 4.3.8.
We have a hosted engine, 25 hosts and about 150 VMs running there.
After upgrading the cluster compatibility version from 4.2 to 4.3, I have seen a huge increase of CPU load on the engine VM.
The engine had 4 cores (16GB mem) and never had load issues, but after the upgrade, the system has a load of 6 and more. I have increased the CPUs to 8, because working in the UI was impossible.
I have noticed that the load always increases when I open the VM UI tab, where all VMs now show the orange triangle with the pending change: Custom compatibility version.
The problem I have is that I can't reboot all the VMs right away; that will probably take the next 3 or 4 weeks.
Working with the UI is now quite problematic because of the slowness.
The engine VM has already been rebooted, but that didn't solve the problem.
What logs do you need and is there a workaround available? I have some more oVirt environments to upgrade, all with even more VMs.
BR Florian
oVirt MAC Pool question
by Vrgotic, Marko
Dear oVirt,
While investigating DHCP & DDNS collision issues between two VM servers from different oVirt clusters, I noticed that oVirt assigns the same default MAC range to each of its managed clusters.
Question 1: Does oVirt-Engine keep a separate place in the DB or … for the MAC addresses assigned per cluster, or does it keep them all in the same place?
Question 2: Would there be a harmful effect on existing VMs if the default MAC pool were changed?
Additional info:
Self Hosted ovirt-engine – 4.3.4 and 4.3.7
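For context, a rough sketch of how the configured pools can be inspected via the REST API (the engine URL and credentials are placeholders, and I am assuming the cluster object links to its MAC pool):

  # list the MAC address pools defined on the engine
  curl -s -k -u admin@internal:PASSWORD -H 'Accept: application/xml' \
    https://engine.example.com/ovirt-engine/api/macpools

  # show a cluster, including the mac_pool it references (CLUSTER_ID is a placeholder)
  curl -s -k -u admin@internal:PASSWORD -H 'Accept: application/xml' \
    https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID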
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
ActiveVideo
Deploy Hosted Engine fails at "Set VLAN ID at datacenter level"
by Guillaume Pavese
Hi,
Trying to deploy an oVirt 4.3-stable Hosted Engine with Cockpit.
This fails with the following:
[ INFO ] TASK [ovirt.hosted_engine_setup : Set VLAN ID at datacenter level]
[ ERROR ] Exception: Entity 'None' was not found.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Entity
'None' was not found."}
Any idea?
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
Recover VM if engine down
by matteo fedeli
Hi, is it possible to recover a VM if the engine is damaged? The VM is on a data storage domain.
insufficient web panel user permissions to create disk
by Pavel Nakonechnyi
Dear community,
I am trying to achieve the following:
- create a regular user in oVirt environment; [DONE]
- grant full access to a particular VM; [DONE]
- grant privileges to create new VMs; [NOT OK]
What I observe currently:
- the user sees his VM on the "VM Portal" page and can edit its settings, this is fine;
- the user cannot suspend the VM, which produces the following error in engine.log:
2020-02-04 13:48:25,473Z INFO [org.ovirt.engine.core.bll.HibernateVmCommand] (default task-95) [d43167ef-894f-4281-9100-578bac65a3bb] Running command: HibernateVmCommand internal: false. Entities affected : ID: 85e560ed-a010-4f95-b4e4-43d2e741b51e Type: VMAction group HIBERNATE_VM with role type USER
2020-02-04 13:48:25,486Z INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-95) [d43167ef-894f-4281-9100-578bac65a3bb] Running command: AddDiskCommand internal: true. Entities affected : ID: 0a2174b2-1e22-41e7-b3c1-48ff22d6486e Type: StorageAction group CREATE_DISK with role type USER
2020-02-04 13:48:25,491Z WARN [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (default task-95) [d43167ef-894f-4281-9100-578bac65a3bb] Validation of action 'AddImageFromScratch' failed for user pavel@internal-authz. Reasons: VAR__TYPE__STORAGE__DOMAIN,NON_ADMIN_USER_NOT_AUTHORIZED_TO_PERFORM_ACTION_ON_HE
2020-02-04 13:48:25,496Z ERROR [org.ovirt.engine.core.bll.HibernateVmCommand] (default task-95) [d43167ef-894f-4281-9100-578bac65a3bb] Command 'org.ovirt.engine.core.bll.HibernateVmCommand' failed: EngineException: Failed to create disk! vm-pavel_hibernation_memory (Failed with error ENGINE and code 5001)
A similar error can be found here:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/HC33LVIWZPPO...
What permissions have to be granted to a user to be able to create disks?
oVirt engine package version: 4.3.7.2-1.el7
---
WBR, Pavel