Hello all,
I wasn't sure where to post or email this (I promise I looked around), but the following page in the manual appears to have HTML issues towards the end:
https://www.ovirt.org/documentation/admin-guide/chap-Pools.html
Should I notify someone else that it ought to be fixed?
Hi,
I have two ESXi hosts managed by VMware vCenter Server. I want to create a similar infrastructure with oVirt. I know that oVirt is similar to VMware vCenter Server, but I am not sure what to replace the ESXi hosts with in an oVirt environment.
I am looking to build oVirt with a Self-Hosted Engine. It would be a great help if someone could guide me through building this.
25 Jan '19
Dear Darrell,
I found the issue, and now I can reach the maximum of the network with a FUSE client. Here is a short overview:
1. I noticed that working with a new gluster volume reaches my network speed - I was quite excited.
2. Then I destroyed my gluster volume, created a new one, and started adding features from oVirt.
Once I added features.shard on -> I hit the same performance as before. Increasing the shard size to 16MB didn't help at all.
For my case, where I have 2 virtualization hosts with a single data gluster volume, sharding is not necessary, but for larger setups it will be a problem.
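For readers who want to reproduce the shard experiment: the shard block size is a per-volume option that can be inspected and changed with the gluster CLI. A hedged sketch, assuming the volume name `data` from this thread (the 64MB value is only an illustration, not a recommendation):

```shell
# Show the current shard block size for the volume
gluster volume get data features.shard-block-size

# Change it; note this only affects files created after the change -
# existing sharded files keep the shard size they were written with
gluster volume set data features.shard-block-size 64MB
```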
As this looks to me like a bug - can someone tell me where I can report it?
Thanks to all who guided me in this journey of GlusterFS! I have learned so much, as my prior knowledge was only in Ceph.
Best Regards,
Strahil Nikolov
On Thursday, January 24, 2019, 17:53:50 GMT+2, Darrell Budic <budic(a)onholyground.com> wrote:
Strahil-
The fuse client is what it is; it's limited by operating in user land and waiting for the gluster servers to acknowledge all the writes. I noted you're using oVirt - you should look into enabling the libgfapi engine setting to run your VMs with libgfapi natively. You can't test directly from the host with that, but you can run your tests inside the VMs. I saw significant throughput and latency improvements that way. It's still somewhat beta, so you'll probably need to search the ovirt-users mailing list to find info on enabling it.
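For readers following along: the libgfapi setting Darrell mentions is toggled via engine-config on the engine host. A hedged sketch - the cluster-version argument and exact key spelling should be checked against your oVirt release before use:

```shell
# On the oVirt engine host: enable libgfapi disk access for a cluster level
engine-config -s LibgfApiSupported=true --cver=4.2

# Restart the engine so the change takes effect
systemctl restart ovirt-engine

# Running VMs need a full shutdown/start (not just reboot) to pick up
# the new disk access method
```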
Good luck!
On Jan 24, 2019, at 4:32 AM, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
Dear Amar, Community,
it seems the issue is in the fuse client itself.
Here is the latest update:
1. I have added the following:
server.event-threads: 4
client.event-threads: 4
performance.stat-prefetch: on
performance.strict-o-direct: off
Results: no change
2. Allowed NFS and connected ovirt1 to the gluster volume:
nfs.disable: off
Results: Drastic improvement in performance as follows:
[root@ovirt1 data]# dd if=/dev/zero of=largeio bs=1M count=5000 status=progress
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 53.0443 s, 98.8 MB/s
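The option changes listed in the update above map onto per-volume gluster CLI calls; a hedged sketch for the volume named `data` used throughout this thread:

```shell
# Step 1: event-thread and performance tuning (no change observed)
gluster volume set data server.event-threads 4
gluster volume set data client.event-threads 4
gluster volume set data performance.stat-prefetch on
gluster volume set data performance.strict-o-direct off

# Step 2: re-enable gluster's built-in NFS server (drastic improvement observed)
gluster volume set data nfs.disable off
```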
So I would be happy if anyone could guide me in fixing the situation, as the fuse client is the best way to use glusterfs, and it seems the glusterfs server is not the guilty one.
Thanks in advance for your guidance. I have learned so much.
Best Regards,
Strahil Nikolov
From: Strahil <hunter86_bg(a)yahoo.com>
To: Amar Tumballi Suryanarayan <atumball(a)redhat.com>
Cc: Gluster-users <gluster-users(a)gluster.org>
Sent: Wednesday, January 23, 2019, 18:44
Subject: Re: [Gluster-users] Gluster performance issues - need advise
Dear Amar,
Thanks for your email.
Actually my concerns were on both topics. Would you recommend any perf options that would be suitable?
After mentioning the network usage, I just checked it, and it seems that during the test session, ovirt1 (both client and host) is using no more than 455 Mbit/s, which is half the network bandwidth.
I'm still in the middle of nowhere, so any ideas are welcome.
Best Regards,
Strahil Nikolov
On Jan 23, 2019 17:49, Amar Tumballi Suryanarayan <atumball(a)redhat.com> wrote:
I didn't understand the issue properly; most likely I missed something.
Are you concerned that the performance is 49MB/s with and without perf options? Or are you expecting it to be 123MB/s, since over the n/w you get that speed?
If it is the first problem, then you actually have 'performance.write-behind on' in both option sets, and it is the only perf xlator which comes into action during the test you ran.
If it is the second, then please be informed that gluster does client-side replication, which means the n/w bandwidth is split in half for write operations (like write(), creat() etc.), so the number you are getting is almost the maximum with 1GbE.
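Amar's point can be checked with quick arithmetic: with client-side replication on a replica-2 + arbiter volume, each data block leaves the client twice (the arbiter only receives metadata), so usable write throughput is roughly half the raw link speed. A sketch using the ~123 MB/s figure from the nc test quoted below:

```python
# Raw single-stream TCP throughput measured with nc in this thread (MB/s)
link_throughput = 123.0

# Replica 2 + arbiter: the client sends each data block to two bricks,
# so write bandwidth over the single 1GbE NIC is split roughly in half
data_copies = 2
expected_write_speed = link_throughput / data_copies

print(f"{expected_write_speed:.1f} MB/s")  # 61.5 MB/s, close to the observed ~60 MB/s
```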
Regards,
Amar
On Wed, Jan 23, 2019 at 8:38 PM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
Hello Community,
recently I have built a new lab based on oVirt and CentOS 7.
During deployment I had some hiccups, but now the engine is up and running - but gluster is causing me trouble.
Symptoms: Slow VM install from DVD, poor write performance. The latter has been tested via:
dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_data bs=1M count=1000 status=progress
The reported speed is 60MB/s which is way too low for my setup.
My lab design:
https://drive.google.com/file/d/1SiW21ASPXHRAEuE_jZ50R3FoO-NcnFqT/view?usp=…
Gluster version is 3.12.15
So far I have done:
1. Added 'server.allow-insecure on' (with 'option rpc-auth-allow-insecure on' in glusterd.vol)
Volume info after that change:
Volume Name: data
Type: Replicate
Volume ID: 9b06a1e9-8102-4cd7-bc56-84960a1efaa2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.localdomain:/gluster_bricks/data/data
Brick2: ovirt2.localdomain:/gluster_bricks/data/data
Brick3: ovirt3.localdomain:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
server.allow-insecure: on
Seems no positive or negative effect so far.
2. Tested with tmpfs on all bricks -> ovirt1 mounted gluster volume -> max 60MB/s (bs=1M without 'oflag=direct')
[root@ovirt1 data]# dd if=/dev/zero of=large_io bs=1M count=4000 status=progress
4177526784 bytes (4.2 GB) copied, 70.843409 s, 59.0 MB/s
4000+0 records in
4000+0 records out
4194304000 bytes (4.2 GB) copied, 71.1407 s, 59.0 MB/s
[root@ovirt1 data]# rm -f large_io
[root@ovirt1 data]# gluster volume profile data info
Brick: ovirt1.localdomain:/gluster_bricks/data/data
---------------------------------------------------
Cumulative Stats:
Block Size: 131072b+
No. of Reads: 8
No. of Writes: 44968
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 3 FORGET
0.00 0.00 us 0.00 us 0.00 us 35 RELEASE
0.00 0.00 us 0.00 us 0.00 us 28 RELEASEDIR
0.00 78.00 us 78.00 us 78.00 us 1 FSTAT
0.00 35.67 us 26.00 us 73.00 us 6 FLUSH
0.00 324.00 us 324.00 us 324.00 us 1 XATTROP
0.00 45.80 us 38.00 us 54.00 us 10 STAT
0.00 227.67 us 216.00 us 242.00 us 3 CREATE
0.00 113.38 us 68.00 us 381.00 us 8 READ
0.00 39.82 us 1.00 us 148.00 us 28 OPENDIR
0.00 67.54 us 10.00 us 283.00 us 24 GETXATTR
0.00 59.97 us 45.00 us 113.00 us 32 OPEN
0.00 24.41 us 13.00 us 89.00 us 161 INODELK
0.00 43.43 us 28.00 us 214.00 us 93 STATFS
0.00 246.35 us 11.00 us 1155.00 us 20 READDIR
0.00 283.00 us 233.00 us 353.00 us 18 READDIRP
0.00 153.23 us 122.00 us 259.00 us 87 MKNOD
0.01 99.77 us 10.00 us 258.00 us 442 LOOKUP
0.31 49.22 us 27.00 us 540.00 us 45620 FXATTROP
0.77 124.24 us 87.00 us 604.00 us 44968 WRITE
0.93 15767.71 us 15.00 us 305833.00 us 431 ENTRYLK
1.99 160711.39 us 3332.00 us 406037.00 us 90 UNLINK
96.00 5167.82 us 18.00 us 55972.00 us 135349 FINODELK
Duration: 380 seconds
Data Read: 1048576 bytes
Data Written: 5894045696 bytes
Interval 0 Stats:
Block Size: 131072b+
No. of Reads: 8
No. of Writes: 44968
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 3 FORGET
0.00 0.00 us 0.00 us 0.00 us 35 RELEASE
0.00 0.00 us 0.00 us 0.00 us 28 RELEASEDIR
0.00 78.00 us 78.00 us 78.00 us 1 FSTAT
0.00 35.67 us 26.00 us 73.00 us 6 FLUSH
0.00 324.00 us 324.00 us 324.00 us 1 XATTROP
0.00 45.80 us 38.00 us 54.00 us 10 STAT
0.00 227.67 us 216.00 us 242.00 us 3 CREATE
0.00 113.38 us 68.00 us 381.00 us 8 READ
0.00 39.82 us 1.00 us 148.00 us 28 OPENDIR
0.00 67.54 us 10.00 us 283.00 us 24 GETXATTR
0.00 59.97 us 45.00 us 113.00 us 32 OPEN
0.00 24.41 us 13.00 us 89.00 us 161 INODELK
0.00 43.43 us 28.00 us 214.00 us 93 STATFS
0.00 246.35 us 11.00 us 1155.00 us 20 READDIR
0.00 283.00 us 233.00 us 353.00 us 18 READDIRP
0.00 153.23 us 122.00 us 259.00 us 87 MKNOD
0.01 99.77 us 10.00 us 258.00 us 442 LOOKUP
0.31 49.22 us 27.00 us 540.00 us 45620 FXATTROP
0.77 124.24 us 87.00 us 604.00 us 44968 WRITE
0.93 15767.71 us 15.00 us 305833.00 us 431 ENTRYLK
1.99 160711.39 us 3332.00 us 406037.00 us 90 UNLINK
96.00 5167.82 us 18.00 us 55972.00 us 135349 FINODELK
Duration: 380 seconds
Data Read: 1048576 bytes
Data Written: 5894045696 bytes
Brick: ovirt3.localdomain:/gluster_bricks/data/data
---------------------------------------------------
Cumulative Stats:
Block Size: 1b+
No. of Reads: 0
No. of Writes: 39328
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 2 FORGET
0.00 0.00 us 0.00 us 0.00 us 12 RELEASE
0.00 0.00 us 0.00 us 0.00 us 17 RELEASEDIR
0.00 101.00 us 101.00 us 101.00 us 1 FSTAT
0.00 51.50 us 20.00 us 81.00 us 4 FLUSH
0.01 219.50 us 188.00 us 251.00 us 2 CREATE
0.01 43.45 us 11.00 us 90.00 us 11 GETXATTR
0.01 62.30 us 38.00 us 119.00 us 10 OPEN
0.01 50.59 us 1.00 us 102.00 us 17 OPENDIR
0.01 24.60 us 12.00 us 64.00 us 40 INODELK
0.02 176.30 us 10.00 us 765.00 us 10 READDIR
0.07 63.08 us 39.00 us 133.00 us 78 UNLINK
0.13 27.35 us 10.00 us 91.00 us 333 ENTRYLK
0.13 126.89 us 99.00 us 179.00 us 76 MKNOD
0.42 116.70 us 8.00 us 8661.00 us 261 LOOKUP
28.73 51.79 us 22.00 us 2574.00 us 39822 FXATTROP
29.52 53.87 us 16.00 us 3290.00 us 39328 WRITE
40.92 24.71 us 10.00 us 3224.00 us 118864 FINODELK
Duration: 189 seconds
Data Read: 0 bytes
Data Written: 39328 bytes
Interval 0 Stats:
Block Size: 1b+
No. of Reads: 0
No. of Writes: 39328
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 2 FORGET
0.00 0.00 us 0.00 us 0.00 us 12 RELEASE
0.00 0.00 us 0.00 us 0.00 us 17 RELEASEDIR
0.00 101.00 us 101.00 us 101.00 us 1 FSTAT
0.00 51.50 us 20.00 us 81.00 us 4 FLUSH
0.01 219.50 us 188.00 us 251.00 us 2 CREATE
0.01 43.45 us 11.00 us 90.00 us 11 GETXATTR
0.01 62.30 us 38.00 us 119.00 us 10 OPEN
0.01 50.59 us 1.00 us 102.00 us 17 OPENDIR
0.01 24.60 us 12.00 us 64.00 us 40 INODELK
0.02 176.30 us 10.00 us 765.00 us 10 READDIR
0.07 63.08 us 39.00 us 133.00 us 78 UNLINK
0.13 27.35 us 10.00 us 91.00 us 333 ENTRYLK
0.13 126.89 us 99.00 us 179.00 us 76 MKNOD
0.42 116.70 us 8.00 us 8661.00 us 261 LOOKUP
28.73 51.79 us 22.00 us 2574.00 us 39822 FXATTROP
29.52 53.87 us 16.00 us 3290.00 us 39328 WRITE
40.92 24.71 us 10.00 us 3224.00 us 118864 FINODELK
Duration: 189 seconds
Data Read: 0 bytes
Data Written: 39328 bytes
Brick: ovirt2.localdomain:/gluster_bricks/data/data
---------------------------------------------------
Cumulative Stats:
Block Size: 512b+ 131072b+
No. of Reads: 0 0
No. of Writes: 36 76758
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 6 FORGET
0.00 0.00 us 0.00 us 0.00 us 87 RELEASE
0.00 0.00 us 0.00 us 0.00 us 96 RELEASEDIR
0.00 100.50 us 80.00 us 121.00 us 2 REMOVEXATTR
0.00 101.00 us 101.00 us 101.00 us 2 SETXATTR
0.00 36.18 us 22.00 us 62.00 us 11 FLUSH
0.00 57.44 us 42.00 us 77.00 us 9 FTRUNCATE
0.00 82.56 us 59.00 us 138.00 us 9 FSTAT
0.00 89.42 us 67.00 us 161.00 us 12 SETATTR
0.00 272.40 us 235.00 us 296.00 us 5 CREATE
0.01 154.28 us 88.00 us 320.00 us 18 XATTROP
0.01 45.29 us 1.00 us 319.00 us 96 OPENDIR
0.01 86.69 us 30.00 us 379.00 us 62 STAT
0.01 64.30 us 47.00 us 169.00 us 84 OPEN
0.02 107.34 us 23.00 us 273.00 us 73 READDIRP
0.02 4688.00 us 86.00 us 9290.00 us 2 TRUNCATE
0.02 59.29 us 13.00 us 394.00 us 165 GETXATTR
0.03 128.51 us 27.00 us 338.00 us 96 FSYNC
0.03 240.75 us 14.00 us 1943.00 us 52 READDIR
0.04 65.59 us 26.00 us 293.00 us 279 STATFS
0.06 180.77 us 118.00 us 306.00 us 148 MKNOD
0.14 37.98 us 17.00 us 192.00 us 1598 INODELK
0.67 91.68 us 12.00 us 1141.00 us 3186 LOOKUP
10.10 55.92 us 28.00 us 1658.00 us 78608 FXATTROP
11.89 6814.76 us 18.00 us 301246.00 us 760 ENTRYLK
19.44 36.55 us 14.00 us 2353.00 us 231535 FINODELK
25.21 142.92 us 62.00 us 593.00 us 76794 WRITE
32.28 91283.68 us 28.00 us 316658.00 us 154 UNLINK
Duration: 1206 seconds
Data Read: 0 bytes
Data Written: 10060843008 bytes
Interval 0 Stats:
Block Size: 512b+ 131072b+
No. of Reads: 0 0
No. of Writes: 36 76758
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 6 FORGET
0.00 0.00 us 0.00 us 0.00 us 87 RELEASE
0.00 0.00 us 0.00 us 0.00 us 96 RELEASEDIR
0.00 100.50 us 80.00 us 121.00 us 2 REMOVEXATTR
0.00 101.00 us 101.00 us 101.00 us 2 SETXATTR
0.00 36.18 us 22.00 us 62.00 us 11 FLUSH
0.00 57.44 us 42.00 us 77.00 us 9 FTRUNCATE
0.00 82.56 us 59.00 us 138.00 us 9 FSTAT
0.00 89.42 us 67.00 us 161.00 us 12 SETATTR
0.00 272.40 us 235.00 us 296.00 us 5 CREATE
0.01 154.28 us 88.00 us 320.00 us 18 XATTROP
0.01 45.29 us 1.00 us 319.00 us 96 OPENDIR
0.01 86.69 us 30.00 us 379.00 us 62 STAT
0.01 64.30 us 47.00 us 169.00 us 84 OPEN
0.02 107.34 us 23.00 us 273.00 us 73 READDIRP
0.02 4688.00 us 86.00 us 9290.00 us 2 TRUNCATE
0.02 59.29 us 13.00 us 394.00 us 165 GETXATTR
0.03 128.51 us 27.00 us 338.00 us 96 FSYNC
0.03 240.75 us 14.00 us 1943.00 us 52 READDIR
0.04 65.59 us 26.00 us 293.00 us 279 STATFS
0.06 180.77 us 118.00 us 306.00 us 148 MKNOD
0.14 37.98 us 17.00 us 192.00 us 1598 INODELK
0.67 91.66 us 12.00 us 1141.00 us 3186 LOOKUP
10.10 55.92 us 28.00 us 1658.00 us 78608 FXATTROP
11.89 6814.76 us 18.00 us 301246.00 us 760 ENTRYLK
19.44 36.55 us 14.00 us 2353.00 us 231535 FINODELK
25.21 142.92 us 62.00 us 593.00 us 76794 WRITE
32.28 91283.68 us 28.00 us 316658.00 us 154 UNLINK
Duration: 1206 seconds
Data Read: 0 bytes
Data Written: 10060843008 bytes
This indicates to me that it's not a problem in Disk/LVM/FileSystem layout.
Most probably I haven't created the volume properly or some option/feature is disabled ?!?
Network shows OK for a gigabit:
[root@ovirt1 data]# dd if=/dev/zero status=progress | nc ovirt2 9999
3569227264 bytes (3.6 GB) copied, 29.001052 s, 123 MB/s^C
7180980+0 records in
7180979+0 records out
3676661248 bytes (3.7 GB) copied, 29.8739 s, 123 MB/s
I'm looking for any help... you can share your volume info also.
Thanks in advance.
Best Regards,
Strahil Nikolov
_______________________________________________
Gluster-users mailing list
Gluster-users(a)gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Amar Tumballi (amarts)
[ANN] oVirt 4.3.0 Third Release Candidate is now available for testing
by Sandro Bonazzola 25 Jan '19
The oVirt Project is pleased to announce the availability of the Third
Release Candidate of oVirt 4.3.0, as of January 24th, 2019.
This is pre-release software. This pre-release should not be used in
production.
Please take a look at our community page[1] to learn how to ask questions
and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This update is the third release candidate of the 4.3.0 version.
This release brings more than 130 enhancements and more than 450 bug fixes
on top of oVirt 4.2 series.
What's new in oVirt 4.3.0?
* Q35 chipset, support booting using UEFI and Secure Boot
* Skylake-server and AMD EPYC support
* New smbus driver in windows guest tools
* Improved support for v2v
* OVA export / import of Templates
* Full support for live migration of High Performance VMs
* Microsoft Failover clustering support (SCSI Persistent Reservation) for
Direct LUN disks
* Hundreds of bug fixes on top of oVirt 4.2 series
* New VM portal details page (see a preview here:
https://imgur.com/a/ExINpci)
* New Cluster upgrade UI
* OVN security groups
* IPv6 (static host addresses)
* Support of Neutron from RDO OpenStack 13 as external network provider
* Support of using Skydive from RDO OpenStack 14 as Tech Preview
* Support for 3.6 and 4.0 data centers, clusters and hosts has been removed
* Now using PostgreSQL 10
* New metrics support using rsyslog instead of fluentd
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available for both CentOS 7 and Fedora 28
(tech preview).
- oVirt Node NG is already available for CentOS 7
- oVirt Node NG for Fedora 28 (tech preview) is being delayed due to issues
with the build system.
Additional Resources:
* Read more about the oVirt 4.3.0 release highlights:
http://www.ovirt.org/release/4.3.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.3.0/
[4] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
Hi Martin,
this is my history (please keep in mind that it might get distorted by the mail client). Note: I didn't stop the ovirt-engine.service, and this caused some errors to be logged - but the engine is still working without issues. As I said, this is my test lab and I was willing to play around :)
Good Luck!
ssh root@engine

# Switch to the postgres user
su - postgres

# If you don't load this, there will be no path for psql, nor will it start at all
source /opt/rh/rh-postgresql95/enable

# Open the DB
psql engine

# Commands in the DB:
select id, storage_name from storage_domain_static;
select storage_domain_id, ovf_disk_id from storage_domains_ovf_info where storage_domain_id='fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_domain_dynamic where id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_domain_static where id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from base_disks where disk_id = '7a155ede-5317-4860-aa93-de1dc283213e';
delete from base_disks where disk_id = '7dedd0e1-8ce8-444e-8a3d-117c46845bb0';
delete from storage_domains_ovf_info where storage_domain_id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_pool_iso_map where storage_id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';

# I think this shows all tables:
select table_schema, table_name from information_schema.tables order by table_schema, table_name;

# Maybe you don't need this one and you need to find the NFS volume:
select * from gluster_volumes;
delete from gluster_volumes where id = '9b06a1e9-8102-4cd7-bc56-84960a1efaa2';

# The previous delete failed as there was an entry in storage_server_connections.
# In your case it could be different:
select * from storage_server_connections;
delete from storage_server_connections where id = '490ee1c7-ae29-45c0-bddd-6170822c8490';
delete from gluster_volumes where id = '9b06a1e9-8102-4cd7-bc56-84960a1efaa2';
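The statements above are tied to Strahil's specific storage domain UUID. When adapting them, a tiny generator like this (a hypothetical helper, not from the thread) keeps the UUID in one place so it only has to be edited once:

```python
# Hypothetical helper: build the domain-cleanup statements from the
# thread for a given storage domain UUID (table list taken from the
# history above; the base_disks / gluster_volumes IDs differ and are
# deliberately left out).
CLEANUP_TABLES = [
    ("storage_domain_dynamic", "id"),
    ("storage_domain_static", "id"),
    ("storage_domains_ovf_info", "storage_domain_id"),
    ("storage_pool_iso_map", "storage_id"),
]

def cleanup_sql(domain_id: str) -> list[str]:
    """Return the DELETE statements for one storage domain UUID."""
    return [
        f"delete from {table} where {column} = '{domain_id}';"
        for table, column in CLEANUP_TABLES
    ]

for stmt in cleanup_sql("fbe7bf1a-2f03-4311-89fa-5031eab638bf"):
    print(stmt)
```

Paste the printed statements into psql only after checking each ID with the preceding SELECTs.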
Best Regards,
Strahil Nikolov
On Friday, January 25, 2019, 11:04:01 AM GMT+2, Martin Humaj <mhumaj(a)gmail.com> wrote:
Hi Strahil,
I have tried to use the same IP and NFS export to replace the original, but it did not work properly.
If you can guide me how to do it in engine DB i would appreciate it. This is a test system.
thank you Martin
On Fri, Jan 25, 2019 at 9:56 AM Strahil <hunter86_bg(a)yahoo.com> wrote:
Can you create a temporary NFS server that can be accessed during the removal? I have managed to edit the engine's DB to get rid of a cluster domain, but this is not recommended for production systems :)
Ovirt - 4.2.4.5-1.el7
Is there any way to remove the NFS ISO domain in the DB? We cannot get rid of it in the GUI and we are not able to use it anymore. The problem is that the NFS server which was responsible for the DATA TYPE ISO domain was deleted. Even when we try to change it in settings, it will not allow us to do it.
Error messages:
Failed to activate Storage Domain oVirt-ISO (Data Center InnovationCenter) by admin@internal-authz
VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: (u'61045461-10ff-4f7a-b464-67198c4a6c27',)
thank you
On Thu, Jan 24, 2019 at 3:20 PM Markus Schaufler <
markus.schaufler(a)digit-all.at> wrote:
> no...
>
> all logs in that folder are attached in the mail before.
>
OK, unfortunately in this case I can just suggest to retry and, when it
reaches
[ INFO ] TASK [Check engine VM health]
try to connect to the engine VM via ssh and check what's happening there to
ovirt-engine
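While waiting on that check, it can help to read the `hosted-engine --vm-status --json` output (the same command the failing Ansible task runs) programmatically instead of by eye. A minimal sketch, using a trimmed sample of the JSON from this thread; the helper name is made up for illustration:

```python
import json

# Trimmed sample of `hosted-engine --vm-status --json` output, taken
# from the failure reported in this thread.
sample = json.dumps({
    "1": {
        "hostname": "HCI01.res01.ads.ooe.local",
        "engine-status": {"reason": "failed liveliness check",
                          "health": "bad", "vm": "up", "detail": "Up"},
        "score": 3400,
    },
    "global_maintenance": False,
})

def engine_health(raw: str) -> dict:
    """Map hostname -> engine-status for each host entry."""
    status = json.loads(raw)
    return {
        entry["hostname"]: entry["engine-status"]
        for key, entry in status.items()
        if isinstance(entry, dict)  # skip the global_maintenance flag
    }

for host, health in engine_health(sample).items():
    print(host, health["health"], health["reason"])
# "vm": "up" together with "health": "bad" means the VM is running but
# the engine's health page is not answering, so the next step is
# inspecting ovirt-engine inside the engine VM itself.
```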
> ------------------------------
> *Von:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Gesendet:* Donnerstag, 24. Jänner 2019 15:16:52
> *An:* Markus Schaufler
> *Cc:* Dominik Holler; users(a)ovirt.org
> *Betreff:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 3:14 PM Markus Schaufler <
> markus.schaufler(a)digit-all.at> wrote:
>
> The hosted engine is not running and cannot be started.
>
>
>
> Do you have on your first host a directory
> like /var/log/ovirt-hosted-engine-setup/engine-logs-2019-01-21T22:47:03Z
> with logs from the engine VM?
>
>
>
> ------------------------------
> *Von:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Gesendet:* Donnerstag, 24. Jänner 2019 14:45:59
> *An:* Markus Schaufler
> *Cc:* Dominik Holler; users(a)ovirt.org
> *Betreff:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler <
> markus.schaufler(a)digit-all.at> wrote:
>
> Hi,
>
>
> thanks for the replies.
>
>
> I updated to 4.2.8 and tried again:
>
>
> [ INFO ] TASK [Check engine VM health]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.165316", "end": "2019-01-24 14:12:06.899564", "rc": 0, "start":
> "2019-01-24 14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true,
> \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}"]}
>
>
>
> It's still the same issue: the host fails to properly check the status of
> the engine over a dedicated health page.
>
> You should connect to ovirt-hci.res01.ads.ooe.local and check the status
> of ovirt-engine service and /var/log/ovirt-engine/engine.log there.
>
>
>
> [ INFO ] TASK [Check VM status at virt level]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Fail if engine VM is not running]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get VDSM's target engine VM stats]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Convert stats to JSON format]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Fail if the Engine has no IP address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved
> IP]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line
> 518, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles…
> - name: Reconfigure OVN central address\n ^ here\n"}
>
>
>
> attached you'll find the setup logs.
>
>
> best regards,
>
> Markus Schaufler
> ------------------------------
> *Von:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Gesendet:* Donnerstag, 24. Jänner 2019 11:56:50
> *An:* Dominik Holler
> *Cc:* Markus Schaufler; users(a)ovirt.org
> *Betreff:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Tue, 22 Jan 2019 11:15:12 +0000
> Markus Schaufler <markus.schaufler(a)digit-all.at> wrote:
>
> > Thanks for your reply,
> >
> > getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> > 10.1.31.20
> >
> > attached you'll find the logs.
> >
>
> Thanks, to my eyes this looks like a bug.
> I tried to isolate the relevant lines in the attached playbook.
>
> Markus, would you be so kind to check if ovirt-4.2.8 is working for you?
>
>
>
> OK, understood: the real error was just a few lines before what Dominik
> pointed out:
>
> "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
> true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}",
> "stdout_lines": [
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
> \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}"
> ]
> }"
> 2019-01-21 13:57:46,695+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Check engine VM health',
> 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True,
> \'stderr_lines\': [], u\'changed\': True, u\'end\': u\'2019-01-21
> 13:57:46.242423\', \'_ansible_no_log\': False, u\'stdout\': u\'{"1":
> {"conf_on_shared_storage": true, "live-data": true, "extra":
> "metadata_parse_version=1\\\\nmetadata_feature_version=1\\\\ntimestamp=5792
> (Mon Jan 21 13:57:4', 'ansible_host': u'localhost', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'}
>
> and in particular it's here:
> for some reason we got \"engine-status\": {\"reason\": \"failed
> liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\":
> \"Up\"}
> over 120 attempts: we have to check engine.log (it got collected as well
> from the engine VM) to understand why the engine was failing to start.
>
>
>
>
> > ________________________________
> > Von: Dominik Holler <dholler(a)redhat.com>
> > Gesendet: Montag, 21. Jänner 2019 17:52:35
> > An: Markus Schaufler
> > Cc: users(a)ovirt.org; Simone Tiraboschi
> > Betreff: Re: [ovirt-users] ovirt 4.2 HCI rollout
> >
> > Would you please share the related ovirt-host-deploy-ansible-*.log
> > stored on the host in /var/log/ovirt-hosted-engine-setup ?
> >
> > Would you please also share the output of
> > getent ahosts YOUR_HOSTED_ENGINE_FQDN | cut -d' ' -f1 | uniq
> > if executed on this host?
> >
> >
> > On Mon, 21 Jan 2019 13:37:53 -0000
> > "Markus Schaufler" <markus.schaufler(a)digit-all.at> wrote:
> >
> > > Hi,
> > >
> > > I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 centos VM's by
> > > following
> > >
> https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyper…
> > > gluster deployment was successful but at HE deployment "stage 5" I
> > > got following error:
> > >
> > > [ INFO ] TASK [Reconfigure OVN central address]
> > > [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> > > an option with an undefined variable. The error was: 'dict object'
> > > has no attribute 'stdout_lines'\n\nThe error appears to have been
> > > in
> > > '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> > > line 522, column 5, but may\nbe elsewhere in the file depending on
> > > the exact syntax problem.\n\nThe offending line appears to be:\n\n
> > > #
> > >
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles…
> > > - name: Reconfigure OVN central address\n ^ here\n"}
> > >
> > >
> > > /var/log/messages:
> > > Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> > > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> > > Engine VM stopped on localhost Jan 21 14:10:01 HCI01 systemd:
> > > Started Session 22 of user root. Jan 21 14:10:02 HCI01 systemd:
> > > Started Session c306 of user root. Jan 21 14:10:03 HCI01 systemd:
> > > Started Session c307 of user root. Jan 21 14:10:06 HCI01
> > > vdsm[3650]: WARN executor state: count=5 workers=set([<Worker
> > > name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker
> > > name=periodic/1 running <Task discardable <Operation
> > > action=<vdsm.virt.sampling.VMBulkstatsMonitor object at
> > > 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at
> > > 0x7fd33c1e0ed0> discarded task#=413 at 0x7fd2d5ed0510>, <Worker
> > > name=periodic/3 waiting task#=414 at 0x7fd2d5ed0b10>, <Worker
> > > name=periodic/5 waiting task#=0 at 0x7fd2d425f650>, <Worker
> > > name=periodic/2 waiting task#=412 at 0x7fd2d5ed07d0>])
> > > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered
> > > disabled state
> > > Jan 21 14:10:06 HCI01 kernel: device vnet0 left promiscuous mode
> > > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered
> > > disabled state
> > > Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info>
> > > [1548076206.9177] device (vnet0): state change: disconnected ->
> > > unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Jan 21
> > > 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180]
> > > device (vnet0): released from master device ovirtmgmt Jan 21
> > > 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space
> > > available Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21
> > > 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to
> > > read from monitor: Connection reset by peer Jan 21 14:10:07 HCI01
> > > kvm: 0 guests now active Jan 21 14:10:07 HCI01 systemd-machined:
> > > Machine qemu-3-HostedEngine terminated. Jan 21 14:10:07 HCI01
> > > libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning :
> > > qemuGetProcessInfo:1406 : cannot parse process status data Jan 21
> > > 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704:
> > > warning : qemuGetProcessInfo:1406 : cannot parse process status
> > > data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000:
> > > 2704: warning : qemuGetProcessInfo:1406 : cannot parse process
> > > status data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21
> > > 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot
> > > parse process status data Jan 21 14:10:07 HCI01 libvirtd:
> > > 2019-01-21 13:10:07.126+0000: 2704: error :
> > > virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev:
> > > Interface not found Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out
> > > -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> > > failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out
> > > vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is
> > > not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev
> > > --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1. 4.21: goto
> > > 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0'
> > > failed: iptables v1.4.21: goto 'HJ-vnet0' is not a
> > > chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FP-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HJ-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0'
> > > failed: Illegal target name 'libvirt-J-vnet0'. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> > > libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
> > > Chain 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -L libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> > > vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not
> > > a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or
> > > 'iptables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: iptables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FO-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: ip6tables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D
> > > POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target
> > > name 'libvirt-O-vnet0'. Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L
> > > libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> > > Chain 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-O-vnet0' failed: Chain
> > > 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01 vdsm[3650]:
> > > WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0
> > > already removed Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting
> > > to remove a non existing network:
> > > ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21 14:10:07
> > > HCI01 vdsm[3650]: WARN Attempting to remove a non existing net
> > > user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21
> > > 14:10:07 HCI01 vdsm[3650]: WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0
> > > already removed
> > >
> > > any ideas on that?
> > > _______________________________________________
> > > Users mailing list -- users(a)ovirt.org
> > > To unsubscribe send an email to users-leave(a)ovirt.org
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > oVirt Code of Conduct:
> > > https://www.ovirt.org/community/about/community-guidelines/ List
> > > Archives:
> > >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMMX5CY6VHFWW…
> >
>
>
On Thu, Jan 24, 2019 at 3:14 PM Markus Schaufler <
markus.schaufler(a)digit-all.at> wrote:
> The hosted engine is not running and cannot be started.
>
>
>
Do you have a directory like
/var/log/ovirt-hosted-engine-setup/engine-logs-2019-01-21T22:47:03Z
on your first host, with logs from the engine VM?
>
> ------------------------------
> *From:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Sent:* Thursday, 24 January 2019 14:45:59
> *To:* Markus Schaufler
> *Cc:* Dominik Holler; users(a)ovirt.org
> *Subject:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler <
> markus.schaufler(a)digit-all.at> wrote:
>
> Hi,
>
>
> thanks for the replies.
>
>
> I updated to 4.2.8 and tried again:
>
>
> [ INFO ] TASK [Check engine VM health]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.165316", "end": "2019-01-24 14:12:06.899564", "rc": 0, "start":
> "2019-01-24 14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true,
> \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}"]}
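For reference, the health fields the deploy loop keys on can be pulled straight out of that `hosted-engine --vm-status --json` output; a minimal Python sketch over an abridged copy of the sample above (field names exactly as shown in the thread):

```python
import json

# Abridged sample of the `hosted-engine --vm-status --json` output
# quoted above; the real output carries more fields per host.
raw = """
{"1": {"conf_on_shared_storage": true,
       "hostname": "HCI01.res01.ads.ooe.local",
       "engine-status": {"reason": "failed liveliness check",
                         "health": "bad", "vm": "up", "detail": "Up"},
       "score": 3400, "stopped": false, "maintenance": false},
 "global_maintenance": false}
"""
status = json.loads(raw)

# Host entries are keyed by host id; "global_maintenance" is a flag.
for key, host in status.items():
    if not isinstance(host, dict):
        continue
    engine = host["engine-status"]
    # "vm": "up" plus "health": "bad" means the VM process runs but
    # the engine inside it never answered the liveliness check.
    print(f"host {key}: vm={engine['vm']} health={engine['health']} "
          f"({engine['reason']})")
```

Here `vm` up together with `health` bad is exactly the combination the "Check engine VM health" task retried 120 times before giving up.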
>
>
>
> It's still the same issue: the host fails to properly check the status of
> the engine over a dedicated health page.
>
> You should connect to ovirt-hci.res01.ads.ooe.local and check the status
> of the ovirt-engine service and /var/log/ovirt-engine/engine.log there.
>
>
>
> [ INFO ] TASK [Check VM status at virt level]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Fail if engine VM is not running]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get VDSM's target engine VM stats]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Convert stats to JSON format]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Fail if the Engine has no IP address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved
> IP]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line
> 518, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles…
> - name: Reconfigure OVN central address\n ^ here\n"}
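As an aside, the `'dict object' has no attribute 'stdout_lines'` message is Ansible's generic complaint when a template dereferences a key the registered result simply does not have — which is what a skipped or failed earlier task produces. A hypothetical Python sketch of the failure mode and the usual `default()`-style guard (variable names and the sample OVN address are illustrative, not taken from the playbook):

```python
# A skipped task registers no command output, so a later template such
# as "{{ result.stdout_lines[0] }}" hits a missing key and Ansible
# reports: 'dict object' has no attribute 'stdout_lines'.
skipped_result = {"skipped": True, "changed": False}
ok_result = {"changed": True, "stdout_lines": ["tcp:10.1.31.20:6642"]}

def first_stdout_line(result, default=""):
    # Mirrors a Jinja2 guard like `(result.stdout_lines | default(['']))[0]`
    lines = result.get("stdout_lines") or [default]
    return lines[0]

print(first_stdout_line(ok_result))       # the captured value
print(first_stdout_line(skipped_result))  # empty default instead of a crash
```

In this thread the OVN task itself is a red herring: the result it consumed was empty because the health-check task a few lines earlier had already failed.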
>
>
>
> attached you'll find the setup logs.
>
>
> best regards,
>
> Markus Schaufler
> ------------------------------
> *From:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Sent:* Thursday, 24 January 2019 11:56:50
> *To:* Dominik Holler
> *Cc:* Markus Schaufler; users(a)ovirt.org
> *Subject:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Tue, 22 Jan 2019 11:15:12 +0000
> Markus Schaufler <markus.schaufler(a)digit-all.at> wrote:
>
> > Thanks for your reply,
> >
> > getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> > 10.1.31.20
> >
> > attached you'll find the logs.
> >
>
> Thanks, to my eyes this looks like a bug.
> I tried to isolate the relevant lines in the attached playbook.
>
> Markus, would you be so kind as to check whether ovirt-4.2.8 is working for you?
>
>
>
> OK, understood: the real error was just a few lines before what Dominik
> pointed out:
>
> "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
> true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}",
> "stdout_lines": [
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
> \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}"
> ]
> }"
> 2019-01-21 13:57:46,695+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Check engine VM health',
> 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True,
> \'stderr_lines\': [], u\'changed\': True, u\'end\': u\'2019-01-21
> 13:57:46.242423\', \'_ansible_no_log\': False, u\'stdout\': u\'{"1":
> {"conf_on_shared_storage": true, "live-data": true, "extra":
> "metadata_parse_version=1\\\\nmetadata_feature_version=1\\\\ntimestamp=5792
> (Mon Jan 21 13:57:4', 'ansible_host': u'localhost', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'}
>
> and in particular the issue is here: over all 120 attempts we got
> \"engine-status\": {\"reason\": \"failed liveliness check\",
> \"health\": \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}.
> We have to check engine.log (it was collected from the engine VM as
> well) to understand why the engine was failing to start.
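The `extra` blob quoted in these status dumps (`metadata_parse_version=1\n...\nstate=EngineStarting\n...`) is just a newline-separated list of key=value pairs; a small sketch for decoding it when reading such logs:

```python
# "extra" blob as it appears (with \n unescaped) in the --vm-status
# output quoted above.
extra = ("metadata_parse_version=1\n"
         "metadata_feature_version=1\n"
         "timestamp=5792 (Mon Jan 21 13:57:45 2019)\n"
         "host-id=1\n"
         "score=3000\n"
         "maintenance=False\n"
         "state=EngineStarting\n"
         "stopped=False\n")

# Split on the first '=' only: timestamp values contain spaces and parens.
meta = dict(line.split("=", 1) for line in extra.splitlines() if line)
print(meta["state"], meta["score"])  # → EngineStarting 3000
```

`state=EngineStarting` with a non-zero score matches Simone's reading: the HA agent keeps the VM up while the engine inside it never becomes live.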
>
>
>
>
> > ________________________________
> > From: Dominik Holler <dholler(a)redhat.com>
> > Sent: Monday, 21 January 2019 17:52:35
> > To: Markus Schaufler
> > Cc: users(a)ovirt.org; Simone Tiraboschi
> > Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout
> >
> > Would you please share the related ovirt-host-deploy-ansible-*.log
> > stored on the host in /var/log/ovirt-hosted-engine-setup ?
> >
> > Would you please also share the output of
> > getent ahosts YOUR_HOSTED_ENGINE_FQDN | cut -d' ' -f1 | uniq
> > if executed on this host?
> >
> >
> > On Mon, 21 Jan 2019 13:37:53 -0000
> > "Markus Schaufler" <markus.schaufler(a)digit-all.at> wrote:
> >
> > > Hi,
> > >
> > > I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 centos VM's by
> > > following
> > >
> https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyper…
> > > gluster deployment was successful but at HE deployment "stage 5" I
> > > got following error:
> > >
> > > [ INFO ] TASK [Reconfigure OVN central address]
> > > [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> > > an option with an undefined variable. The error was: 'dict object'
> > > has no attribute 'stdout_lines'\n\nThe error appears to have been
> > > in
> > > '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> > > line 522, column 5, but may\nbe elsewhere in the file depending on
> > > the exact syntax problem.\n\nThe offending line appears to be:\n\n
> > > #
> > >
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles…
> > > - name: Reconfigure OVN central address\n ^ here\n"}
> > >
> > >
> > > /var/log/messages:
> > > Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> > > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> > > Engine VM stopped on localhost Jan 21 14:10:01 HCI01 systemd:
> > > Started Session 22 of user root. Jan 21 14:10:02 HCI01 systemd:
> > > Started Session c306 of user root. Jan 21 14:10:03 HCI01 systemd:
> > > Started Session c307 of user root. Jan 21 14:10:06 HCI01
> > > vdsm[3650]: WARN executor state: count=5 workers=set([<Worker
> > > name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker
> > > name=periodic/1 running <Task discardable <Operation
> > > action=<vdsm.virt.sampling.VMBulkstatsMonitor object at
> > > 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at
> > > 0x7fd33c1e0ed0> discarded task#=413 at 0x7fd2d5ed0510>, <Worker
> > > name=periodic/3 waiting task#=414 at 0x7fd2d5ed0b10>, <Worker
> > > name=periodic/5 waiting task#=0 at 0x7fd2d425f650>, <Worker
> > > name=periodic/2 waiting task#=412 at 0x7fd2d5ed07d0>])
> > > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered
> > > disabled state Jan 21 14:10:06 HCI01 kernel: device vnet0 left
> > > promiscuous mode Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port
> > > 2(vnet0) entered disabled state
> > > Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info>
> > > [1548076206.9177] device (vnet0): state change: disconnected ->
> > > unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Jan 21
> > > 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180]
> > > device (vnet0): released from master device ovirtmgmt Jan 21
> > > 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space
> > > available Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21
> > > 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to
> > > read from monitor: Connection reset by peer Jan 21 14:10:07 HCI01
> > > kvm: 0 guests now active Jan 21 14:10:07 HCI01 systemd-machined:
> > > Machine qemu-3-HostedEngine terminated. Jan 21 14:10:07 HCI01
> > > libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning :
> > > qemuGetProcessInfo:1406 : cannot parse process status data Jan 21
> > > 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704:
> > > warning : qemuGetProcessInfo:1406 : cannot parse process status
> > > data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000:
> > > 2704: warning : qemuGetProcessInfo:1406 : cannot parse process
> > > status data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21
> > > 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot
> > > parse process status data Jan 21 14:10:07 HCI01 libvirtd:
> > > 2019-01-21 13:10:07.126+0000: 2704: error :
> > > virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev:
> > > Interface not found Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out
> > > -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> > > failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out
> > > vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is
> > > not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev
> > > --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1.4.21: goto
> > > 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0'
> > > failed: iptables v1.4.21: goto 'HJ-vnet0' is not a
> > > chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FP-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HJ-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0'
> > > failed: Illegal target name 'libvirt-J-vnet0'. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> > > libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
> > > Chain 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -L libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> > > vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not
> > > a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or
> > > 'iptables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: iptables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FO-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: ip6tables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D
> > > POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target
> > > name 'libvirt-O-vnet0'. Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L
> > > libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> > > Chain 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-O-vnet0' failed: Chain
> > > 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01 vdsm[3650]:
> > > WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0
> > > already removed Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting
> > > to remove a non existing network:
> > > ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21 14:10:07
> > > HCI01 vdsm[3650]: WARN Attempting to remove a non existing net
> > > user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21
> > > 14:10:07 HCI01 vdsm[3650]: WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0
> > > already removed
> > >
> > > any ideas on that?
> > > _______________________________________________
> > > Users mailing list -- users(a)ovirt.org
> > > To unsubscribe send an email to users-leave(a)ovirt.org
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > oVirt Code of Conduct:
> > > https://www.ovirt.org/community/about/community-guidelines/ List
> > > Archives:
> > >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMMX5CY6VHFWW…
> >
>
>
On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler <
markus.schaufler(a)digit-all.at> wrote:
> Hi,
>
>
> thanks for the replies.
>
>
> I updated to 4.2.8 and tried again:
>
>
> [ INFO ] TASK [Check engine VM health]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.165316", "end": "2019-01-24 14:12:06.899564", "rc": 0, "start":
> "2019-01-24 14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true,
> \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}"]}
>
It's still the same issue: the host fails to properly check the status of
the engine over a dedicated health page.
You should connect to ovirt-hci.res01.ads.ooe.local and check the status of
ovirt-engine service and /var/log/ovirt-engine/engine.log there.
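Simone's "failed liveliness check" diagnosis is easier to see once the escaped `stdout` string is parsed as JSON. A minimal sketch, with the sample abridged by hand from the status dump above (field names match the `hosted-engine --vm-status --json` output quoted in this thread):

```python
import json

# Abridged from the `hosted-engine --vm-status --json` stdout quoted above.
raw = ('{"1": {"conf_on_shared_storage": true, "live-data": true, '
       '"hostname": "HCI01.res01.ads.ooe.local", "host-id": 1, '
       '"engine-status": {"reason": "failed liveliness check", '
       '"health": "bad", "vm": "up", "detail": "Up"}, "score": 3400, '
       '"stopped": false, "maintenance": false}, '
       '"global_maintenance": false}')

status = json.loads(raw)
for host_id, host in status.items():
    if host_id == "global_maintenance":  # top-level flag, not a host entry
        continue
    engine = host["engine-status"]
    # vm=up with health=bad means the engine VM is running but its
    # health page is not answering -- exactly the state logged above.
    print(f"host {host_id}: vm={engine['vm']}, health={engine['health']}, "
          f"reason={engine['reason']}")
```

Here this prints `host 1: vm=up, health=bad, reason=failed liveliness check`, which points the investigation at the engine VM itself (ovirt-engine service and engine.log) rather than at the deployment host.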
> [ INFO ] TASK [Check VM status at virt level]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Fail if engine VM is not running]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get VDSM's target engine VM stats]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Convert stats to JSON format]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Fail if the Engine has no IP address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved
> IP]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line
> 518, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles…
> - name: Reconfigure OVN central address\n ^ here\n"}
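For context on the "'dict object' has no attribute 'stdout_lines'" failure: in Ansible this class of error usually means a task template referenced a registered result that was never populated, for example because the registering task failed or was skipped earlier. A hedged sketch of the usual guard pattern, with illustrative task and variable names only (this is not the actual create_target_vm.yml content):

```yaml
# Illustrative sketch only -- task and variable names are assumptions,
# not taken from ovirt-hosted-engine-setup.
- name: Reconfigure OVN central address
  command: "vdsm-tool ovn-config {{ ovn_central }} {{ host_address }}"
  when:
    - ovn_query is defined
    - ovn_query.stdout_lines is defined   # guards against the undefined-attribute error
    - ovn_query.stdout_lines | length > 0
```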
>
>
>
> attached you'll find the setup logs.
>
>
> best regards,
>
> Markus Schaufler
> ------------------------------
> *Von:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Gesendet:* Donnerstag, 24. Jänner 2019 11:56:50
> *An:* Dominik Holler
> *Cc:* Markus Schaufler; users(a)ovirt.org
> *Betreff:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Tue, 22 Jan 2019 11:15:12 +0000
> Markus Schaufler <markus.schaufler(a)digit-all.at> wrote:
>
> > Thanks for your reply,
> >
> > getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> > 10.1.31.20
> >
> > attached you'll find the logs.
> >
>
> Thanks, to my eyes this looks like a bug.
> I tried to isolate the relevant lines in the attached playbook.
>
> Markus, would you be so kind to check if ovirt-4.2.8 is working for you?
>
>
>
> OK, understood: the real error was just a few lines before what Dominik
> pointed out:
>
> "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
> true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}",
> "stdout_lines": [
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
> \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}"
> ]
> }"
> 2019-01-21 13:57:46,695+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Check engine VM health',
> 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True,
> \'stderr_lines\': [], u\'changed\': True, u\'end\': u\'2019-01-21
> 13:57:46.242423\', \'_ansible_no_log\': False, u\'stdout\': u\'{"1":
> {"conf_on_shared_storage": true, "live-data": true, "extra":
> "metadata_parse_version=1\\\\nmetadata_feature_version=1\\\\ntimestamp=5792
> (Mon Jan 21 13:57:4', 'ansible_host': u'localhost', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'}
>
> and in particular it's here:
> for some reason we got \"engine-status\": {\"reason\": \"failed
> liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\":
> \"Up\"}
> over 120 attempts: we have to check engine.log (it got collected as well
> from the engine VM) to understand why the engine was failing to start.
>
>
>
>
> > ________________________________
> > Von: Dominik Holler <dholler(a)redhat.com>
> > Gesendet: Montag, 21. Jänner 2019 17:52:35
> > An: Markus Schaufler
> > Cc: users(a)ovirt.org; Simone Tiraboschi
> > Betreff: Re: [ovirt-users] ovirt 4.2 HCI rollout
> >
> > Would you please share the related ovirt-host-deploy-ansible-*.log
> > stored on the host in /var/log/ovirt-hosted-engine-setup ?
> >
> > Would you please also share the output of
> > getent ahosts YOUR_HOSTED_ENGINE_FQDN | cut -d' ' -f1 | uniq
> > if executed on this host?
> >
> >
> > On Mon, 21 Jan 2019 13:37:53 -0000
> > "Markus Schaufler" <markus.schaufler(a)digit-all.at> wrote:
> >
> > > Hi,
> > >
> > > I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 centos VM's by
> > > following
> > >
> https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyper…
> > > gluster deployment was successful but at HE deployment "stage 5" I
> > > got following error:
> > >
> > > [ INFO ] TASK [Reconfigure OVN central address]
> > > [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> > > an option with an undefined variable. The error was: 'dict object'
> > > has no attribute 'stdout_lines'\n\nThe error appears to have been
> > > in
> > > '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> > > line 522, column 5, but may\nbe elsewhere in the file depending on
> > > the exact syntax problem.\n\nThe offending line appears to be:\n\n
> > > #
> > >
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/roles…
> > > - name: Reconfigure OVN central address\n ^ here\n"}
> > >
> > >
> > > /var/log/messages:
> > > Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> > > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> > > Engine VM stopped on localhost Jan 21 14:10:01 HCI01 systemd:
> > > Started Session 22 of user root. Jan 21 14:10:02 HCI01 systemd:
> > > Started Session c306 of user root. Jan 21 14:10:03 HCI01 systemd:
> > > Started Session c307 of user root. Jan 21 14:10:06 HCI01
> > > vdsm[3650]: WARN executor state: count=5 workers=set([<Worker
> > > name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker
> > > name=periodic/1 running <Task discardable <Operation
> > > action=<vdsm.virt.sampling.VMBulkstatsMonitor object at
> > > 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at
> > > 0x7fd33c1e0ed0> discarded task#=413 at 0x7fd2d5ed0510>, <Worker
> > > name=periodic/3 waiting task#=414 at 0x7fd2d5ed0b10>, <Worker
> > > name=periodic/5 waiting task#=0 at 0x7fd2d425f650>, <Worker
> > > name=periodic/2 waiting task#=412 at 0x7fd2d5ed07d0>])
> > > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered
> > > disabled state Jan 21 14:10:06 HCI01 kernel: device vnet0 left
> > > promiscuous mode Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port
> > > 2(vnet0) entered disabled state
> > > Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info>
> > > [1548076206.9177] device (vnet0): state change: disconnected ->
> > > unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Jan 21
> > > 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180]
> > > device (vnet0): released from master device ovirtmgmt Jan 21
> > > 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space
> > > available Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21
> > > 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to
> > > read from monitor: Connection reset by peer Jan 21 14:10:07 HCI01
> > > kvm: 0 guests now active Jan 21 14:10:07 HCI01 systemd-machined:
> > > Machine qemu-3-HostedEngine terminated. Jan 21 14:10:07 HCI01
> > > libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning :
> > > qemuGetProcessInfo:1406 : cannot parse process status data Jan 21
> > > 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704:
> > > warning : qemuGetProcessInfo:1406 : cannot parse process status
> > > data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000:
> > > 2704: warning : qemuGetProcessInfo:1406 : cannot parse process
> > > status data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21
> > > 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot
> > > parse process status data Jan 21 14:10:07 HCI01 libvirtd:
> > > 2019-01-21 13:10:07.126+0000: 2704: error :
> > > virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev:
> > > Interface not found Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out
> > > -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> > > failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out
> > > vnet0 -g FP-vnet0' failed: iptables v1.4.21: goto 'FP-vnet0' is
> > > not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev
> > > --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1.4.21: goto
> > > 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0'
> > > failed: iptables v1.4.21: goto 'HJ-vnet0' is not a
> > > chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FP-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HJ-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0'
> > > failed: Illegal target name 'libvirt-J-vnet0'. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> > > libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
> > > Chain 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -L libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> > > vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not
> > > a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or
> > > 'iptables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: iptables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FO-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: ip6tables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D
> > > POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target
> > > name 'libvirt-O-vnet0'. Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L
> > > libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> > > Chain 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-O-vnet0' failed: Chain
> > > 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01 vdsm[3650]:
> > > WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0
> > > already removed Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting
> > > to remove a non existing network:
> > > ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21 14:10:07
> > > HCI01 vdsm[3650]: WARN Attempting to remove a non existing net
> > > user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21
> > > 14:10:07 HCI01 vdsm[3650]: WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0
> > > already removed
> > >
> > > any ideas on that?
> > > _______________________________________________
> > > Users mailing list -- users(a)ovirt.org
> > > To unsubscribe send an email to users-leave(a)ovirt.org
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > oVirt Code of Conduct:
> > > https://www.ovirt.org/community/about/community-guidelines/ List
> > > Archives:
> > >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMMX5CY6VHFWW…
> >
>
>
Hi all,
I have an oVirt 4.2.8 cluster. The nodes are 4.2 oVirt nodes, and the volumes (5) are attached to the nodes by FC.
Two weeks ago I made a small test VM (CentOS 7 based, named A). After the test I dropped the VM.
The next day I made another VM (named B) for the developers and tried to add a new disk to VM B. Then the original volume group (VG) of VM B went missing, and I got back the VG of VM A, the one I had dropped the day before!
I tried to restart the VM, but it never started again.
I dropped VM B too, and tried to add a new disk to an older running VM (C), but its volume group also changed to the VG of the VM I had dropped before (B).
I looked into this "error" and I hit it whenever I delete or move disks from the end of the FC volume.
Has anybody ever seen an error like this?
Thanks,
csaba
PS: I manage 120 production VMs in this cluster, so….