Note about oVirt and Let's Encrypt SSL certs
by Chris Adams
I was setting up a new oVirt cluster yesterday, and deployed a Let's
Encrypt SSL cert on it for the website. After that, I noticed that
oVirt was getting errors synchronizing networks with ovirt-provider-ovn.
It appears that the Python library used for SSL by ovirt-provider-ovn
has the same issue as older OpenSSL versions, and can't handle the
default Let's Encrypt chain: the path used for old Android
compatibility ends with a cross-signing root ("DST Root CA X3") that
has expired but is still in the CA store (even though there's another
verification path that doesn't end with an expired cert).
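You can see which chain a server is actually sending with something
like this (hostname is a placeholder):

  echo | openssl s_client -connect ovirt.example.com:443 -showcerts 2>/dev/null | grep ' [si]:'

If the last issuer listed is the expired "DST Root CA X3", the server
is handing out the long (Android-compatible) chain.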
The solution was to switch the Let's Encrypt cert to the "ISRG Root X1"
chain (which is fine, since I don't log in to oVirt from Android 7
devices).
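If the cert comes from certbot (1.12 or later), the short chain can be
requested explicitly at renewal; acme.sh has an equivalent
--preferred-chain option:

  certbot renew --cert-name ovirt.example.com --preferred-chain "ISRG Root X1" --force-renewal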
Just an FYI for anyone else using a Let's Encrypt cert (or any other
cert with a similarly expired path in its chain; they aren't the only one).
--
Chris Adams <cma(a)cmadams.net>
Re: Gluster Performance issues
by Alex Morrison
Hello Sunil,
[root@ovirt1 ~]# gluster --version
glusterfs 8.6
same on all hosts
On Wed, Feb 23, 2022 at 5:24 AM Sunil Kumar Heggodu Gopala Acharya <
sheggodu(a)redhat.com> wrote:
> Hi,
>
> Which version of gluster is in use?
>
> Regards,
>
> Sunil kumar Acharya
>
> Red Hat
>
>
> On Wed, Feb 23, 2022 at 2:17 PM Alex Morrison <alex(a)discoverygarden.ca>
> wrote:
>
>> Hello All,
>>
>> We have 3 servers, each with a RAID 50 array, and we are having extreme
>> performance issues with our Gluster storage: writes on Gluster seem to
>> take at least 3 times longer than on the RAID directly. Can this be
>> improved? I've read through several other performance-issue threads but
>> have been unable to make any improvements.
>>
>> The output of "gluster volume info" and "gluster volume profile vmstore info" is below.
>>
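>> (In case anyone wants to reproduce these numbers: profiling has to be
>> started before "info" returns anything, roughly:
>>
>>   gluster volume profile vmstore start
>>   gluster volume profile vmstore info
>>
>> Each "info" run also resets the interval counters, which is where the
>> "Interval 43 Stats" sections below come from.)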
>>
>> =========================================================================================================================
>>
>> -Inside Gluster - test took 35+ hours:
>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s 600G -n 0 -m TEST -f -b -u root
>> Using uid:0, gid:0.
>> Writing intelligently...done
>> Rewriting...done
>> Reading intelligently...done
>> start 'em...done...done...done...done...done...
>> Version 1.98       ------Sequential Output------ --Sequential Input- --Random-
>>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
>> Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>> TEST          600G            35.7m  17 5824k   7            112m  13 182.7   6
>> Latency                      5466ms    12754ms              3499ms     1589ms
>>
>> 1.98,1.98,TEST,1,1644359706,600G,,8192,5,,,36598,17,5824,7,,,114950,13,182.7,6,,,,,,,,,,,,,,,,,,,5466ms,12754ms,,3499ms,1589ms,,,,,,
>>
>>
>> =========================================================================================================================
>>
>> -Outside Gluster - test took 18 minutes:
>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# bonnie++ -d . -s 600G -n 0 -m TEST -f -b -u root
>> Using uid:0, gid:0.
>> Writing intelligently...done
>> Rewriting...done
>> Reading intelligently...done
>> start 'em...done...done...done...done...done...
>> Version 1.98       ------Sequential Output------ --Sequential Input- --Random-
>>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
>> Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>> TEST          600G             567m  78  149m  30            307m  37  83.0  57
>> Latency                       205ms     4630ms              1450ms      679ms
>>
>> 1.98,1.98,TEST,1,1648288012,600G,,8192,5,,,580384,78,152597,30,,,314533,37,83.0,57,,,,,,,,,,,,,,,,,,,205ms,4630ms,,1450ms,679ms,,,,,,
>>
>>
>> =========================================================================================================================
>>
>> [root@ovirt1 1801ed24-5b55-4431-9813-496143367f66]# gluster volume info
>> Volume Name: engine
>> Type: Replicate
>> Volume ID: 7ed15c5a-f054-450c-bac9-3ad1b4e5931b
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt1-storage.dgi:/gluster_bricks/engine/engine
>> Brick2: ovirt2-storage.dgi:/gluster_bricks/engine/engine
>> Brick3: ovirt3-storage.dgi:/gluster_bricks/engine/engine
>> Options Reconfigured:
>> cluster.granular-entry-heal: enable
>> performance.strict-o-direct: on
>> network.ping-timeout: 30
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> server.event-threads: 4
>> client.event-threads: 4
>> cluster.choose-local: off
>> user.cifs: off
>> features.shard: on
>> cluster.shd-wait-qlength: 10000
>> cluster.shd-max-threads: 8
>> cluster.locking-scheme: granular
>> cluster.data-self-heal-algorithm: full
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> cluster.eager-lock: enable
>> network.remote-dio: off
>> performance.low-prio-threads: 32
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> transport.address-family: inet
>> storage.fips-mode-rchecksum: on
>> nfs.disable: on
>> performance.client-io-threads: on
>> diagnostics.latency-measurement: on
>> diagnostics.count-fop-hits: on
>>
>> Volume Name: vmstore
>> Type: Replicate
>> Volume ID: 2670ff29-8d43-4610-a437-c6ec2c235753
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt1-storage.dgi:/gluster_bricks/vmstore/vmstore
>> Brick2: ovirt2-storage.dgi:/gluster_bricks/vmstore/vmstore
>> Brick3: ovirt3-storage.dgi:/gluster_bricks/vmstore/vmstore
>> Options Reconfigured:
>> cluster.granular-entry-heal: enable
>> performance.strict-o-direct: on
>> network.ping-timeout: 20
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> server.event-threads: 4
>> client.event-threads: 4
>> cluster.choose-local: off
>> user.cifs: off
>> features.shard: on
>> cluster.shd-wait-qlength: 10000
>> cluster.shd-max-threads: 8
>> cluster.locking-scheme: granular
>> cluster.data-self-heal-algorithm: full
>> cluster.server-quorum-type: server
>> cluster.quorum-type: auto
>> cluster.eager-lock: enable
>> network.remote-dio: off
>> performance.low-prio-threads: 32
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> transport.address-family: inet
>> storage.fips-mode-rchecksum: on
>> nfs.disable: on
>> performance.client-io-threads: on
>> diagnostics.latency-measurement: on
>> diagnostics.count-fop-hits: on
>> server.tcp-user-timeout: 20
>> server.keepalive-time: 10
>> server.keepalive-interval: 2
>> server.keepalive-count: 5
>> cluster.lookup-optimize: off
>>
>>
>> =========================================================================================================================
>>
>> [root@ovirt1 ~]# gluster volume profile vmstore info
>> Brick: ovirt1-storage.dgi:/gluster_bricks/vmstore/vmstore
>> ---------------------------------------------------------
>> Cumulative Stats:
>> Block Size:              1b+                  4b+                  8b+
>> No. of Reads:              0                    0                    0
>> No. of Writes:          4021                    2                   64
>>
>> Block Size:             64b+                128b+                256b+
>> No. of Reads:              0                  134                  405
>> No. of Writes:             2                   32                  119
>>
>> Block Size:            512b+               1024b+               2048b+
>> No. of Reads:           2890                 2617                 3059
>> No. of Writes:        113881                93551               131776
>>
>> Block Size:           4096b+               8192b+              16384b+
>> No. of Reads:        1096597               164931               181569
>> No. of Writes:      41091001             20499776              3230472
>>
>> Block Size:          32768b+              65536b+             131072b+
>> No. of Reads:         212970                96574               244017
>> No. of Writes:       1304688               991154              3412866
>>
>> Block Size:         262144b+             524288b+            1048576b+
>> No. of Reads:              0                    0                    0
>> No. of Writes:            15                   18                65166
>>
>> %-latency   Avg-latency   Min-Latency    Max-Latency   No. of calls   Fop
>> ---------   -----------   -----------    -----------   ------------   ----
>>      0.00       0.00 us       0.00 us        0.00 us           6396   FORGET
>>      0.00       0.00 us       0.00 us        0.00 us         189759   RELEASE
>>      0.00       0.00 us       0.00 us        0.00 us          31292   RELEASEDIR
>>      0.00     159.99 us      44.37 us      278.66 us             79   STAT
>>      0.00     364.42 us     174.87 us      644.45 us             96   LINK
>>      0.00    2714.10 us     203.53 us    47397.28 us             26   RMDIR
>>      0.00     688.26 us      72.39 us    53857.63 us            186   SETXATTR
>>      0.00     974.01 us     271.02 us    27298.52 us            176   RENAME
>>      0.00    5254.64 us     159.48 us    84454.46 us             35   DISCARD
>>      0.00    1133.84 us      86.47 us    87096.92 us            186   REMOVEXATTR
>>      0.00    7907.47 us     221.82 us   182276.21 us             29   FALLOCATE
>>      0.00    8304.51 us     311.89 us    42725.81 us             34   MKDIR
>>      0.00    5758.19 us     296.39 us    21950.99 us             81   TRUNCATE
>>      0.00     401.16 us      23.68 us   132668.93 us           3621   LK
>>      0.00     545.48 us      14.22 us  1076517.76 us           6796   READDIR
>>      0.00    2250.78 us     336.61 us  1071764.61 us           4257   CREATE
>>      0.00     531.76 us       1.35 us  1723046.94 us          31292   OPENDIR
>>      0.01    2058.67 us     196.74 us  1587752.27 us          21986   MKNOD
>>      0.01    1241.28 us      22.61 us  1548642.70 us          52674   READDIRP
>>      0.01   67508.32 us     100.01 us  3002069.81 us           1100   FTRUNCATE
>>      0.01     412.74 us      10.94 us  5111334.35 us         209731   FLUSH
>>      0.02   78668.59 us     183.06 us  3012629.87 us           1222   SETATTR
>>      0.03     761.13 us      21.13 us  4611854.15 us         249094   STATFS
>>      0.03    1132.33 us      45.51 us  3159531.87 us         185654   OPEN
>>      0.03    2010.33 us      14.43 us  3158098.85 us         107824   GETXATTR
>>      0.04    2765.29 us      61.38 us  3159208.13 us         100484   XATTROP
>>      0.13    1367.04 us      51.46 us  8450420.87 us         595299   FSTAT
>>      0.46  157440.90 us      47.62 us 13405712.93 us          18218   UNLINK
>>      0.60    1274.67 us      19.60 us 11985005.14 us        2980806   LOOKUP
>>      1.67   16645.99 us      12.80 us  6869488.64 us         631331   ENTRYLK
>>      4.10    7504.86 us      39.52 us 11749528.82 us        3430506   READ
>>      4.31   56942.36 us      16.57 us 23654127.65 us         475563   INODELK
>>      6.09     660.89 us      11.02 us 20571410.32 us       57908331   FINODELK
>>     20.59    1823.35 us      63.26 us 23639762.45 us       70934143   WRITE
>>     25.70    2680.82 us      35.37 us 13636394.75 us       60223005   FXATTROP
>>     36.14    3164.86 us      52.35 us 16805886.48 us       71739477   FSYNC
>>      0.00       0.00 us       0.00 us        0.00 us          78458   UPCALL
>>
>> Duration: 404667 seconds
>> Data Read: 59850935724 bytes
>> Data Written: 1070158035683 bytes
>>
>> Interval 43 Stats:
>> Block Size:              1b+                256b+                512b+
>> No. of Reads:              0                   14                    3
>> No. of Writes:            12                    4                  252
>>
>> Block Size:           1024b+               2048b+               4096b+
>> No. of Reads:              0                    0                 8105
>> No. of Writes:           100                   34               147355
>>
>> Block Size:           8192b+              16384b+              32768b+
>> No. of Reads:           1559                 1630                 1494
>> No. of Writes:         86013                17318                 8687
>>
>> Block Size:          65536b+             131072b+
>> No. of Reads:            532                  764
>> No. of Writes:          8617                36317
>> %-latency   Avg-latency   Min-Latency    Max-Latency   No. of calls   Fop
>> ---------   -----------   -----------    -----------   ------------   ----
>>      0.00       0.00 us       0.00 us        0.00 us             16   FORGET
>>      0.00       0.00 us       0.00 us        0.00 us            665   RELEASE
>>      0.00       0.00 us       0.00 us        0.00 us             52   RELEASEDIR
>>      0.00     181.75 us     181.75 us      181.75 us              1   STAT
>>      0.00     914.55 us     858.55 us      970.56 us              2   TRUNCATE
>>      0.00     792.11 us     681.24 us     1051.32 us              4   RENAME
>>      0.00     374.44 us      31.86 us      792.10 us             20   READDIR
>>      0.00     399.48 us     306.97 us      562.69 us             28   UNLINK
>>      0.00     752.40 us     636.14 us      907.29 us             16   CREATE
>>      0.00     196.62 us      33.27 us     2586.77 us            336   GETXATTR
>>      0.00    1107.98 us     288.51 us    29692.53 us             80   MKNOD
>>      0.00    1074.80 us      74.41 us     2121.03 us            126   READDIRP
>>      0.00    3119.52 us       3.17 us   157121.44 us             52   OPENDIR
>>      0.01     972.14 us      21.72 us   329342.21 us            635   FLUSH
>>      0.03    2122.15 us      31.97 us   377538.16 us            732   STATFS
>>      0.06    4478.89 us      66.09 us   521289.10 us            649   OPEN
>>      0.10   10273.00 us      88.19 us   478524.77 us            472   XATTROP
>>      0.16    6969.92 us      16.14 us   385156.98 us           1138   ENTRYLK
>>      0.20    4635.40 us      64.29 us  1098319.92 us           2097   FSTAT
>>      0.65    4098.07 us      46.85 us   919617.66 us           7797   LOOKUP
>>      2.36   87465.33 us      34.75 us  1224052.07 us           1317   INODELK
>>      4.37   13897.55 us      57.89 us  1984970.27 us          15336   READ
>>      5.24    1265.46 us      16.22 us  1638678.32 us         202165   FINODELK
>>     21.05    3370.28 us     144.44 us  2264130.49 us         304724   WRITE
>>     31.00    6411.68 us      39.84 us  2190355.42 us         235969   FXATTROP
>>     34.75    6107.95 us      60.26 us  2270252.44 us         277643   FSYNC
>>      0.00       0.00 us       0.00 us        0.00 us            368   UPCALL
>>
>> Duration: 1132 seconds
>> Data Read: 292367566 bytes
>> Data Written: 7495339692 bytes
>>
>> Brick: ovirt2-storage.dgi:/gluster_bricks/vmstore/vmstore
>> ---------------------------------------------------------
>> Cumulative Stats:
>> Block Size:              1b+                  4b+                  8b+
>> No. of Reads:              0                    0                    0
>> No. of Writes:          4802                    2                   64
>>
>> Block Size:             64b+                128b+                256b+
>> No. of Reads:              0                  138                  828
>> No. of Writes:             2                   32                  131
>>
>> Block Size:            512b+               1024b+               2048b+
>> No. of Reads:           3595                 4590                 6161
>> No. of Writes:        128976                99680               132413
>>
>> Block Size:           4096b+               8192b+              16384b+
>> No. of Reads:         930031               128995               150468
>> No. of Writes:      48357551             23957030              3689474
>>
>> Block Size:          32768b+              65536b+             131072b+
>> No. of Reads:         154039                71949               790010
>> No. of Writes:       1438859              1052045              3715427
>>
>> Block Size:         262144b+             524288b+            1048576b+
>> No. of Reads:              9                   11                40269
>> No. of Writes:             5                   20                50757
>>
>> %-latency   Avg-latency   Min-Latency    Max-Latency   No. of calls   Fop
>> ---------   -----------   -----------    -----------   ------------   ----
>>      0.00       0.00 us       0.00 us        0.00 us           7629   FORGET
>>      0.00       0.00 us       0.00 us        0.00 us         228577   RELEASE
>>      0.00       0.00 us       0.00 us        0.00 us          41524   RELEASEDIR
>>      0.00     374.22 us     219.35 us      574.87 us             96   LINK
>>      0.00    1712.22 us     164.90 us    22337.79 us             41   DISCARD
>>      0.00    2840.25 us     245.24 us    46839.63 us             26   RMDIR
>>      0.00    3660.92 us     229.86 us    86252.56 us             35   FALLOCATE
>>      0.00     876.48 us     297.53 us    16685.63 us            188   RENAME
>>      0.00    6660.10 us     379.21 us    52962.79 us             34   MKDIR
>>      0.00     957.48 us      80.36 us   218059.88 us            355   REMOVEXATTR
>>      0.00    1234.85 us      76.50 us   322955.27 us            355   SETXATTR
>>      0.00    2326.62 us      84.07 us   250041.25 us            196   STAT
>>      0.00    9660.45 us     289.50 us   295106.01 us             87   TRUNCATE
>>      0.00     486.10 us      24.12 us   869966.79 us           5260   LK
>>      0.00     453.69 us      18.32 us   426653.18 us           8135   READDIR
>>      0.00    1832.38 us     296.61 us   261299.81 us           5050   CREATE
>>      0.00     386.77 us       1.31 us  1300836.12 us          41524   OPENDIR
>>      0.00     712.06 us      29.85 us  1441115.02 us          34874   READDIRP
>>      0.01    1469.51 us     231.67 us   440065.71 us          22845   MKNOD
>>      0.01   44229.36 us      75.58 us  1803963.42 us            871   FTRUNCATE
>>      0.01   84433.20 us     149.47 us  3614869.24 us           1001   SETATTR
>>      0.02     418.11 us      13.39 us  3515000.81 us         243820   FLUSH
>>      0.03    1108.47 us      14.77 us  1658743.41 us         130647   GETXATTR
>>      0.03     562.86 us      26.74 us  5043949.33 us         297799   STATFS
>>      0.03     845.50 us      48.16 us  1998938.03 us         223680   OPEN
>>      0.05    2236.13 us      56.08 us  5295998.22 us         120682   XATTROP
>>      0.16     890.82 us      43.51 us  3653292.88 us        1012585   FSTAT
>>      0.34    2974.20 us      12.92 us  7782497.74 us         642555   ENTRYLK
>>      0.47  135831.21 us      70.35 us 11033867.90 us          19800   UNLINK
>>      0.57     954.05 us      21.50 us  4553393.38 us        3414605   LOOKUP
>>      1.79   14461.54 us      13.45 us 32841452.98 us         702915   INODELK
>>      5.14    8014.78 us      40.70 us  5439109.56 us        3644063   READ
>>      5.37     443.04 us      11.53 us 32863652.53 us       68909131   FINODELK
>>     22.31    1780.31 us      33.59 us 11318712.62 us       71235991   FXATTROP
>>     22.83    1571.16 us      74.86 us 32615055.19 us       82622840   WRITE
>>     40.84    2762.75 us      52.77 us  8859115.35 us       84039509   FSYNC
>>      0.00       0.00 us       0.00 us        0.00 us          95492   UPCALL
>>
>> Duration: 484169 seconds
>> Data Read: 167149718723 bytes
>> Data Written: 1177141649872 bytes
>>
>> Interval 43 Stats:
>> Block Size:              1b+                256b+                512b+
>> No. of Reads:              0                    6                    4
>> No. of Writes:            12                    4                  252
>>
>> Block Size:           1024b+               2048b+               4096b+
>> No. of Reads:              0                    0                 5668
>> No. of Writes:           100                   34               147357
>>
>> Block Size:           8192b+              16384b+              32768b+
>> No. of Reads:           1178                  783                 1215
>> No. of Writes:         86014                17318                 8687
>>
>> Block Size:          65536b+             131072b+
>> No. of Reads:            264                 4109
>> No. of Writes:          8617                36317
>> %-latency   Avg-latency   Min-Latency    Max-Latency   No. of calls   Fop
>> ---------   -----------   -----------    -----------   ------------   ----
>>      0.00       0.00 us       0.00 us        0.00 us             16   FORGET
>>      0.00       0.00 us       0.00 us        0.00 us            665   RELEASE
>>      0.00       0.00 us       0.00 us        0.00 us             52   RELEASEDIR
>>      0.00     866.18 us     849.57 us      882.78 us              2   TRUNCATE
>>      0.00    1016.14 us     868.35 us     1182.48 us              4   RENAME
>>      0.00     455.65 us      41.13 us     1679.37 us             20   READDIR
>>      0.00     373.33 us     173.54 us      538.40 us             28   UNLINK
>>      0.00     722.01 us     635.91 us      853.71 us             16   CREATE
>>      0.00     335.52 us      38.15 us     1381.97 us             54   READDIRP
>>      0.00   39218.92 us     214.34 us    78223.50 us              2   STAT
>>      0.00    1730.77 us       2.59 us    84964.69 us             52   OPENDIR
>>      0.00    1304.96 us     393.64 us    29724.09 us             80   MKNOD
>>      0.01     422.75 us      22.85 us   134336.60 us            635   FLUSH
>>      0.01     699.21 us      37.54 us   137813.64 us            732   STATFS
>>      0.01     468.63 us      23.71 us   260961.83 us           1141   ENTRYLK
>>      0.04    2041.83 us      69.80 us   163743.98 us            649   OPEN
>>      0.08    1253.65 us      72.52 us   290508.15 us           2354   FSTAT
>>      0.09    6913.19 us      69.01 us   212479.24 us            472   XATTROP
>>      0.13    3558.26 us      32.70 us   195896.10 us           1317   INODELK
>>      0.13    9793.37 us      36.11 us   212755.58 us            499   GETXATTR
>>      0.34    1615.15 us      55.42 us   711310.28 us           7797   LOOKUP
>>      4.92   13025.92 us      59.83 us  1061585.06 us          13884   READ
>>      5.26     955.95 us      16.81 us  1069148.49 us         202174   FINODELK
>>     23.28    2809.44 us     137.21 us  1134076.13 us         304713   WRITE
>>     26.46    4124.05 us      43.30 us  1142167.27 us         235954   FXATTROP
>>     39.23    5195.62 us      65.47 us  1250469.35 us         277632   FSYNC
>>      0.00       0.00 us       0.00 us        0.00 us            365   UPCALL
>>
>> Duration: 1132 seconds
>> Data Read: 668951066 bytes
>> Data Written: 7495356076 bytes
>
[Call to Action] oVirt 4.5 localization updated in zanata
by Sandro Bonazzola
Hi, https://zanata.ovirt.org/ has been updated with new text to be translated.
If you want to help with this, please follow
https://ovirt.org/develop/localization.html
The ovirt-web-ui and ovirt-engine-ui-extensions projects lack Italian,
Czech and Turkish translations.
The oVirt engine project needs some attention on all the translations,
especially Italian, Czech, Russian and Turkish.
Thanks,
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA
sbonazzo(a)redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
Migrate Hosted-Engine from one NFS mount to another NFS mount
by Brian Ismay
Hello,
I am currently running oVirt 4.2.5.3-1.el7 with a Hosted Engine running on
NFS-based storage. I need to migrate the backing NFS store for the Hosted
Engine to a new share.
Is there specific documentation for this procedure? I see multiple
references to a general procedure involving backing up the existing install
and then reinstalling and restoring the backup. Is there any way to make a
cleaner change of the configuration?
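For reference, the general shape of that procedure seems to be roughly the
following (filenames are examples; I believe restoring directly during
deployment with --restore-from-file only works on newer releases):

  # on the engine VM
  engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log

  # then redeploy a host against the new NFS export
  hosted-engine --deploy --restore-from-file=engine-backup.tar.gz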
Thanks,
Brian Ismay
Re: VMs losing network interfaces
by Strahil Nikolov
That's a good start.
Are you building your VMs from a template? Any chance there is another system (outside oVirt?) with the same MAC?
You can run arping and check if you get a response from more than one system.
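Something along these lines (interface and address are examples; with -D,
arping does duplicate address detection and exits non-zero if another
system answers for the address):

  arping -D -I ovirtmgmt -c 4 192.168.1.50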
Best Regards,
Strahil Nikolov
On Mon, Feb 21, 2022 at 11:15, jb <jonbae77(a)gmail.com> wrote:
Thank you Nikolov,
I described the problem a bit wrong. In the UI I see the interface, and with virsh dumpxml I get:
<interface type='bridge'>
<mac address='00:1a:4a:16:01:83'/>
<source bridge='ovirtmgmt'/>
<target dev='vnet9'/>
<model type='virtio'/>
<driver name='vhost' queues='4'/>
<filterref filter='vdsm-no-mac-spoofing'/>
<link state='up'/>
<mtu size='1500'/>
<alias name='ua-c8a50041-2d13-456d-acb6-b57fdaea434b'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
lspci on the VM also shows the interface, and I can bring the interface up again with: ifup enp1s0. So it only loses the connection, not the interface.
When I restart the VM I get this log message:
/var/log/syslog:Feb 21 10:11:29 tv-planer sh[391]: ifup: failed to bring up enp1s0
/var/log/syslog:Feb 21 10:11:29 tv-planer systemd[1]: ifup(a)enp1s0.service: Main process exited, code=exited, status=1/FAILURE
/var/log/syslog:Feb 21 10:11:29 tv-planer systemd[1]: ifup(a)enp1s0.service: Failed with result 'exit-code'.
But the VM gets its IP and is reachable.
On 20.02.22 at 14:06, Strahil Nikolov wrote:
Do you see all nics in the UI? What type are they?
Set this alias on the hypervisors:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
and then use 'virsh dumpxml name-of-vm' to identify how many nics the VM has.
If you got the correct settings in oVirt, use 'lspci -vvvvv'.
Best Regards, Strahil Nikolov
On Sun, Feb 20, 2022 at 11:32, Jonathan Baecker <jonbae77(a)gmail.com> wrote:
Hello everybody,
I have a strange behavior here: we have a 3-node self-hosted cluster
with around 20 VMs running on it. For a while I have had the problem
that one VM loses its network interface after some days. Because that
VM was only for testing, I was too lazy to dig in and figure out what
was happening.
Now I have a second VM with the same problem, and this VM is more
important. Both VMs run Debian 10 and use CIFS mounts, so maybe that
is related?
Has anyone of you seen this behavior? Can you give me a hint how I
can fix it?
At the moment I can't provide a log file, because I don't know the
exact time when it happened. I also don't know whether the problem
comes from oVirt or from the operating system inside the VMs.
Have a nice day!
Jonathan
hosted engine deployment (v4.4.10) - TASK Check engine VM health - fatal FAILED
by Charles Stellen
Dear oVirt Hackers,
(sorry: this was incidentally sent to devel(a)ovirt.org first)
we are dealing with a hosted engine deployment issue on fresh AMD EPYC
servers, and we are ready to donate hardware to the oVirt community after
we pass this issue ( :-) )
0/ base infra:
- 3 identical physical servers (produced in 2021-4Q)
- fresh, clean and recent version of centos 8 stream installed
(@^minimal-environment)
- servers are interconnected with a Cisco switch, visible to each other on
the network, all with internet access (NAT)
1/ storage:
- all 3 servers/nodes host a clean glusterfs (v9.5), and volume
"vol-images01" is ready for VM images
- ovirt hosted engine deployment procedure:
- easily accepts the mentioned glusterfs storage domain
- mounts it during "hosted-engine --deploy" with no issue
- all permissions are set correctly on all glusterfs nodes ("chown
vdsm.kvm vol-images01")
- no issue with storage domain at all
2/ ovirt - hosted engine deployment:
- all 3 servers successfully deployed a recent oVirt version with the
standard procedure
(on top of minimal install of centos 8 stream):
dnf -y install ovirt-host
virt-host-validate: PASS ALL
- on the first server we continue with:
dnf -y install ovirt-engine-appliance
hosted-engine --deploy (pure commandline - so no cockpit is used)
DEPLOYMENT ISSUE:
- during the "hosted-engine --deploy" procedure, the hosted engine becomes
temporarily accessible at: https://server01:6900/ovirt-engine/
- with a request to manually set the "ovirtmgmt" virtual nic
- Hosts > server01 > Network Interfaces > [SETUP HOST NETWORKS]
"ovirtmgmt" dropped to eno1 - [OK]
- then all passes fine - and host "server01" becomes Active
- back to the command line to continue the deployment ("Pause execution
until /tmp/ansible.jksf4_n2_he_setup_lock is removed") by removing the
lock file
- the deployment then passes all steps until "[ INFO ] TASK
[ovirt.ovirt.hosted_engine_setup : Check engine VM health]"
ISSUE DETAILS: the new VM is not accessible in the final stage, when it
should be reachable at its final IP:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if Engine IP is
different from engine's he_fqdn resolved IP]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
"Engine VM IP address is while the engine's he_fqdn
ovirt-engine.mgmt.pss.local resolves to 10.210.1.101. If you are using
DHCP, check your DHCP reservation configuration"}
- the problem is the same whether we go with a "Static" IP (provided
during the answering procedure) or the "DHCP" way (with a properly set up
DHCP and DNS server responding with the correct IP for both)
WE ARE STUCK THERE
WE TRIED:
- connecting to the terminal/VNC of the running "HostedEngine" VM to figure
out the internal network issue - no success, e.g. with variants of the
commands below (note the local VM may be named "HostedEngineLocal" during
the bootstrap phase; virsh list shows which):
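  hosted-engine --console

  # or straight through libvirt, using vdsm's auth file
  virsh -c 'qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf' list --all
  virsh -c 'qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf' console HostedEngine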
Any suggestion how to "connect" into the newly deployed, up and running
HostedEngine VM, to figure out and eventually manually fix the internal
network issue?
Thank You all for your help
Charles Stellen
PS: we are experienced with oVirt deployment (since version 4.0) and with
GNU/Linux KVM-based virtualisation for 10+ years, so just ask for any
details - WE ARE READY to provide online debugging, and direct access to
the servers is not a problem
PPS: after we pass this deployment - and after our decommissioning
procedure - we are ready to provide older HW to the oVirt community
help using nvme/tcp storage with cinderlib and Managed Block Storage
by muli@lightbitslabs.com
Hi everyone,
We are trying to set up oVirt (4.3.10 at the moment, customer preference) to use Lightbits (https://www.lightbitslabs.com) storage via our OpenStack Cinder driver with cinderlib. The cinderlib and Cinder driver bits are working fine, but when oVirt tries to attach the device to a VM we get the following error:
libvirt: error : cannot create file '/var/run/libvirt/qemu/18-mulivm1.dev/mapper/': Is a directory
We get the same error regardless of whether I try to run the VM or attach the device while it is running. The error appears to come from vdsm, which passes '/dev/mapper/' as the preferred device path:
2022-02-22 09:50:11,848-0500 INFO (vm/3ae7dcf4) [vdsm.api] FINISH appropriateDevice return={'path': '/dev/mapper/', 'truesize': '53687091200', 'apparentsize': '53687091200'} from=internal, task_id=77f40c4e-733d-4d82-b418-aaeb6b912d39 (api:54)
2022-02-22 09:50:11,849-0500 INFO (vm/3ae7dcf4) [vds] prepared volume path: /dev/mapper/ (clientIF:510)
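So far we have been poking at it along these lines (default vdsm log path;
adjust the grep patterns as needed) - the bare '/dev/mapper/' path would
fit the connector never returning a multipath id for the volume:

  # what vdsm resolved for the attachment
  grep -E 'appropriateDevice|ManagedVolume' /var/log/vdsm/vdsm.log | tail -20

  # does the host actually have a multipath device for the volume?
  multipath -ll
  lsblk -o NAME,SIZE,TYPE,MOUNTPOINT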
Suggestions for how to debug this further? Is this a known issue? Did anyone get nvme/tcp storage working with ovirt and/or vdsm?
Thanks,
Muli
Migrating VMs from 4.3 Ovirt Env to new 4.4 Env
by Abe E
As the title states, I am moving VMs from an old system of ours with a lot of issues to a new 4.4 HCI Gluster environment, but I seem to be running into what I have learned is a 4.3 bug of some sort with exporting OVAs.
Some of my OVAs show sizes of many GB although their actual size may be 20K, and they show no boot device as well; there's really no data there. Some OVAs are fine but some are broken. Has anyone dealt with this, and what's the best way around it?
I have no remote access at the moment, but for when I get to the machines: I read somewhere that it's better to just detach the disks, download them to an external HDD for example, then build new VMs, upload those disks and attach them instead. Is that the way to go?
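If I do go that route, I gather the Python SDK's example scripts
(download_disk.py / upload_disk.py, shipped with the
python3-ovirt-engine-sdk4 package) can move disks without an OVA export -
roughly like this, though the exact flags vary by SDK version and the
UUIDs/paths here are placeholders:

  python3 download_disk.py -c old-engine <disk-uuid> /mnt/usb/vm01-disk1.qcow2
  python3 upload_disk.py -c new-engine --sd-name vmstore /mnt/usb/vm01-disk1.qcow2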
Please do share if you have any better ways. Usually I just export the OVA to an external HDD, remount it on the new oVirt and import the OVA, but a few of these VMs out of the lot did not succeed, and I am running out of downtime window.
Thanks a bunch - I have noticed repeated names helping me, and I am very grateful for your help.
Re: oVirt alternatives
by Thomas Hoberg
That's exactly the direction I originally understood oVirt would go, with the ability to run VMs and containers side by side on the bare metal, or nested with containers inside VMs for stronger resource or security isolation and network virtualization. To me it sounded especially attractive with an HCI underpinning, so you could deploy it also in the field with small 3-node clusters.
But combining all those features evidently comes at too high a cost for all the integration, and the customer base is either too small or too poor: the cloud players are all out to make sure you no longer run any hardware, and then it's really just about pushing your applications there, as cloud-native or "IaaS"-compatible as needed.
E.g. I don't see PCI pass-through coming to kubevirt to enable GPU use, because it ties the machine to a specific host and goes against the grain of K8s as I understand it.
Memory overcommit is quite funny, really, because it's the same issue as the original virtual memory: essentially you lie to your consumer about the resources available and then swap pages back and forth in an attempt to keep all your consumers happy. It was processes for virtual memory, it's VMs now for the hypervisor, and in both cases it's about the consumer and the provider not continuously negotiating for the resources they need and the price they are willing to pay.
That negotiation is always better at the highest level of abstraction, the application itself, which is why implementing it at the lower levels (e.g. VMs) becomes less useful and needed.
And then there is technology like CXL, which essentially turns RAM into a fabric: your local CPU will just get RAM from another piece of hardware when your application needs more RAM and is willing to pay the premium something will charge for it.
With that type of hardware much of what hypervisors used to do goes into DPUs/IPUs and CPUs are just running applications making hypercalls. The kernel is just there to bootstrap.
Not sure we'll see that type of hardware at home or at the edge, though...