Problem with logical network
by Alex Irmel Oviedo Solis
Hello, I'm trying to run a VM with two Ethernet interfaces, one attached to ovirtmgmt and the other attached to a logical network named internal_servers, and I get this error:
"VM is down with error. Exit message: Cannot get interface MTU on 'br-int': No such device."
Then I created the bridge manually, but I get the following error:
"VM is down with error. Exit message: Hook Error: ('',)."
Please help me.
Best Regards
5 years, 1 month
oVirt 4.3.5 glusterfs 6.3 performance tuning
by adrianquintero@gmail.com
Hello,
I have a hyperconverged setup using oVirt 4.3.5, and the "optimize for virt store" option seems to fail on Gluster volumes.
I am seeing poor performance and am trying to figure out how I should tune Gluster for better performance.
Can you provide any suggestions on the following volume settings (parameters)?
Option                                  Value
------                                  -----
cluster.lookup-unhashed on
cluster.lookup-optimize on
cluster.min-free-disk 10%
cluster.min-free-inodes 5%
cluster.rebalance-stats off
cluster.subvols-per-directory (null)
cluster.readdir-optimize off
cluster.rsync-hash-regex (null)
cluster.extra-hash-regex (null)
cluster.dht-xattr-name trusted.glusterfs.dht
cluster.randomize-hash-range-by-gfid off
cluster.rebal-throttle normal
cluster.lock-migration off
cluster.force-migration off
cluster.local-volume-name (null)
cluster.weighted-rebalance on
cluster.switch-pattern (null)
cluster.entry-change-log on
cluster.read-subvolume (null)
cluster.read-subvolume-index -1
cluster.read-hash-mode 1
cluster.background-self-heal-count 8
cluster.metadata-self-heal off
cluster.data-self-heal off
cluster.entry-self-heal off
cluster.self-heal-daemon on
cluster.heal-timeout 600
cluster.self-heal-window-size 1
cluster.data-change-log on
cluster.metadata-change-log on
cluster.data-self-heal-algorithm full
cluster.eager-lock enable
disperse.eager-lock on
disperse.other-eager-lock on
disperse.eager-lock-timeout 1
disperse.other-eager-lock-timeout 1
cluster.quorum-type auto
cluster.quorum-count (null)
cluster.choose-local off
cluster.self-heal-readdir-size 1KB
cluster.post-op-delay-secs 1
cluster.ensure-durability on
cluster.consistent-metadata no
cluster.heal-wait-queue-length 128
cluster.favorite-child-policy none
cluster.full-lock yes
diagnostics.latency-measurement off
diagnostics.dump-fd-stats off
diagnostics.count-fop-hits off
diagnostics.brick-log-level INFO
diagnostics.client-log-level INFO
diagnostics.brick-sys-log-level CRITICAL
diagnostics.client-sys-log-level CRITICAL
diagnostics.brick-logger (null)
diagnostics.client-logger (null)
diagnostics.brick-log-format (null)
diagnostics.client-log-format (null)
diagnostics.brick-log-buf-size 5
diagnostics.client-log-buf-size 5
diagnostics.brick-log-flush-timeout 120
diagnostics.client-log-flush-timeout 120
diagnostics.stats-dump-interval 0
diagnostics.fop-sample-interval 0
diagnostics.stats-dump-format json
diagnostics.fop-sample-buf-size 65535
diagnostics.stats-dnscache-ttl-sec 86400
performance.cache-max-file-size 0
performance.cache-min-file-size 0
performance.cache-refresh-timeout 1
performance.cache-priority
performance.cache-size 32MB
performance.io-thread-count 16
performance.high-prio-threads 16
performance.normal-prio-threads 16
performance.low-prio-threads 32
performance.least-prio-threads 1
performance.enable-least-priority on
performance.iot-watchdog-secs (null)
performance.iot-cleanup-disconnected-reqs off
performance.iot-pass-through false
performance.io-cache-pass-through false
performance.cache-size 128MB
performance.qr-cache-timeout 1
performance.cache-invalidation false
performance.ctime-invalidation false
performance.flush-behind on
performance.nfs.flush-behind on
performance.write-behind-window-size 1MB
performance.resync-failed-syncs-after-fsync off
performance.nfs.write-behind-window-size 1MB
performance.strict-o-direct on
performance.nfs.strict-o-direct off
performance.strict-write-ordering off
performance.nfs.strict-write-ordering off
performance.write-behind-trickling-writes on
performance.aggregate-size 128KB
performance.nfs.write-behind-trickling-writes on
performance.lazy-open yes
performance.read-after-open yes
performance.open-behind-pass-through false
performance.read-ahead-page-count 4
performance.read-ahead-pass-through false
performance.readdir-ahead-pass-through false
performance.md-cache-pass-through false
performance.md-cache-timeout 1
performance.cache-swift-metadata true
performance.cache-samba-metadata false
performance.cache-capability-xattrs true
performance.cache-ima-xattrs true
performance.md-cache-statfs off
performance.xattr-cache-list
performance.nl-cache-pass-through false
features.encryption off
network.frame-timeout 1800
network.ping-timeout 30
network.tcp-window-size (null)
client.ssl off
network.remote-dio off
client.event-threads 4
client.tcp-user-timeout 0
client.keepalive-time 20
client.keepalive-interval 2
client.keepalive-count 9
network.tcp-window-size (null)
network.inode-lru-limit 16384
auth.allow *
auth.reject (null)
transport.keepalive 1
server.allow-insecure on
server.root-squash off
server.all-squash off
server.anonuid 65534
server.anongid 65534
server.statedump-path /var/run/gluster
server.outstanding-rpc-limit 64
server.ssl off
auth.ssl-allow *
server.manage-gids off
server.dynamic-auth on
client.send-gids on
server.gid-timeout 300
server.own-thread (null)
server.event-threads 4
server.tcp-user-timeout 42
server.keepalive-time 20
server.keepalive-interval 2
server.keepalive-count 9
transport.listen-backlog 1024
transport.address-family inet
performance.write-behind on
performance.read-ahead off
performance.readdir-ahead on
performance.io-cache off
performance.open-behind on
performance.quick-read off
performance.nl-cache off
performance.stat-prefetch on
performance.client-io-threads on
performance.nfs.write-behind on
performance.nfs.read-ahead off
performance.nfs.io-cache off
performance.nfs.quick-read off
performance.nfs.stat-prefetch off
performance.nfs.io-threads off
performance.force-readdirp true
performance.cache-invalidation false
performance.global-cache-invalidation true
features.uss off
features.snapshot-directory .snaps
features.show-snapshot-directory off
features.tag-namespaces off
network.compression off
network.compression.window-size -15
network.compression.mem-level 8
network.compression.min-size 0
network.compression.compression-level -1
network.compression.debug false
features.default-soft-limit 80%
features.soft-timeout 60
features.hard-timeout 5
features.alert-time 86400
features.quota-deem-statfs off
geo-replication.indexing off
geo-replication.indexing off
geo-replication.ignore-pid-check off
geo-replication.ignore-pid-check off
features.quota off
features.inode-quota off
features.bitrot disable
debug.trace off
debug.log-history no
debug.log-file no
debug.exclude-ops (null)
debug.include-ops (null)
debug.error-gen off
debug.error-failure (null)
debug.error-number (null)
debug.random-failure off
debug.error-fops (null)
nfs.disable on
features.read-only off
features.worm off
features.worm-file-level off
features.worm-files-deletable on
features.default-retention-period 120
features.retention-mode relax
features.auto-commit-period 180
storage.linux-aio off
storage.batch-fsync-mode reverse-fsync
storage.batch-fsync-delay-usec 0
storage.owner-uid 36
storage.owner-gid 36
storage.node-uuid-pathinfo off
storage.health-check-interval 30
storage.build-pgfid off
storage.gfid2path on
storage.gfid2path-separator :
storage.reserve 1
storage.health-check-timeout 10
storage.fips-mode-rchecksum off
storage.force-create-mode 0
storage.force-directory-mode 0
storage.create-mask 777
storage.create-directory-mask 777
storage.max-hardlinks 100
features.ctime on
config.gfproxyd off
cluster.server-quorum-type server
cluster.server-quorum-ratio 0
changelog.changelog off
changelog.changelog-dir {{ brick.path }}/.glusterfs/changelogs
changelog.encoding ascii
changelog.rollover-time 15
changelog.fsync-interval 5
changelog.changelog-barrier-timeout 120
changelog.capture-del-path off
features.barrier disable
features.barrier-timeout 120
features.trash off
features.trash-dir .trashcan
features.trash-eliminate-path (null)
features.trash-max-filesize 5MB
features.trash-internal-op off
cluster.enable-shared-storage disable
locks.trace off
locks.mandatory-locking off
cluster.disperse-self-heal-daemon enable
cluster.quorum-reads no
client.bind-insecure (null)
features.shard on
features.shard-block-size 64MB
features.shard-lru-limit 16384
features.shard-deletion-rate 100
features.scrub-throttle lazy
features.scrub-freq biweekly
features.scrub false
features.expiry-time 120
features.cache-invalidation off
features.cache-invalidation-timeout 60
features.leases off
features.lease-lock-recall-timeout 60
disperse.background-heals 8
disperse.heal-wait-qlength 128
cluster.heal-timeout 600
dht.force-readdirp on
disperse.read-policy gfid-hash
cluster.shd-max-threads 8
cluster.shd-wait-qlength 10000
cluster.locking-scheme granular
cluster.granular-entry-heal enable
features.locks-revocation-secs 0
features.locks-revocation-clear-all false
features.locks-revocation-max-blocked 0
features.locks-monkey-unlocking false
features.locks-notify-contention no
features.locks-notify-contention-delay 5
disperse.shd-max-threads 1
disperse.shd-wait-qlength 1024
disperse.cpu-extensions auto
disperse.self-heal-window-size 1
cluster.use-compound-fops off
performance.parallel-readdir off
performance.rda-request-size 131072
performance.rda-low-wmark 4096
performance.rda-high-wmark 128KB
performance.rda-cache-limit 10MB
performance.nl-cache-positive-entry false
performance.nl-cache-limit 10MB
performance.nl-cache-timeout 60
cluster.brick-multiplex off
cluster.max-bricks-per-process 250
disperse.optimistic-change-log on
disperse.stripe-cache 4
cluster.halo-enabled False
cluster.halo-shd-max-latency 99999
cluster.halo-nfsd-max-latency 5
cluster.halo-max-latency 5
cluster.halo-max-replicas 99999
cluster.halo-min-replicas 2
features.selinux on
cluster.daemon-log-level INFO
debug.delay-gen off
delay-gen.delay-percentage 10%
delay-gen.delay-duration 100000
delay-gen.enable
disperse.parallel-writes on
features.sdfs off
features.cloudsync off
features.ctime on
ctime.noatime on
feature.cloudsync-storetype (null)
features.enforce-mandatory-lock off
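For comparison, the options oVirt expects on a virt-store volume come from Gluster's built-in "virt" group profile (shipped as /var/lib/glusterd/groups/virt). A minimal sketch of applying and re-checking it by hand, with the volume name as a placeholder; this is a baseline, not a performance guarantee:

  # apply the virt group profile to the volume
  gluster volume set <VOLNAME> group virt
  # re-check the options most relevant to VM workloads
  gluster volume get <VOLNAME> all | grep -E 'shard|remote-dio|strict-o-direct|eager-lock|event-threads|quorum'

Judging by the listing above (sharding on, eager-lock enabled, strict-o-direct on), most of that profile already seems to be in place, so the poor performance may need to be chased at the network or brick level rather than in volume options.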
thank you!
5 years, 1 month
Re: Recover node from partial upgrade
by Strahil
You can boot from the older image (via the GRUB menu), and then a cleanup procedure should be performed.
I'm not sure whether the thin LVs will be removed when you rerun the upgrade process, or whether they will remain the same.
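A minimal sketch of how the layered-image state could be inspected before and after rerunning the upgrade, assuming the standard oVirt Node tooling is present (the VG name below is the usual onn default and may differ):

  # which image layers exist and which one is currently booted
  nodectl info
  imgbase layout
  # the thin LVs backing those layers
  lvs -a onn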
Best Regards,
Strahil Nikolov

On Oct 4, 2019 14:04, Stefano Danzi <s.danzi(a)hawai.it> wrote:
>
> Hello!!!
>
> I upgraded oVirt Node 4.3.5.2 to 4.3.6 using the Engine UI.
> In the middle of the upgrade the Engine fenced it; I can't understand why.
>
> Now the Engine UI and yum report that there are no updates, but the node is
> still at version 4.3.5.2.
> If I run "yum reinstall ovirt-node-ng-image-update", the result is as
> follows, but the system remains not upgraded.
>
> ovirt-node-ng-image-update-4.3.6-1.el7.noarch.rpm | 719 MB 00:06:31
> Running transaction check
> Running transaction test
> Transaction test succeeded
> Running transaction
> Installing : ovirt-node-ng-image-update-4.3.6-1.el7.noarch 1/1
> warning: %post(ovirt-node-ng-image-update-4.3.6-1.el7.noarch) scriptlet
> failed, exit status 1
> Non-fatal POSTIN scriptlet failure in rpm package
> ovirt-node-ng-image-update-4.3.6-1.el7.noarch
> Uploading Package Profile
> Cannot upload package profile. Is this client registered?
> Verifying : ovirt-node-ng-image-update-4.3.6-1.el7.noarch 1/1
>
> Installed:
> ovirt-node-ng-image-update.noarch 0:4.3.6-1.el7
>
> Complete!
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/Y52X663MAR3...
5 years, 1 month
Recover node from partial upgrade
by Stefano Danzi
Hello!!!
I upgraded oVirt Node 4.3.5.2 to 4.3.6 using the Engine UI.
In the middle of the upgrade the Engine fenced it; I can't understand why.
Now the Engine UI and yum report that there are no updates, but the node is
still at version 4.3.5.2.
If I run "yum reinstall ovirt-node-ng-image-update", the result is as
follows, but the system remains not upgraded.
ovirt-node-ng-image-update-4.3.6-1.el7.noarch.rpm | 719 MB 00:06:31
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : ovirt-node-ng-image-update-4.3.6-1.el7.noarch 1/1
warning: %post(ovirt-node-ng-image-update-4.3.6-1.el7.noarch) scriptlet
failed, exit status 1
Non-fatal POSTIN scriptlet failure in rpm package
ovirt-node-ng-image-update-4.3.6-1.el7.noarch
Uploading Package Profile
Cannot upload package profile. Is this client registered?
Verifying : ovirt-node-ng-image-update-4.3.6-1.el7.noarch 1/1
Installed:
ovirt-node-ng-image-update.noarch 0:4.3.6-1.el7
Complete!
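If it helps, the usual places to look for why the %post scriptlet failed (a sketch, not a confirmed diagnosis):

  # what the update scriptlet actually runs
  rpm -q --scripts ovirt-node-ng-image-update
  # imgbased keeps its own log while installing the new layer
  less /var/log/imgbased.log
  # plus the journal around the time of the yum transaction
  journalctl --since "1 hour ago" | grep -i imgbased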
5 years, 1 month
Re: Ovirt 4.2.7 won't start and drops to emergency console
by Strahil
Trust Red Hat :)
At least their approach should be safer.
Of course, you can raise a documentation bug, but RHEL 7 is at a phase in its lifecycle where it might not be fixed unless the same issue is found in v8.
Best Regards,
Strahil Nikolov

On Oct 2, 2019 05:43, jeremy_tourville(a)hotmail.com wrote:
>
> http://man7.org/linux/man-pages/man7/lvmthin.7.html
> Command to repair a thin pool:
> lvconvert --repair VG/ThinPoolLV
>
> Repair performs the following steps:
>
> 1. Creates a new, repaired copy of the metadata.
> lvconvert runs the thin_repair command to read damaged metadata from
> the existing pool metadata LV, and writes a new repaired copy to the
> VG's pmspare LV.
>
> 2. Replaces the thin pool metadata LV.
> If step 1 is successful, the thin pool metadata LV is replaced with
> the pmspare LV containing the corrected metadata. The previous thin
> pool metadata LV, containing the damaged metadata, becomes visible
> with the new name ThinPoolLV_tmetaN (where N is 0,1,...).
>
> If the repair works, the thin pool LV and its thin LVs can be
> activated, and the LV containing the damaged thin pool metadata can
> be removed. It may be useful to move the new metadata LV (previously
> pmspare) to a better PV.
>
> If the repair does not work, the thin pool LV and its thin LVs are
> lost.
>
> This info seems to conflict with Red Hat's advice. Red Hat says that if the metadata volume is full, you should not run an lvconvert --repair operation. Now I am confused. I am familiar with LVM and comfortable with it, but this is my first time trying to repair a thin LVM pool; the concept of a metadata volume is new to me.
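For readers following along, the man-page sequence quoted above boils down to roughly the following (a sketch only, with hypothetical VG/LV names, and per the Red Hat note it should not be run blindly when the metadata LV is full):

  # deactivate the thin pool first
  lvchange -an myvg/mythinpool
  # write a repaired copy of the metadata to the pmspare LV and swap it in
  lvconvert --repair myvg/mythinpool
  # reactivate and verify; the damaged metadata stays visible as mythinpool_tmeta0 (per the man page above)
  lvchange -ay myvg/mythinpool
  lvs -a myvg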
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/FFN3HWUQVRP...
5 years, 1 month
Re: Cannot enable maintenance mode
by Benny Zlotnik
Did you try the "Confirm Host has been rebooted" button?
On Wed, Oct 2, 2019 at 9:17 PM Bruno Martins <bruno.o.martins(a)gfi.world> wrote:
>
> Hello guys,
>
> No ideas for this issue?
>
> Thanks for your cooperation!
>
> Kind regards,
>
> -----Original Message-----
> From: Bruno Martins <bruno.o.martins(a)gfi.world>
> Sent: 29 de setembro de 2019 16:16
> To: users(a)ovirt.org
> Subject: [ovirt-users] Cannot enable maintenance mode
>
> Hello guys,
>
> I am unable to put a host from a two-node cluster into maintenance mode in order to remove it from the cluster afterwards.
>
> This is what I see in engine.log:
>
> 2019-09-27 16:20:58,364 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-45) [4cc251c9] Correlation ID: 4cc251c9, Job ID: 65731fbb-db34-49a9-ab56-9fba59bc0ee0, Call Stack: null, Custom Event ID: -1, Message: Host CentOS-H1 cannot change into maintenance mode - not all Vms have been migrated successfully. Consider manual intervention: stopping/migrating Vms: Non interactive user (User: admin).
>
> The host has been rebooted multiple times. vdsClient shows no VMs running.
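For what it's worth, the host-side check mentioned above is typically done like this (a sketch; which client tool exists depends on the vdsm version):

  # VMs the host itself believes it is running (an empty table means none)
  vdsClient -s 0 list table        # older vdsm
  vdsm-client Host getVMList       # newer vdsm

If the host really runs nothing, the "Confirm Host has been rebooted" action suggested above is what tells the engine to drop its stale record of VMs on that host.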
>
> What else can I do?
>
> Kind regards,
>
> Bruno Martins
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/X5FJWFW7GXN...
5 years, 1 month
mount to removed storage domain on node with HostedEngine
by Mark Steele
Hello,
oVirt Engine Version: 3.5.0.1-1.el6
We recently removed the Data (Master) storage domain from our ovirt cluster
and replaced it with another. All is working great. When looking at the old
storage device I noticed that one of our nodes still has an NFS connection
to it.
Looking at the results for 'mount' I see two mounts to the node in question
(192.168.64.15):
192.168.64.15:/nfs-share/ovirt-store/hosted-engine on
/rhev/data-center/mnt/192.168.64.15:_nfs-share_ovirt-store_hosted-engine
type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.15,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.64.15)
192.168.64.11:/export/testovirt on
/rhev/data-center/mnt/192.168.64.11:_export_testovirt
type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.11,mountvers=3,mountport=46034,mountproto=udp,local_lock=none,addr=192.168.64.11)
192.168.64.163:/export/storage on
/rhev/data-center/mnt/192.168.64.163:_export_storage
type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.163,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.64.163)
192.168.64.55:/export/storage on
/rhev/data-center/mnt/192.168.64.55:_export_storage
type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.55,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.64.55)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
192.168.64.15:/nfs-share/ovirt-store/hosted-engine on
/rhev/data-center/mnt/192.168.64.15:_nfs-share_ovirt-store_hosted-engine
type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.15,mountvers=3,mountport=892,mountproto=udp,local_lock=none,addr=192.168.64.15)
10.1.90.64:/ifs/telvue/infrastructure/iso on
/rhev/data-center/mnt/10.1.90.64:_ifs_telvue_infrastructure_iso type nfs
(rw,relatime,vers=3,rsize=131072,wsize=524288,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.1.90.64,mountvers=3,mountport=300,mountproto=udp,local_lock=none,addr=10.1.90.64)
192.168.64.163:/export/storage/iso-store on
/rhev/data-center/mnt/192.168.64.163:_export_storage_iso-store type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=192.168.64.163,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.64.163)
/etc/fstab has no entry for these, so I assume they are left over from when
the storage domain existed.
Is it safe to 'umount' these mounts, or is there a hook I may not be aware
of? Is there another way of removing them from the node via the oVirt Manager?
None of the other nodes in the cluster have this mount. This node is not
the SPM.
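Before touching anything, a quick check of whether a mount is still in use might look like this (a sketch; the path is taken from the listing above, and since this node runs the HostedEngine the hosted-engine mount may well still be in active use):

  # is anything holding the mount open?
  fuser -vm /rhev/data-center/mnt/192.168.64.15:_nfs-share_ovirt-store_hosted-engine
  # what the hosted-engine agent thinks of its storage
  hosted-engine --vm-status

If nothing is using a mount and it no longer corresponds to a storage domain in the engine, an umount is generally harmless; if fuser shows users, that is usually a sign to leave it alone.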
Thank you for your time and consideration.
Best regards,
***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook:
https://www.facebook.com/telvue
5 years, 1 month
ovirt 4.3.6 kickstart install fails when creating the LVM thin pool
by adrianquintero@gmail.com
Kickstart entries:
-------------------------------------------------------------------------------------------------------------
liveimg --url=http://192.168.1.10/ovirt-iso-436/ovirt-node-ng-image.squashfs.img
clearpart --drives=sda --initlabel --all
autopart --type=thinp
rootpw --iscrypted $1$xxxxxxxxxxxbSLxxxxxxxxxxgwc0
lang en_US
keyboard --vckeymap=us --xlayouts='us'
timezone --utc America/New_York --ntpservers=0.centos.pool.ntp.org,1.centos.pool.ntp.org,2.centos.pool.ntp.org,3.centos.pool.ntp
#network --hostname=host21.example.com
network --onboot yes --device eno49
zerombr
text
reboot
-------------------------------------------------------------------------------------------------------------
The error on screen, right after "Creating swap on /dev/mapper/onn_host1-swap":
DeviceCreatorError: ('lvcreate failed for onn_host1/pool00: running /sbin/lvm lvcreate --thinpool onn_host1/pool00 --size 464304m --poolmetadatasize 232 --chunksize 64 --config devices { preffered_names=["^/dev/mapper/", "^/dev/md", "^/dev/sd"] } failed', ' onn_host1-pool00')
No issue with oVirt 4.3.5 and the same kickstart file.
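Not a confirmed fix, but one thing to experiment with while this is investigated is spelling the thin pool out explicitly instead of relying on autopart --type=thinp. A sketch only; sizes are placeholders, the VG name follows the onn_host1 convention from the error, and oVirt Node has its own layout expectations, so treat this as a starting point rather than a validated layout:

  part /boot --fstype=ext4 --size=1024 --ondisk=sda
  part pv.01 --size=1 --grow --ondisk=sda
  volgroup onn_host1 pv.01
  logvol swap --vgname=onn_host1 --name=swap --fstype=swap --recommended
  logvol none --vgname=onn_host1 --name=pool00 --thinpool --size=400000 --grow
  logvol /    --vgname=onn_host1 --name=root --thin --poolname=pool00 --fstype=ext4 --size=10240
  logvol /var --vgname=onn_host1 --name=var --thin --poolname=pool00 --fstype=ext4 --size=15360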
Any suggestions?
Thanks,
AQ
5 years, 1 month
oVirt Gluster Fails to Failover after node failure
by Robert Crawford
One of my nodes has failed, and the storage domain isn't coming online because the primary node isn't up?
In the parameters there is backup-volfile-servers=192.168.100.2:192.168.100.3
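For reference, backup-volfile-servers only controls where the client can fetch the volume file at mount time; once mounted, the client talks to all bricks directly and availability depends on replica quorum. A manual test mount from a host shell would look something like this (the primary address, volume name and mount point are placeholders based on the addresses above):

  mount -t glusterfs \
    -o backup-volfile-servers=192.168.100.2:192.168.100.3 \
    192.168.100.1:/engine /mnt/gluster-test

If a test mount works from a surviving server but the domain still stays down, it is worth checking 'gluster volume status' and 'gluster peer status' on one of the remaining nodes for quorum problems.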
Any help?
5 years, 1 month
oVirt 4.3.5 WARN no gluster network found in cluster
by adrianquintero@gmail.com
Hi,
I have a 4.3.5 hyperconverged setup with 3 hosts, each host has 2x10G NIC ports
Host1:
NIC1: 192.168.1.11
NIC2: 192.168.0.67 (Gluster)
Host2:
NIC1: 10.10.1.12
NIC2: 192.168.0.68 (Gluster)
Host3:
NIC1: 10.10.1.13
NIC2: 192.168.0.69 (Gluster)
I am able to ping all the Gluster IPs from within the hosts, i.e. from host1 I can ping 192.168.0.68 and 192.168.0.69.
However, from the HostedEngine VM I can't ping any of those IPs:
[root@ovirt-engine ~]# ping 192.168.0.9
PING 192.168.0.60 (192.168.0.60) 56(84) bytes of data.
From 10.10.255.5 icmp_seq=1 Time to live exceeded
and on the HostedEngine I see the following WARNINGs (only for host1), which makes me think that I am not using a separate network exclusively for Gluster:
2019-08-21 21:04:34,215-04 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler10) [397b6038] Could not associate brick 'host1.example.com:/gluster_bricks/engine/engine' of volume 'ac1f73ce-cdf0-4bb9-817d-xxxxxxcxxx' with correct network as no gluster network found in cluster 'xxxxxxxx11e9-b8d3-00163e5d860d'
Any ideas?
thank you!!
2019-08-21 21:04:34,220-04 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler10) [397b6038] Could not associate brick 'vmm01.virt.ord1d:/gluster_bricks/data/data' of volume 'bc26633a-9a0b-49de-b714-97e76f222a02' with correct network as no gluster network found in cluster 'e98e2c16-c31e-11e9-b8d3-00163e5d860d'
2019-08-21 21:04:34,224-04 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler10) [397b6038] Could not associate brick 'host1:/gluster_bricks/vmstore/vmstore' of volume 'xxxxxxxxx-ca96-45cc-9e0f-649055e0e07b' with correct network as no gluster network found in cluster 'e98e2c16-c31e-11e9-b8d3xxxxxxxxxxxxxxx'
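A quick way to see why the engine cannot associate the bricks with a gluster network is to compare the address each brick was defined with against what the brick host name resolves to (a sketch; the volume and host names are taken from the warnings above):

  # how the bricks were defined when the volume was created
  gluster volume info engine | grep -i brick
  # which address that brick host name resolves to
  getent hosts host1.example.com

The warning itself says the cluster has no logical network flagged for gluster traffic; that role is typically set on a cluster logical network in the Administration Portal, and the network is then attached to the storage NIC (192.168.0.x here) on each host, after which the engine can match the bricks to it.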
5 years, 1 month