[Adding users]
On Tue, Feb 7, 2017 at 7:52 PM, paf1(a)email.cz <paf1(a)email.cz> wrote:
Dear Sahina,
*Q:*
"Any reason why the Arbiter node will not be managed by oVirt?"
*A:*
1) The arbiter node (which is also running oVirt) is not powerful enough in
CPU/memory/LAN resources to run VMs; it only keeps the master data and the
oVirt environment.
2) We don't want VMs to migrate to this node (the arbiter).
I think the oVirt scheduling policy can be set to NOT run VMs on the
arbiter node. This is something you can explore to solve your first 2
points.
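Conceptually, such a policy acts as a filter over the candidate hosts before placement. A minimal sketch of the idea (the host names, the `tags` field, and the `arbiter` tag are hypothetical; real oVirt scheduling policies are configured inside the engine, not via code like this):

```python
# Toy model of a scheduling-policy filter: given candidate hosts,
# drop any host tagged as an arbiter so VMs are never placed there.
# Illustrative only; not the actual oVirt filter API.

def filter_schedulable(hosts):
    """Return hosts eligible to run VMs (everything not tagged 'arbiter')."""
    return [h for h in hosts if "arbiter" not in h.get("tags", ())]

hosts = [
    {"name": "1kvm1", "tags": ()},
    {"name": "1kvm2", "tags": ()},
    {"name": "1kvm3", "tags": ("arbiter",)},  # hypothetical arbiter node
]

print([h["name"] for h in filter_schedulable(hosts)])  # ['1kvm1', '1kvm2']
```

With a filter like this in effect, the arbiter could stay managed by oVirt while never receiving VMs.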
The reason it'll be helpful to have the arbiter also managed by oVirt is so
that the maintenance and fencing logic are aware of the state of all nodes
in the cluster. There is Gluster-specific logic to ensure that quorum is
not lost while moving a node to maintenance or when a node is fenced.
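The quorum concern can be sketched numerically. Assuming server-side quorum with a strict-majority rule (the threshold is configurable in real gluster, so treat this as an illustration):

```python
# Check whether taking one more node down for maintenance would break
# server quorum in a gluster trusted pool (strict majority assumed).

def quorum_holds(total_nodes, nodes_up):
    """Server quorum: strictly more than half the pool must be up."""
    return nodes_up * 2 > total_nodes

def safe_to_maintain(total_nodes, nodes_up):
    """Safe only if quorum still holds after one more node goes down."""
    return quorum_holds(total_nodes, nodes_up - 1)

# Replica 3 arbiter pool: 3 servers.
print(safe_to_maintain(3, 3))  # True  (2 of 3 would still be up)
print(safe_to_maintain(3, 2))  # False (only 1 of 3 would remain)
```

This is exactly the check the engine's maintenance logic needs to make, and it can only make it if it knows about all three nodes, including the arbiter.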
3) Oracle DB requires licenses for all nodes in the cluster (one license
per available CPU socket in the cluster). Nobody will spend money on
unusable CPU sockets, especially a dedicated cluster user :-(
For this one, I have no solution.
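The licensing point is easy to illustrate numerically: if the arbiter sits in the same cluster, its sockets count toward the license even though it can never run VMs. A sketch with hypothetical socket counts:

```python
# Per-socket licensing summed across all cluster members.
# Node names and socket counts are hypothetical, for illustration only.

def licensed_sockets(nodes):
    return sum(n["sockets"] for n in nodes)

cluster_with_arbiter = [
    {"name": "1kvm1", "sockets": 2},
    {"name": "1kvm2", "sockets": 2},
    {"name": "arbiter", "sockets": 1},  # cannot run VMs, still counted
]

print(licensed_sockets(cluster_with_arbiter))      # 5 sockets licensed
print(licensed_sockets(cluster_with_arbiter[:2]))  # 4 if arbiter is excluded
```

Keeping the arbiter out of the licensed cluster is what avoids paying for sockets that can never host the database.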
For those reasons, the arbiter is not included in the cluster.
regards
Pavel
On 02/07/2017 02:33 PM, Sahina Bose wrote:
On Mon, Feb 6, 2017 at 6:36 PM, paf1(a)email.cz <paf1(a)email.cz> wrote:
> Hello everybody,
>
> We are using oVirt Engine Version: 4.0.6.3-1.el7.centos on centos 7.3
> with gluster replica 3 arbiter = (1+1)+1
>
> I'm confused by a delay in the GUI - when node details are requested
> (Cluster -> Nodes -> node detail, i.e., clicking a node row), the request
> takes over 10 minutes to display the details. This unexpected problem did
> not occur initially, but appeared later - I can't say exactly when.
>
I don't think the delay in GUI has anything to do with the logs below. Are
there any logs related to retrieving node details?
>
> The following partial excerpt from "engine.log" shows connectivity
> requests to the "arbiter node" (16.0.0.159).
> This 3rd gluster node (the arbiter) is NOT included in the oVirt
> environment and will NOT be.
> Maybe that is the problem, but I'm not sure, especially how to fix it.
>
Any reason why the Arbiter node will not be managed by oVirt?
>
> 2017-02-06 13:20:03,924 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler3) [49cebf0] START, GlusterServersListVDSCommand(HostName = 1kvm2, VdsIdVDSCommandParametersBase:{runAsync='true', hostId='258decac-46f4-4c15-b855-ad97b570ee60'}), log id: 6873151
> 2017-02-06 13:20:04,796 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler3) [49cebf0] FINISH, GlusterServersListVDSCommand, return: [172.16.5.162/24:CONNECTED, 172.16.5.161:CONNECTED, 16.0.0.159:CONNECTED], log id: 6873151
> 2017-02-06 13:20:04,814 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler3) [49cebf0] START, GlusterVolumesListVDSCommand(HostName = 1kvm2, GlusterVolumesListVDSParameters:{runAsync='true', hostId='258decac-46f4-4c15-b855-ad97b570ee60'}), log id: 381ae630
> 2017-02-06 13:20:05,970 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler3) [49cebf0] Could not add brick '16.0.0.159:/GLUSTER/1KVM12-sda2/GFS' to volume '19c27787-f1c9-4dee-8415-c6d1c81e3aa2' - server uuid 'f7670ea9-2204-4310-96a6-243c2c6a00de' not found in cluster '587fa2d8-017d-03b3-0003-00000000030d'
> 2017-02-06 13:20:05,987 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc] (DefaultQuartzScheduler3) [49cebf0] Could not add brick '16.0.0.159:/GLUSTER/1KVM12-sda1/GFS' to volume '96adac2a-0dc4-4bd8-ad79-23dd3448f73b' - server uuid 'f7670ea9-2204-4310-96a6-243c2c6a00de' not found in cluster '587fa2d8-017d-03b3-0003-00000000030d'
> 2017-02-06 13:20:05,987 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler3) [49cebf0] FINISH, GlusterVolumesListVDSCommand, return: {19c27787-f1c9-4dee-8415-c6d1c81e3aa2=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@b9f51962, 96adac2a-0dc4-4bd8-ad79-23dd3448f73b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@86597dda}, log id: 381ae630
>
> This repeats several times per minute, so the logs fill up quickly.
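Until the root cause is fixed, the repeated warnings can at least be summarized instead of read raw. A minimal sketch that counts the "Could not add brick" warnings per brick from an engine.log excerpt (the regex is tailored to the lines quoted above; the sample lines below are abbreviated stand-ins):

```python
import re
from collections import Counter

# Collapse the flood of identical WARN lines into a per-brick count.
WARN_RE = re.compile(r"Could not add brick '([^']+)'")

def summarize(log_text):
    """Return a Counter mapping brick path -> number of warnings."""
    return Counter(WARN_RE.findall(log_text))

sample = """
2017-02-06 13:20:05,970 WARN Could not add brick '16.0.0.159:/GLUSTER/1KVM12-sda2/GFS' to volume - server uuid not found
2017-02-06 13:20:05,987 WARN Could not add brick '16.0.0.159:/GLUSTER/1KVM12-sda1/GFS' to volume - server uuid not found
2017-02-06 13:21:05,970 WARN Could not add brick '16.0.0.159:/GLUSTER/1KVM12-sda2/GFS' to volume - server uuid not found
"""

for brick, count in summarize(sample).items():
    print(brick, count)
```

Running something like this over the real log would quickly show that every warning points at bricks on 16.0.0.159, the unmanaged arbiter.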
>
>
> OS Version: RHEL - 7 - 3.1611.el7.centos
> OS Description: CentOS Linux 7 (Core)
> Kernel Version: 3.10.0 - 514.6.1.el7.x86_64
> KVM Version: 2.6.0 - 28.el7_3.3.1
> LIBVIRT Version: libvirt-2.0.0-10.el7_3.4
> VDSM Version: vdsm-4.18.21-1.el7.centos
> SPICE Version: 0.12.4 - 19.el7
> GlusterFS Version: glusterfs-3.8.8-1.el7
> CEPH Version: librbd1-0.94.5-1.el7
>
>
> regards
> Paf1
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
>
> http://lists.ovirt.org/mailman/listinfo/users
>
>