VMs import over slow 1gig interface instead of fast 10gig interface?
by Jacob Green
OK, so I want to try to explain my situation more thoroughly. I
want to better understand what is happening, and whether there is a way to
force the import over the faster connection.
Today I have two oVirt environments, one on 4.1 and one on 4.2. On
the new environment we have migrated to new VLANs for Management and Data.
The export domain is an NFS domain hosted on a TrueNAS device that
has an IP address of X.X.12.100.
Our host has multiple Ethernet connections: one bonded 20-gigabit
interface, which is where we have the ovirtmgmt profile, and a secondary
1-gigabit interface that we call our VLAN12 profile, or server VLAN.
*The problem...* When importing a VM from the export domain, the import
happens over the 1-gigabit interface on the VLAN12 profile instead
of the 20-gigabit bonded interface on VLAN 100.
*My understanding...* So I probably just misunderstand how this works,
but here is what I thought should happen. In the *Setup Host Networks*
screen for this host I have ovirtmgmt on VLAN100, with the bonded
20-gigabit interface carrying the following roles: "*Management, Display,
Migration, VM and Default Route*." Then I have the secondary 1-gigabit
interface just doing "*VM*" on VLAN12. What I thought should happen is
that anything migration related ("*/does that include importing from the export
domain?/*") should happen on the Migration interface, which in my case is
the ovirtmgmt interface on VLAN100.
I have also confirmed that I can ping from the VLAN100 ovirtmgmt
interface to the storage on VLAN12, so there is connectivity from that
interface to the storage. What I am trying to figure out is whether there
is a way to force imports over the 20-gigabit interface that resides on
VLAN100 instead of the slower VLAN12 interface, which is just there for
general VM connectivity.
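For what it's worth, NFS traffic to the export domain follows the host's
kernel routing table, not the Migration network role (that role only covers
live migration), so the host will normally pick whichever interface has a
directly connected route to the storage address. A quick way to confirm which
interface the host would actually use (X.X.12.100 is the elided TrueNAS
address from above; substitute the real one):

```
# Ask the kernel which device it would route storage traffic through
ip route get X.X.12.100
# If the reply says "dev" followed by the VLAN12 interface, the import
# traffic will use the 1-gigabit link regardless of oVirt network roles.
```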
Thank you for any advice or guidance on this matter.
--
Jacob Green
Systems Admin
American Alloy Steel
Re: OvfUpdateIntervalInMinutes restored to original value too early?
by Andreas Elvers
I suppose /var/log/ovirt-hosted-engine-setup/engine-logs-2019-05-10T16:16:14Z/ovirt-engine/engine.log is the bootstrap engine log. It is attached.
The bootstrap engine looks quite OK when glancing through the dashboard, datacenter, storage domains, and so on, although there are two old hosted storage domains.
Upgrade engine to 4.3.3 or setup a new engine
by Magnus Isaksson
Hello all
I'm having trouble upgrading my engine from 4.2.8 to 4.3.3.
The upgrade goes well, with no errors, but the web GUI won't load; it just says "waiting for engine".
I have tried to set up a new machine with the engine and import the DB from the old one (after the upgrade), with the same problem.
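Not a fix, but when the UI hangs at "waiting for engine" it is worth checking
whether the engine service itself actually came up; a small diagnostic sketch
(the health URL is the engine's standard status servlet):

```
systemctl status ovirt-engine                         # is the engine service running?
curl http://localhost/ovirt-engine/services/health    # should report the DB and engine as up
tail -f /var/log/ovirt-engine/engine.log              # watch for startup errors while the GUI waits
```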
So after quite a few tries I gave up and set up a fresh new engine on 4.3.3.
But can I move/migrate the hosts, with running VMs, into the new engine?
In VMware vCenter you just add the host and all the VMs and such come over with it, but I'm guessing from what I've read that it is not that simple with oVirt.
Or how should I solve this?
//Magnus
HostedEngine cleaned up
by Sakhi Hadebe
Hi,
We have a situation where the HostedEngine was cleaned up and the VMs are
no longer running. Looking at the logs, we can see the drive files, e.g.:
2019-03-26T07:42:46.915838Z qemu-kvm: -drive
file=/rhev/data-center/mnt/glusterSD/glustermount.goku:_vmstore/9f8ef3f6-53f2-4b02-8a6b-e171b000b420/images/b2b872cd-b468-4f14-ae20-555ed823e84b/76ed4113-51b6-44fd-a3cd-3bd64bf93685,format=qcow2,if=none,id=drive-ua-b2b872cd-b468-4f14-ae20-555ed823e84b,serial=b2b872cd-b468-4f14-ae20-555ed823e84b,werror=stop,rerror=stop,cache=none,aio=native:
'serial' is deprecated, please use the corresponding option of '-device'
instead
I assume this is the disk it was writing to before it went down. Trying to
list the file gives an error; the file is not there:
ls -l
/rhev/data-center/mnt/glusterSD/glustermount.goku:_vmstore/9f8ef3f6-53f2-4b02-8a6b-e171b000b420/images/b2b872cd-b468-4f14-ae20-555ed823e84b/76ed4113-51b6-44fd-a3cd-3bd64bf93685
Is there a way we can recover the VM's disk images?
NOTE: No HostedEngine backups.
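For reference, a first thing worth checking is whether the image file still
exists on the underlying Gluster bricks (the volume appears to be vmstore,
judging by the mount path above). If it was deleted through the mount it will
be gone from the bricks too, but the check costs nothing:

```
# List the brick paths backing the volume (volume name assumed from the mount path)
gluster volume info vmstore
# On each brick host, search the brick for the image UUID from the qemu-kvm log
# (/path/to/brick is a placeholder for the brick path reported above)
find /path/to/brick -name '76ed4113-51b6-44fd-a3cd-3bd64bf93685*'
```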
--
Regards,
Sakhi Hadebe
engine-setup failed in upgrade step to 4.3.2 with DNF errors on CentOS 7.6
by Oliver Riesener
```
# engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files:
['/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf',
'/etc/ovirt-engine-setup.conf.d/10-packaging.conf',
'/etc/ovirt-engine-setup.conf.d/20-setup-aio.conf',
'/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf']
Log file:
/var/log/ovirt-engine/setup/ovirt-engine-setup-20190329154037-huchep.log
Version: otopi-1.8.1 (otopi-1.8.1-1.el7)
[ INFO ] DNF Downloading 1 files, 0.00KB
[ INFO ] DNF Downloaded Extra Packages for Enterprise Linux 7 - x86_64
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup (late)
[ INFO ] Stage: Environment customization
--== PRODUCT OPTIONS ==--
Set up Cinderlib integration
(Currently in tech preview)
(Yes, No) [No]:
[ INFO ] ovirt-provider-ovn already installed, skipping.
--== PACKAGES ==--
[ INFO ] Checking for product updates...
[ ERROR ] DNF 'Base' object has no attribute '_group_persistor'
[ INFO ] DNF Performing DNF transaction rollback
[ ERROR ] Failed to execute stage 'Environment customization': 'Base'
object has no attribute '_group_persistor'
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20190329154037-huchep.log
[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20190329154107-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
```
What can I do?
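The '_group_persistor' message looks like otopi tripping over a private DNF
attribute that changed between DNF versions, so comparing the installed otopi
and DNF packages is a reasonable first step (a diagnostic sketch, not a fix):

```
# Show which otopi and DNF builds are installed; a mismatch between them
# is the usual suspect when a private attribute like _group_persistor disappears.
rpm -qa | grep -Ei 'otopi|dnf'
```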
[Users] Lifecycle / upgradepath
by Sven Kieske
Hi Community,
Currently, there is no single document describing supported
(which means: working) upgrade scenarios.
I think the project has matured enough to have such a supported
upgrade path, which should be considered in the development of new
releases.
As far as I know, it is currently supported to upgrade
from x.y.z to x.y.z+1 and from x.y.z to x.y+1.z,
but not from x.y-1.z to x.y+1.z directly
(so, for example, 3.3.z to 3.4.z works, but 3.2.z to 3.4.z would
have to go through 3.3 first).
Maybe this should be put together in a wiki page at least.
It would also be good to know how long a single "release"
will be supported.
In this context I would define a release as a version
bump from x.y.z to x.y+1.z or to x+1.y.z;
a bump in z would be a bugfix release.
The question is: how long will we get bugfix releases
for a given version?
What are your thoughts?
--
Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
Planned restart of production services
by Evgheni Dereveanchin
Hi everyone,
I will be restarting several production systems within the following hour
to apply updates.
The following services may be unreachable for some period of time:
- resources.ovirt.org - package repositories
- jenkins.ovirt.org - CI master
Package repositories will be unreachable for a short period of time.
No new CI jobs will be started during this period.
I will send an announcement once maintenance is complete.
--
Regards,
Evgheni Dereveanchin
[Users] Nested virtualization with Opteron 2nd generation and oVirt 3.1 possible?
by Gianluca Cecchi
Hello,
I have 2 physical servers with Opteron 2nd generation CPUs.
CentOS 6.3 is installed, with some VMs already configured on them.
Their /proc/cpuinfo contains:
...
model name : Dual-Core AMD Opteron(tm) Processor 8222
...
The kvm_amd kernel module is loaded with its nested option enabled by default:
# systool -m kvm_amd -v
Module = "kvm_amd"
Attributes:
initstate = "live"
refcnt = "15"
srcversion = "43D8067144E7D8B0D53D46E"
Parameters:
nested = "1"
npt = "1"
...
I have already configured a Fedora 17 VM as an oVirt 3.1 engine.
I'm trying to configure another VM as an oVirt 3.1 node with
ovirt-node-iso-2.5.5-0.1.fc17.iso.
It seems I'm not able to configure it in a way that keeps the oVirt install from complaining.
After some attempts, I tried this in my vm.xml for the CPU:
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>athlon</model>
    <vendor>AMD</vendor>
    <feature policy='require' name='pni'/>
    <feature policy='require' name='rdtscp'/>
    <feature policy='force' name='svm'/>
    <feature policy='require' name='clflush'/>
    <feature policy='require' name='syscall'/>
    <feature policy='require' name='lm'/>
    <feature policy='require' name='cr8legacy'/>
    <feature policy='require' name='ht'/>
    <feature policy='require' name='lahf_lm'/>
    <feature policy='require' name='fxsr_opt'/>
    <feature policy='require' name='cx16'/>
    <feature policy='require' name='extapic'/>
    <feature policy='require' name='mca'/>
    <feature policy='require' name='cmp_legacy'/>
  </cpu>
Inside the node, /proc/cpuinfo becomes:
processor : 3
vendor_id : AuthenticAMD
cpu family : 6
model : 2
model name : QEMU Virtual CPU version 0.12.1
stepping : 3
microcode : 0x1000065
cpu MHz : 3013.706
cache size : 512 KB
fpu : yes
fpu_exception : yes
cpuid level : 2
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat
pse36 clflush mmx fxsr sse sse2 syscall mmxext fxsr_opt lm nopl pni
cx16 hypervisor lahf_lm cmp_legacy cr8_legacy
bogomips : 6027.41
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
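Worth noting: the guest's flags line above does not include svm, even though
the XML forces it, and the svm flag is exactly what the node's
hardware-virtualization check looks for. A quick check to run inside the node
(a sketch):

```
# Count AMD (svm) / Intel (vmx) virtualization flags visible to the guest;
# 0 means oVirt will report "running without virtualization hardware acceleration".
egrep -c '(svm|vmx)' /proc/cpuinfo
```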
Two questions:
1) Is there any combination in the XML file to give to my VM so that oVirt
doesn't complain about missing hardware virtualization with this
processor?
2) Suppose 1) is not possible in my case and I still want to test the
interface and try some config operations, to see for example the
differences from RHEV 3.0; how can I do that?
At the moment this complaint about hardware virtualization prevents me from
activating the node.
I get:
Installing Host f17ovn01. Step: RHEV_INSTALL.
Host f17ovn01 was successfully approved.
Host f17ovn01 running without virtualization hardware acceleration
Detected new Host f17ovn01. Host state was set to Non Operational.
Host f17ovn01 moved to Non-Operational state.
Host f17ovn01 moved to Non-Operational state as host does not meet the
cluster's minimum CPU level. Missing CPU features : CpuFlags
Can I lower the requirements so I can operate without hardware
virtualization in 3.1?
Thanks in advance,
Gianluca
4.3.3 single node hyperconverged wizard failing because /var/log is too small?
by Edward Berger
I'm trying to bring up a single-node hyperconverged setup with the current
node-ng ISO installation,
but it ends with this failure message:
TASK [gluster.features/roles/gluster_hci : Check if /var/log has enough
disk space] ***
fatal: [br014.bridges.psc.edu]: FAILED! => {"changed": true, "cmd": "df -m
/var/log | awk '/[0-9]%/ {print $4}'", "delta": "0:00:00.008513", "end":
"2019-05-09 20:09:27.914400", "failed_when_result": true, "rc": 0, "start":
"2019-05-09 20:09:27.905887", "stderr": "", "stderr_lines": [], "stdout":
"7470", "stdout_lines": ["7470"]}
I have what the installer created by default for /var/log, so I don't know
why it's complaining.
[root@br014 ~]# df -kh
Filesystem                                                      Size  Used Avail Use% Mounted on
/dev/mapper/onn_br014-ovirt--node--ng--4.3.3.1--0.20190417.0+1  3.5T  2.1G  3.3T   1% /
devtmpfs                                                         63G     0   63G   0% /dev
tmpfs                                                            63G  4.0K   63G   1% /dev/shm
tmpfs                                                            63G   18M   63G   1% /run
tmpfs                                                            63G     0   63G   0% /sys/fs/cgroup
/dev/mapper/onn_br014-home                                      976M  2.6M  907M   1% /home
/dev/mapper/onn_br014-tmp                                       976M  2.8M  906M   1% /tmp
/dev/mapper/onn_br014-var                                        15G   42M   14G   1% /var
/dev/sda2                                                       976M  173M  737M  19% /boot
/dev/mapper/onn_br014-var_log                                   7.8G   41M  7.3G   1% /var/log
/dev/mapper/onn_br014-var_log_audit                             2.0G  7.6M  1.8G   1% /var/log/audit
/dev/mapper/onn_br014-var_crash                                 9.8G   37M  9.2G   1% /var/crash
/dev/sda1                                                       200M   12M  189M   6% /boot/efi
tmpfs                                                            13G     0   13G   0% /run/user/1000
tmpfs                                                            13G     0   13G   0% /run/user/0
/dev/mapper/gluster_vg_sdb-gluster_lv_engine                    3.7T   33M  3.7T   1% /gluster_bricks/engine
/dev/mapper/gluster_vg_sdc-gluster_lv_data                      3.7T   34M  3.7T   1% /gluster_bricks/data
/dev/mapper/gluster_vg_sdd-gluster_lv_vmstore                   3.7T   34M  3.7T   1% /gluster_bricks/vmstore
The machine had 4 4TB disks; sda holds the oVirt node-ng installation,
and the other 3 disks are for the gluster volumes.
[root@br014 ~]# pvs
  PV         VG             Fmt  Attr PSize  PFree
  /dev/sda3  onn_br014      lvm2 a--  <3.64t 100.00g
  /dev/sdb   gluster_vg_sdb lvm2 a--  <3.64t <26.02g
  /dev/sdc   gluster_vg_sdc lvm2 a--  <3.64t       0
  /dev/sdd   gluster_vg_sdd lvm2 a--  <3.64t       0
[root@br014 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
gluster_vg_sdb 1 1 0 wz--n- <3.64t <26.02g
gluster_vg_sdc 1 2 0 wz--n- <3.64t 0
gluster_vg_sdd 1 2 0 wz--n- <3.64t 0
onn_br014 1 11 0 wz--n- <3.64t 100.00g
[root@br014 ~]# lvs
  LV                                   VG             Attr       LSize   Pool                            Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
  gluster_lv_engine                    gluster_vg_sdb -wi-ao----   3.61t
  gluster_lv_data                      gluster_vg_sdc Vwi-aot---   3.61t gluster_thinpool_gluster_vg_sdc                                     0.05
  gluster_thinpool_gluster_vg_sdc      gluster_vg_sdc twi-aot---  <3.61t                                                                    0.05   0.13
  gluster_lv_vmstore                   gluster_vg_sdd Vwi-aot---   3.61t gluster_thinpool_gluster_vg_sdd                                     0.05
  gluster_thinpool_gluster_vg_sdd      gluster_vg_sdd twi-aot---  <3.61t                                                                    0.05   0.13
  home                                 onn_br014      Vwi-aotz--   1.00g pool00                                                             4.79
  ovirt-node-ng-4.3.3.1-0.20190417.0   onn_br014      Vwi---tz-k  <3.51t pool00                          root
  ovirt-node-ng-4.3.3.1-0.20190417.0+1 onn_br014      Vwi-aotz--  <3.51t pool00                          ovirt-node-ng-4.3.3.1-0.20190417.0 0.13
  pool00                               onn_br014      twi-aotz--   3.53t                                                                    0.19   1.86
  root                                 onn_br014      Vri---tz-k  <3.51t pool00
  swap                                 onn_br014      -wi-ao----   4.00g
  tmp                                  onn_br014      Vwi-aotz--   1.00g pool00                                                             4.84
  var                                  onn_br014      Vwi-aotz--  15.00g pool00                                                             3.67
  var_crash                            onn_br014      Vwi-aotz--  10.00g pool00                                                             2.86
  var_log                              onn_br014      Vwi-aotz--   8.00g pool00                                                             3.25
  var_log_audit                        onn_br014      Vwi-aotz--   2.00g pool00                                                             4.86
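Since onn_br014 still has 100G free (see the vgs output above), one possible
workaround is simply to grow the var_log volume past whatever minimum the role
enforces; a hedged sketch:

```
# Grow var_log (a thin LV in pool00) and the filesystem on it in one step,
# then re-run the wizard's own space check. +8G is an assumption, not a
# documented threshold.
lvextend --resizefs -L +8G /dev/onn_br014/var_log
df -m /var/log | awk '/[0-9]%/ {print $4}'
```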
Here's the full deploy log from the UI. Let me know if you need specific
logs.
PLAY [Setup backend]
***********************************************************
TASK [Gathering Facts]
*********************************************************
ok: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/firewall_config : Start firewalld if not already
started] ***
ok: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/firewall_config : check if required variables are
set] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/firewall_config : Open/Close firewalld ports]
********
ok: [br014.bridges.psc.edu] => (item=2049/tcp)
ok: [br014.bridges.psc.edu] => (item=54321/tcp)
ok: [br014.bridges.psc.edu] => (item=5900/tcp)
ok: [br014.bridges.psc.edu] => (item=5900-6923/tcp)
ok: [br014.bridges.psc.edu] => (item=5666/tcp)
ok: [br014.bridges.psc.edu] => (item=16514/tcp)
TASK [gluster.infra/roles/firewall_config : Add/Delete services to
firewalld rules] ***
ok: [br014.bridges.psc.edu] => (item=glusterfs)
TASK [gluster.infra/roles/backend_setup : Gather facts to determine the OS
distribution] ***
ok: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for
debian systems.] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for
RHEL systems.] ***
ok: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Install python-yaml package for
Debian systems] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Initialize vdo_devs array]
***********
ok: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Record VDO devices (if any)]
*********
skipping: [br014.bridges.psc.edu] => (item={u'vgname': u'gluster_vg_sdb',
u'pvname': u'/dev/sdb'})
skipping: [br014.bridges.psc.edu] => (item={u'vgname': u'gluster_vg_sdc',
u'pvname': u'/dev/sdc'})
skipping: [br014.bridges.psc.edu] => (item={u'vgname': u'gluster_vg_sdd',
u'pvname': u'/dev/sdd'})
TASK [gluster.infra/roles/backend_setup : Enable and start vdo service]
********
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Create VDO with specified size]
******
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Check if valid disktype is
provided] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD]
******
ok: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Set PV data alignment for RAID]
******
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Set VG physical extent size for
RAID] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Create volume groups]
****************
ok: [br014.bridges.psc.edu] => (item={u'vgname': u'gluster_vg_sdb',
u'pvname': u'/dev/sdb'})
ok: [br014.bridges.psc.edu] => (item={u'vgname': u'gluster_vg_sdc',
u'pvname': u'/dev/sdc'})
ok: [br014.bridges.psc.edu] => (item={u'vgname': u'gluster_vg_sdd',
u'pvname': u'/dev/sdd'})
TASK [gluster.infra/roles/backend_setup : Create thick logical volume]
*********
ok: [br014.bridges.psc.edu] => (item={u'lvname': u'gluster_lv_engine',
u'vgname': u'gluster_vg_sdb', u'size': u'3700G'})
TASK [gluster.infra/roles/backend_setup : Calculate chunksize for
RAID6/RAID10/RAID5] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Set chunksize for JBOD]
**************
ok: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Create a LV thinpool]
****************
ok: [br014.bridges.psc.edu] => (item={u'vgname': u'gluster_vg_sdc',
u'thinpoolname': u'gluster_thinpool_gluster_vg_sdc', u'poolmetadatasize':
u'16G'})
ok: [br014.bridges.psc.edu] => (item={u'vgname': u'gluster_vg_sdd',
u'thinpoolname': u'gluster_thinpool_gluster_vg_sdd', u'poolmetadatasize':
u'16G'})
TASK [gluster.infra/roles/backend_setup : Create thin logical volume]
**********
ok: [br014.bridges.psc.edu] => (item={u'lvname': u'gluster_lv_data',
u'vgname': u'gluster_vg_sdc', u'thinpool':
u'gluster_thinpool_gluster_vg_sdc', u'lvsize': u'3700G'})
ok: [br014.bridges.psc.edu] => (item={u'lvname': u'gluster_lv_vmstore',
u'vgname': u'gluster_vg_sdd', u'thinpool':
u'gluster_thinpool_gluster_vg_sdd', u'lvsize': u'3700G'})
TASK [gluster.infra/roles/backend_setup : Extend volume group]
*****************
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Change attributes of LV]
*************
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Create LV for cache]
*****************
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Create metadata LV for cache]
********
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Convert logical volume to a cache
pool LV] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Convert logical volume to a cache
pool LV without cachemetalvname] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Convert an existing logical
volume to a cache LV] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Set XFS options for JBOD]
************
ok: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Set XFS options for RAID devices]
****
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Create filesystem on thin logical
vols] ***
ok: [br014.bridges.psc.edu] => (item={u'lvname': u'gluster_lv_data',
u'vgname': u'gluster_vg_sdc', u'thinpool':
u'gluster_thinpool_gluster_vg_sdc', u'lvsize': u'3700G'})
ok: [br014.bridges.psc.edu] => (item={u'lvname': u'gluster_lv_vmstore',
u'vgname': u'gluster_vg_sdd', u'thinpool':
u'gluster_thinpool_gluster_vg_sdd', u'lvsize': u'3700G'})
TASK [gluster.infra/roles/backend_setup : Create filesystem on thick
logical vols] ***
ok: [br014.bridges.psc.edu] => (item={u'lvname': u'gluster_lv_engine',
u'vgname': u'gluster_vg_sdb', u'size': u'3700G'})
TASK [gluster.infra/roles/backend_setup : Create mount directories if not
already present] ***
ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/engine',
u'vgname': u'gluster_vg_sdb', u'lvname': u'gluster_lv_engine'})
ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/data',
u'vgname': u'gluster_vg_sdc', u'lvname': u'gluster_lv_data'})
ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/vmstore',
u'vgname': u'gluster_vg_sdd', u'lvname': u'gluster_lv_vmstore'})
TASK [gluster.infra/roles/backend_setup : Set mount options for VDO]
***********
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_setup : Mount the vdo devices (If any)]
******
skipping: [br014.bridges.psc.edu] => (item={u'path':
u'/gluster_bricks/engine', u'vgname': u'gluster_vg_sdb', u'lvname':
u'gluster_lv_engine'})
skipping: [br014.bridges.psc.edu] => (item={u'path':
u'/gluster_bricks/data', u'vgname': u'gluster_vg_sdc', u'lvname':
u'gluster_lv_data'})
skipping: [br014.bridges.psc.edu] => (item={u'path':
u'/gluster_bricks/vmstore', u'vgname': u'gluster_vg_sdd', u'lvname':
u'gluster_lv_vmstore'})
TASK [gluster.infra/roles/backend_setup : Mount the devices]
*******************
ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/engine',
u'vgname': u'gluster_vg_sdb', u'lvname': u'gluster_lv_engine'})
ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/data',
u'vgname': u'gluster_vg_sdc', u'lvname': u'gluster_lv_data'})
ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/vmstore',
u'vgname': u'gluster_vg_sdd', u'lvname': u'gluster_lv_vmstore'})
TASK [gluster.infra/roles/backend_setup : Set Gluster specific SeLinux
context on the bricks] ***
ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/engine',
u'vgname': u'gluster_vg_sdb', u'lvname': u'gluster_lv_engine'})
ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/data',
u'vgname': u'gluster_vg_sdc', u'lvname': u'gluster_lv_data'})
ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/vmstore',
u'vgname': u'gluster_vg_sdd', u'lvname': u'gluster_lv_vmstore'})
TASK [gluster.infra/roles/backend_setup : restore file(s) default SELinux
security contexts] ***
changed: [br014.bridges.psc.edu] => (item={u'path':
u'/gluster_bricks/engine', u'vgname': u'gluster_vg_sdb', u'lvname':
u'gluster_lv_engine'})
changed: [br014.bridges.psc.edu] => (item={u'path':
u'/gluster_bricks/data', u'vgname': u'gluster_vg_sdc', u'lvname':
u'gluster_lv_data'})
changed: [br014.bridges.psc.edu] => (item={u'path':
u'/gluster_bricks/vmstore', u'vgname': u'gluster_vg_sdd', u'lvname':
u'gluster_lv_vmstore'})
TASK [gluster.infra/roles/backend_reset : unmount the directories (if
mounted)] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_reset : Delete volume groups]
****************
skipping: [br014.bridges.psc.edu]
TASK [gluster.infra/roles/backend_reset : Remove VDO devices]
******************
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Create temporary storage
directory] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Get the name of the directory
created] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : check if
gluster_features_ganesha_clusternodes is set] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Define service port]
****************
skipping: [br014.bridges.psc.edu] => (item=^#(STATD_PORT=.*))
skipping: [br014.bridges.psc.edu] => (item=^#(LOCKD_TCPPORT=.*))
skipping: [br014.bridges.psc.edu] => (item=^#(LOCKD_UDPPORT=.*))
TASK [gluster.features/roles/nfs_ganesha : Check packages installed, if not
install] ***
skipping: [br014.bridges.psc.edu] => (item=glusterfs-ganesha)
skipping: [br014.bridges.psc.edu] => (item=nfs-ganesha)
skipping: [br014.bridges.psc.edu] => (item=corosync)
skipping: [br014.bridges.psc.edu] => (item=pacemaker)
skipping: [br014.bridges.psc.edu] => (item=libntirpc)
skipping: [br014.bridges.psc.edu] => (item=pcs)
TASK [gluster.features/roles/nfs_ganesha : Restart services]
*******************
skipping: [br014.bridges.psc.edu] => (item=nfslock)
skipping: [br014.bridges.psc.edu] => (item=nfs-config)
skipping: [br014.bridges.psc.edu] => (item=rpc-statd)
TASK [gluster.features/roles/nfs_ganesha : Stop services]
**********************
skipping: [br014.bridges.psc.edu] => (item=nfs-server)
TASK [gluster.features/roles/nfs_ganesha : Disable service]
********************
skipping: [br014.bridges.psc.edu] => (item=nfs-server)
TASK [gluster.features/roles/nfs_ganesha : Enable services]
********************
skipping: [br014.bridges.psc.edu] => (item=glusterfssharedstorage)
skipping: [br014.bridges.psc.edu] => (item=nfs-ganesha)
skipping: [br014.bridges.psc.edu] => (item=network)
skipping: [br014.bridges.psc.edu] => (item=pcsd)
skipping: [br014.bridges.psc.edu] => (item=pacemaker)
TASK [gluster.features/roles/nfs_ganesha : Start services]
*********************
skipping: [br014.bridges.psc.edu] => (item=network)
skipping: [br014.bridges.psc.edu] => (item=pcsd)
TASK [gluster.features/roles/nfs_ganesha : Create a user hacluster if not
already present] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Set the password for hacluster]
*****
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Set the hacluster user the same
password on new nodes] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Pcs cluster authenticate the
hacluster on new nodes] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Pause for a few seconds after
pcs auth] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Set gluster_use_execmem flag on
and keep it persistent] ***
skipping: [br014.bridges.psc.edu] => (item=gluster_use_execmem)
skipping: [br014.bridges.psc.edu] => (item=ganesha_use_fusefs)
TASK [gluster.features/roles/nfs_ganesha : check if
gluster_features_ganesha_masternode is set] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Copy the ssh keys to the local
machine] ***
skipping: [br014.bridges.psc.edu] => (item=secret.pem.pub)
skipping: [br014.bridges.psc.edu] => (item=secret.pem)
TASK [gluster.features/roles/nfs_ganesha : check if
gluster_features_ganesha_newnodes_vip is set] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Copy the public key to remote
nodes] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Copy the private key to remote
node] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Deploy the pubkey on all nodes]
*****
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Make the volume a gluster shared
volume] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Generate ssh key in one of the
nodes in HA cluster] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Copy the ssh keys to the local
machine] ***
skipping: [br014.bridges.psc.edu] => (item=secret.pem.pub)
skipping: [br014.bridges.psc.edu] => (item=secret.pem)
TASK [gluster.features/roles/nfs_ganesha : Create configuration directory
for nfs_ganesha] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Copy ganesha.conf to config
directory on shared volume] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Create ganesha-ha.conf file]
********
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Enable NFS Ganesha]
*****************
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Pause for 30 seconds (takes a
while to enable NFS Ganesha)] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Check NFS Ganesha status]
***********
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Report NFS Ganesha status]
**********
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Report NFS Ganesha status (If
any errors)] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : check if
gluster_features_ganesha_volume is set] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Export the NFS Ganesha volume]
******
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Copy the public key to remote
nodes] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Copy the private key to remote
node] ***
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Deploy the pubkey on all nodes]
*****
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Adds a node to the cluster]
*********
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Report ganesha add-node status]
*****
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/nfs_ganesha : Delete the temporary directory]
*****
skipping: [br014.bridges.psc.edu]
TASK [gluster.features/roles/gluster_hci : Check if packages are installed,
if not install] ***
ok: [br014.bridges.psc.edu] => (item=vdsm)
ok: [br014.bridges.psc.edu] => (item=vdsm-gluster)
ok: [br014.bridges.psc.edu] => (item=ovirt-host)
ok: [br014.bridges.psc.edu] => (item=screen)
TASK [gluster.features/roles/gluster_hci : Enable and start glusterd and
chronyd] ***
ok: [br014.bridges.psc.edu] => (item=chronyd)
ok: [br014.bridges.psc.edu] => (item=glusterd)
ok: [br014.bridges.psc.edu] => (item=firewalld)
TASK [gluster.features/roles/gluster_hci : Add user qemu to gluster group]
*****
ok: [br014.bridges.psc.edu]
TASK [gluster.features/roles/gluster_hci : Disable the hook scripts]
***********
changed: [br014.bridges.psc.edu] =>
(item=/var/lib/glusterd/hooks/1/set/post/S30samba-set.sh)
changed: [br014.bridges.psc.edu] =>
(item=/var/lib/glusterd/hooks/1/start/post/S30samba-start.sh)
changed: [br014.bridges.psc.edu] =>
(item=/var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh)
changed: [br014.bridges.psc.edu] =>
(item=/var/lib/glusterd/hooks/1/reset/post/S31ganesha-reset.sh)
changed: [br014.bridges.psc.edu] =>
(item=/var/lib/glusterd/hooks/1//start/post/S31ganesha-start.sh)
changed: [br014.bridges.psc.edu] =>
(item=/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh)
changed: [br014.bridges.psc.edu] =>
(item=/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh)
TASK [gluster.features/roles/gluster_hci : Check if valid FQDN is provided]
****
changed: [br014.bridges.psc.edu -> localhost] => (item=br014.bridges.psc.edu
)
TASK [gluster.features/roles/gluster_hci : Check if /var/log has enough
disk space] ***
fatal: [br014.bridges.psc.edu]: FAILED! => {"changed": true, "cmd": "df -m
/var/log | awk '/[0-9]%/ {print $4}'", "delta": "0:00:00.008513", "end":
"2019-05-09 20:09:27.914400", "failed_when_result": true, "rc": 0, "start":
"2019-05-09 20:09:27.905887", "stderr": "", "stderr_lines": [], "stdout":
"7470", "stdout_lines": ["7470"]}
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
	to retry, use: --limit @/usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.retry
PLAY RECAP *********************************************************************
br014.bridges.psc.edu      : ok=25   changed=3    unreachable=0    failed=1
Gluster Snapshot Datepicker Not Working?
by Alex McWhirter
I updated to 4.3.3.7, and the date picker for gluster snapshots appears
not to be working. It won't register clicks, and manually typing in times
doesn't work either.
Can anyone else confirm?