libgfapi reintegration?
by Darrell Budic
Any plans or timeline to re-integrate full libgfapi support? From fismonce’s previous work, it appears that all the pieces are now in place, but his builds have fallen behind the current release versions of vdsmd…
Thanks!
-Darrell
9 years, 10 months
change network MTU settings without taking all the VMs down?
by Darrell Budic
I finally got a couple of networks out from behind a WAN-based layer 2 bridge that required me to run at MTU 1448, and would like to get back up to MTU 1500. I see the GUI won’t let me do that while the network is in use. Is there any way around this, clean or otherwise? Restarting VMs to update them is OK; I’m just trying to avoid having to take everything down at the same time.
-Darrell
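
For reference, the host-side part of such a change is plain bridge MTU manipulation once the engine allows the network definition to be edited. A minimal sketch with iproute2, where eth0 and vmnet are placeholder names for the uplink NIC and the logical network's bridge:

  # Raise the MTU on the uplink first; a Linux bridge is capped at the
  # smallest MTU of its enslaved ports:
  ip link set dev eth0 mtu 1500
  ip link set dev vmnet mtu 1500

  # Verify the new value:
  ip -d link show vmnet | grep -o 'mtu [0-9]*'

Running guests keep their old MTU until their vNIC is re-plugged or the VM restarts, so hosts could in principle be updated one at a time.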
9 years, 10 months
Did my Storage Domain crash, or is this an iSCSI LUN problem?
by shimano
Hi guys,
I'm trying to bring up one of my storage domains, which experienced a failure.
Unfortunately, I'm hitting a very nasty error (Storage domain does not exist).
Could someone tell me how to try to restore this domain?
P.S.
It's an oVirt 3.4.2-1.el6
**********************************************************************************
/var/log/messages:
Jan 26 12:48:49 node002 vdsm TaskManager.Task ERROR Task=`10d02993-b585-448f-9a50-bd3e8cda7082`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2959, in getVGInfo
    return dict(info=self.__getVGsInfo([vgUUID])[0])
  File "/usr/share/vdsm/storage/hsm.py", line 2892, in __getVGsInfo
    vgList = [lvm.getVGbyUUID(vgUUID) for vgUUID in vgUUIDs]
  File "/usr/share/vdsm/storage/lvm.py", line 894, in getVGbyUUID
    raise se.VolumeGroupDoesNotExist("vg_uuid: %s" % vgUUID)
VolumeGroupDoesNotExist: Volume Group does not exist: ('vg_uuid: gyaCWf-6VKi-lI9W-JT6H-IZdy-rIsB-hTvZ4O',)
Jan 26 12:48:49 node002 kernel: device-mapper: table: 253:26: multipath: error getting device
Jan 26 12:48:49 node002 kernel: device-mapper: ioctl: error adding target to table
**********************************************************************************
/var/log/vdsm.log:
Thread-22::ERROR::2015-01-26 12:43:03,376::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain db52e9cb-7306-43fd-aff3-20831bc2bcaf
Thread-22::ERROR::2015-01-26 12:43:03,377::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain db52e9cb-7306-43fd-aff3-20831bc2bcaf
Thread-22::DEBUG::2015-01-26 12:43:03,377::lvm::373::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-22::DEBUG::2015-01-26 12:43:03,378::lvm::296::Storage.Misc.excCmd::(cmd) u'/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ \'a|/dev/mapper/mpathb|/dev/mapper/mpathc|/dev/mapper/mpathd|/dev/mapper/mpathe|/dev/mapper/mpathf|\', \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name db52e9cb-7306-43fd-aff3-20831bc2bcaf' (cwd None)
Thread-22::DEBUG::2015-01-26 12:43:03,462::lvm::296::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' /dev/mapper/mpathc: Checksum error\n /dev/mapper/mpathc: Checksum error\n Volume group "db52e9cb-7306-43fd-aff3-20831bc2bcaf" not found\n Skipping volume group db52e9cb-7306-43fd-aff3-20831bc2bcaf\n'; <rc> = 5
Thread-22::WARNING::2015-01-26 12:43:03,466::lvm::378::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' /dev/mapper/mpathc: Checksum error', ' /dev/mapper/mpathc: Checksum error', ' Volume group "db52e9cb-7306-43fd-aff3-20831bc2bcaf" not found', ' Skipping volume group db52e9cb-7306-43fd-aff3-20831bc2bcaf']
Thread-22::DEBUG::2015-01-26 12:43:03,466::lvm::415::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-22::ERROR::2015-01-26 12:43:03,477::sdc::143::Storage.StorageDomainCache::(_findDomain) domain db52e9cb-7306-43fd-aff3-20831bc2bcaf not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'db52e9cb-7306-43fd-aff3-20831bc2bcaf',)
Thread-22::ERROR::2015-01-26 12:43:03,478::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain db52e9cb-7306-43fd-aff3-20831bc2bcaf monitoring information
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/domainMonitor.py", line 204, in _monitorDomain
    self.domain = sdCache.produce(self.sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 98, in produce
    domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'db52e9cb-7306-43fd-aff3-20831bc2bcaf',)
Thread-13::DEBUG::2015-01-26 12:43:05,102::task::595::TaskManager.Task::(_updateState) Task=`b4e85e37-b216-4d29-a448-0711e370a246`::moving from state init -> state preparing
Thread-13::INFO::2015-01-26 12:43:05,102::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-13::INFO::2015-01-26 12:43:05,103::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'7969d636-1a02-42ba-a50b-2528765cf3d5': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000457574', 'lastCheck': '7.5', 'valid': True}, u'5e1ca1b6-4706-4c79-8924-b8db741c929f': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.00100094', 'lastCheck': '6.3', 'valid': True}, u'cb85e6cd-df54-4151-8f3b-7e6d72b7372d': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.463061', 'lastCheck': '4.9', 'valid': True}, u'db52e9cb-7306-43fd-aff3-20831bc2bcaf': {'code': 358, 'version': -1, 'acquired': False, 'delay': '0', 'lastCheck': '1.6', 'valid': False}, u'5f595801-aaa5-42c7-b829-7a34a636407e': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000942979', 'lastCheck': '7.9', 'valid': True}, u'c1ebd0f8-fa32-4fe3-8569-fb7d4ad8faf4': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000424499', 'lastCheck': '7.3', 'valid': True}}
Thread-13::DEBUG::2015-01-26 12:43:05,103::task::1185::TaskManager.Task::(prepare) Task=`b4e85e37-b216-4d29-a448-0711e370a246`::finished: {u'7969d636-1a02-42ba-a50b-2528765cf3d5': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000457574', 'lastCheck': '7.5', 'valid': True}, u'5e1ca1b6-4706-4c79-8924-b8db741c929f': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.00100094', 'lastCheck': '6.3', 'valid': True}, u'cb85e6cd-df54-4151-8f3b-7e6d72b7372d': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.463061', 'lastCheck': '4.9', 'valid': True}, u'db52e9cb-7306-43fd-aff3-20831bc2bcaf': {'code': 358, 'version': -1, 'acquired': False, 'delay': '0', 'lastCheck': '1.6', 'valid': False}, u'5f595801-aaa5-42c7-b829-7a34a636407e': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000942979', 'lastCheck': '7.9', 'valid': True}, u'c1ebd0f8-fa32-4fe3-8569-fb7d4ad8faf4': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000424499', 'lastCheck': '7.3', 'valid': True}}
**********************************************************************************
[root@node002 shim]# multipath -ll
mpathe (1NODE_001_LUN01) dm-6 SHIMI,VIRTUAL-DISK
size=977G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 21:0:0:1 sdg 8:96 active ready running
mpathd (1NODE_003_LUN01) dm-7 SHIMI,VIRTUAL-DISK
size=977G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 20:0:0:1 sdf 8:80 active ready running
mpathc (1NODE_002_LUN01) dm-4 SHIMI,VIRTUAL-DISK
size=977G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 18:0:0:1 sdd 8:48 active ready running
mpathb (1ATA_MARVELL_Raid_VD_0_1c3c8ecf5cf00010) dm-1 ATA,MARVELL Raid VD
size=1.8T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 0:0:0:0 sda 8:0 active ready running
mpathf (1MANAGER_LUN01) dm-5 SHIMI,VIRTUAL-DISK
size=500G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 19:0:0:1 sde 8:64 active ready running
**********************************************************************************
[root@node002 shim]# lsblk
NAME                                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdb                                                             8:16   0 298.1G  0 disk
├─sdb1                                                          8:17   0     1G  0 part  /boot
├─sdb2                                                          8:18   0     4G  0 part  [SWAP]
└─sdb3                                                          8:19   0 293.1G  0 part
  └─vg_node002-LogVol00 (dm-0)                                253:0    0 293.1G  0 lvm   /
sda                                                             8:0    0   1.8T  0 disk
└─sda1                                                          8:1    0   1.8T  0 part
sdd                                                             8:48   0 976.6G  0 disk
└─mpathc (dm-4)                                               253:4    0 976.6G  0 mpath
sde                                                             8:64   0   500G  0 disk
└─mpathf (dm-5)                                               253:5    0   500G  0 mpath
  ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-metadata (dm-15) 253:15   0   512M  0 lvm
  ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-ids (dm-16)      253:16   0   128M  0 lvm
  ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-leases (dm-18)   253:18   0     2G  0 lvm
  ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-outbox (dm-20)   253:20   0   128M  0 lvm
  ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-inbox (dm-21)    253:21   0   128M  0 lvm
  └─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-master (dm-22)   253:22   0     1G  0 lvm
sdf                                                             8:80   0 976.6G  0 disk
└─mpathd (dm-7)                                               253:7    0 976.6G  0 mpath
  ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-metadata (dm-14) 253:14   0   512M  0 lvm
  ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-ids (dm-17)      253:17   0   128M  0 lvm
  ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-leases (dm-19)   253:19   0     2G  0 lvm
  ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-outbox (dm-23)   253:23   0   128M  0 lvm
  ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-inbox (dm-24)    253:24   0   128M  0 lvm
  └─5e1ca1b6--4706--4c79--8924--b8db741c929f-master (dm-25)   253:25   0     1G  0 lvm
sdg                                                             8:96   0 976.6G  0 disk
└─mpathe (dm-6)                                               253:6    0 976.6G  0 mpath
  ├─5f595801--aaa5--42c7--b829--7a34a636407e-metadata (dm-8)  253:8    0   512M  0 lvm
  ├─5f595801--aaa5--42c7--b829--7a34a636407e-ids (dm-9)       253:9    0   128M  0 lvm
  ├─5f595801--aaa5--42c7--b829--7a34a636407e-leases (dm-10)   253:10   0     2G  0 lvm
  ├─5f595801--aaa5--42c7--b829--7a34a636407e-outbox (dm-11)   253:11   0   128M  0 lvm
  ├─5f595801--aaa5--42c7--b829--7a34a636407e-inbox (dm-12)    253:12   0   128M  0 lvm
  └─5f595801--aaa5--42c7--b829--7a34a636407e-master (dm-13)   253:13   0     1G  0 lvm
**********************************************************************************
[root@node002 shim]# multipath -v3
Jan 26 12:46:28 | ram0: device node name blacklisted
Jan 26 12:46:28 | ram1: device node name blacklisted
Jan 26 12:46:28 | ram2: device node name blacklisted
Jan 26 12:46:28 | ram3: device node name blacklisted
Jan 26 12:46:28 | ram4: device node name blacklisted
Jan 26 12:46:28 | ram5: device node name blacklisted
Jan 26 12:46:28 | ram6: device node name blacklisted
Jan 26 12:46:28 | ram7: device node name blacklisted
Jan 26 12:46:28 | ram8: device node name blacklisted
Jan 26 12:46:28 | ram9: device node name blacklisted
Jan 26 12:46:28 | ram10: device node name blacklisted
Jan 26 12:46:28 | ram11: device node name blacklisted
Jan 26 12:46:28 | ram12: device node name blacklisted
Jan 26 12:46:28 | ram13: device node name blacklisted
Jan 26 12:46:28 | ram14: device node name blacklisted
Jan 26 12:46:28 | ram15: device node name blacklisted
Jan 26 12:46:28 | loop0: device node name blacklisted
Jan 26 12:46:28 | loop1: device node name blacklisted
Jan 26 12:46:28 | loop2: device node name blacklisted
Jan 26 12:46:28 | loop3: device node name blacklisted
Jan 26 12:46:28 | loop4: device node name blacklisted
Jan 26 12:46:28 | loop5: device node name blacklisted
Jan 26 12:46:28 | loop6: device node name blacklisted
Jan 26 12:46:28 | loop7: device node name blacklisted
Jan 26 12:46:28 | sdb: not found in pathvec
Jan 26 12:46:28 | sdb: mask = 0x3f
Jan 26 12:46:28 | sdb: dev_t = 8:16
Jan 26 12:46:28 | sdb: size = 625142448
Jan 26 12:46:28 | sdb: subsystem = scsi
Jan 26 12:46:28 | sdb: vendor = ATA
Jan 26 12:46:28 | sdb: product = WDC WD3200AAJS-6
Jan 26 12:46:28 | sdb: rev = 03.0
Jan 26 12:46:28 | sdb: h:b:t:l = 10:0:0:0
Jan 26 12:46:28 | sdb: serial = WD-WMAV2HM46197
Jan 26 12:46:28 | sdb: get_state
Jan 26 12:46:28 | sdb: path checker = directio (config file default)
Jan 26 12:46:28 | sdb: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sdb: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sdb: state = 3
Jan 26 12:46:28 | sdb: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sdb: uid = 1ATA_WDC_WD3200AAJS-60Z0A0_WD-WMAV2HM46197 (callout)
Jan 26 12:46:28 | sdb: state = running
Jan 26 12:46:28 | sdb: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sdb: prio = const (config file default)
Jan 26 12:46:28 | sdb: const prio = 1
Jan 26 12:46:28 | sda: not found in pathvec
Jan 26 12:46:28 | sda: mask = 0x3f
Jan 26 12:46:28 | sda: dev_t = 8:0
Jan 26 12:46:28 | sda: size = 3904897024
Jan 26 12:46:28 | sda: subsystem = scsi
Jan 26 12:46:28 | sda: vendor = ATA
Jan 26 12:46:28 | sda: product = MARVELL Raid VD
Jan 26 12:46:28 | sda: rev = MV.R
Jan 26 12:46:28 | sda: h:b:t:l = 0:0:0:0
Jan 26 12:46:28 | sda: serial = 1c3c8ecf5cf00010
Jan 26 12:46:28 | sda: get_state
Jan 26 12:46:28 | sda: path checker = directio (config file default)
Jan 26 12:46:28 | sda: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sda: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sda: state = 3
Jan 26 12:46:28 | sda: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sda: uid = 1ATA_MARVELL_Raid_VD_0_1c3c8ecf5cf00010 (callout)
Jan 26 12:46:28 | sda: state = running
Jan 26 12:46:28 | sda: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sda: prio = const (config file default)
Jan 26 12:46:28 | sda: const prio = 1
Jan 26 12:46:28 | dm-0: device node name blacklisted
Jan 26 12:46:28 | sdc: not found in pathvec
Jan 26 12:46:28 | sdc: mask = 0x3f
Jan 26 12:46:28 | sdc: dev_t = 8:32
Jan 26 12:46:28 | sdc: size = 0
Jan 26 12:46:28 | sdc: subsystem = scsi
Jan 26 12:46:28 | sdc: vendor = Multi
Jan 26 12:46:28 | sdc: product = Flash Reader
Jan 26 12:46:28 | sdc: rev = 1.00
Jan 26 12:46:28 | sdc: h:b:t:l = 12:0:0:0
Jan 26 12:46:28 | dm-1: device node name blacklisted
Jan 26 12:46:28 | dm-2: device node name blacklisted
Jan 26 12:46:28 | dm-3: device node name blacklisted
Jan 26 12:46:28 | sdd: not found in pathvec
Jan 26 12:46:28 | sdd: mask = 0x3f
Jan 26 12:46:28 | sdd: dev_t = 8:48
Jan 26 12:46:28 | sdd: size = 2048000000
Jan 26 12:46:28 | sdd: subsystem = scsi
Jan 26 12:46:28 | sdd: vendor = SHIMI
Jan 26 12:46:28 | sdd: product = VIRTUAL-DISK
Jan 26 12:46:28 | sdd: rev = 0001
Jan 26 12:46:28 | sdd: h:b:t:l = 18:0:0:1
Jan 26 12:46:28 | sdd: tgt_node_name = pl.mycomp.shimi:node002.target0
Jan 26 12:46:28 | sdd: serial = beaf11
Jan 26 12:46:28 | sdd: get_state
Jan 26 12:46:28 | sdd: path checker = directio (config file default)
Jan 26 12:46:28 | sdd: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sdd: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sdd: state = 3
Jan 26 12:46:28 | sdd: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sdd: uid = 1NODE_002_LUN01 (callout)
Jan 26 12:46:28 | sdd: state = running
Jan 26 12:46:28 | sdd: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sdd: prio = const (config file default)
Jan 26 12:46:28 | sdd: const prio = 1
Jan 26 12:46:28 | dm-4: device node name blacklisted
Jan 26 12:46:28 | sde: not found in pathvec
Jan 26 12:46:28 | sde: mask = 0x3f
Jan 26 12:46:28 | sde: dev_t = 8:64
Jan 26 12:46:28 | sde: size = 1048576000
Jan 26 12:46:28 | sde: subsystem = scsi
Jan 26 12:46:28 | sde: vendor = SHIMI
Jan 26 12:46:28 | sde: product = VIRTUAL-DISK
Jan 26 12:46:28 | sde: rev = 0001
Jan 26 12:46:28 | sde: h:b:t:l = 19:0:0:1
Jan 26 12:46:28 | sde: tgt_node_name = pl.mycomp.shimi:manager.target0
Jan 26 12:46:28 | sde: serial = beaf11
Jan 26 12:46:28 | sde: get_state
Jan 26 12:46:28 | sde: path checker = directio (config file default)
Jan 26 12:46:28 | sde: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sde: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sde: state = 3
Jan 26 12:46:28 | sde: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sde: uid = 1MANAGER_LUN01 (callout)
Jan 26 12:46:28 | sde: state = running
Jan 26 12:46:28 | sde: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sde: prio = const (config file default)
Jan 26 12:46:28 | sde: const prio = 1
Jan 26 12:46:28 | sdf: not found in pathvec
Jan 26 12:46:28 | sdf: mask = 0x3f
Jan 26 12:46:28 | sdf: dev_t = 8:80
Jan 26 12:46:28 | sdf: size = 2048000000
Jan 26 12:46:28 | sdf: subsystem = scsi
Jan 26 12:46:28 | sdf: vendor = SHIMI
Jan 26 12:46:28 | sdf: product = VIRTUAL-DISK
Jan 26 12:46:28 | sdf: rev = 0001
Jan 26 12:46:28 | sdf: h:b:t:l = 20:0:0:1
Jan 26 12:46:28 | sdf: tgt_node_name = pl.mycomp.shimi:node003.target0
Jan 26 12:46:28 | sdf: serial = beaf11
Jan 26 12:46:28 | sdf: get_state
Jan 26 12:46:28 | sdf: path checker = directio (config file default)
Jan 26 12:46:28 | sdf: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sdf: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sdf: state = 3
Jan 26 12:46:28 | sdf: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sdf: uid = 1NODE_003_LUN01 (callout)
Jan 26 12:46:28 | sdf: state = running
Jan 26 12:46:28 | sdf: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sdf: prio = const (config file default)
Jan 26 12:46:28 | sdf: const prio = 1
Jan 26 12:46:28 | sdg: not found in pathvec
Jan 26 12:46:28 | sdg: mask = 0x3f
Jan 26 12:46:28 | sdg: dev_t = 8:96
Jan 26 12:46:28 | sdg: size = 2048000000
Jan 26 12:46:28 | sdg: subsystem = scsi
Jan 26 12:46:28 | sdg: vendor = SHIMI
Jan 26 12:46:28 | sdg: product = VIRTUAL-DISK
Jan 26 12:46:28 | sdg: rev = 0001
Jan 26 12:46:28 | sdg: h:b:t:l = 21:0:0:1
Jan 26 12:46:28 | sdg: tgt_node_name = pl.mycomp.shimi:node001.target0
Jan 26 12:46:28 | sdg: serial = beaf11
Jan 26 12:46:28 | sdg: get_state
Jan 26 12:46:28 | sdg: path checker = directio (config file default)
Jan 26 12:46:28 | sdg: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sdg: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sdg: state = 3
Jan 26 12:46:28 | sdg: getuid = /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sdg: uid = 1NODE_001_LUN01 (callout)
Jan 26 12:46:28 | sdg: state = running
Jan 26 12:46:28 | sdg: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sdg: prio = const (config file default)
Jan 26 12:46:28 | sdg: const prio = 1
Jan 26 12:46:28 | dm-5: device node name blacklisted
Jan 26 12:46:28 | dm-6: device node name blacklisted
Jan 26 12:46:28 | dm-7: device node name blacklisted
Jan 26 12:46:28 | dm-8: device node name blacklisted
Jan 26 12:46:28 | dm-9: device node name blacklisted
Jan 26 12:46:28 | dm-10: device node name blacklisted
Jan 26 12:46:28 | dm-11: device node name blacklisted
Jan 26 12:46:28 | dm-12: device node name blacklisted
Jan 26 12:46:28 | dm-13: device node name blacklisted
Jan 26 12:46:28 | dm-14: device node name blacklisted
Jan 26 12:46:28 | dm-15: device node name blacklisted
Jan 26 12:46:28 | dm-16: device node name blacklisted
Jan 26 12:46:28 | dm-17: device node name blacklisted
Jan 26 12:46:28 | dm-18: device node name blacklisted
Jan 26 12:46:28 | dm-19: device node name blacklisted
Jan 26 12:46:28 | dm-20: device node name blacklisted
Jan 26 12:46:28 | dm-21: device node name blacklisted
Jan 26 12:46:28 | dm-22: device node name blacklisted
Jan 26 12:46:28 | dm-23: device node name blacklisted
Jan 26 12:46:28 | dm-24: device node name blacklisted
Jan 26 12:46:28 | dm-25: device node name blacklisted
===== paths list =====
uuid                                       hcil     dev dev_t pri dm_st chk_st
1ATA_WDC_WD3200AAJS-60Z0A0_WD-WMAV2HM46197 10:0:0:0 sdb 8:16  1   undef ready
1ATA_MARVELL_Raid_VD_0_1c3c8ecf5cf00010    0:0:0:0  sda 8:0   1   undef ready
                                           12:0:0:0 sdc 8:32  -1  undef faulty
1NODE_002_LUN01                            18:0:0:1 sdd 8:48  1   undef ready
1MANAGER_LUN01                             19:0:0:1 sde 8:64  1   undef ready
1NODE_003_LUN01                            20:0:0:1 sdf 8:80  1   undef ready
1NODE_001_LUN01                            21:0:0:1 sdg 8:96  1   undef ready
Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:96 1
Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:96 A 0
Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:80 1
Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:80 A 0
Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:48 1
Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:48 A 0
Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:0 1
Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:0 A 0
Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:64 1
Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:64 A 0
Jan 26 12:46:28 | Found matching wwid [1ATA_WDC_WD3200AAJS-60Z0A0_WD-WMAV2HM46197] in bindings file. Setting alias to mpatha
Jan 26 12:46:28 | sdb: ownership set to mpatha
Jan 26 12:46:28 | sdb: not found in pathvec
Jan 26 12:46:28 | sdb: mask = 0xc
Jan 26 12:46:28 | sdb: get_state
Jan 26 12:46:28 | sdb: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sdb: state = 3
Jan 26 12:46:28 | sdb: state = running
Jan 26 12:46:28 | sdb: const prio = 1
Jan 26 12:46:28 | mpatha: pgfailover = -1 (internal default)
Jan 26 12:46:28 | mpatha: pgpolicy = failover (internal default)
Jan 26 12:46:28 | mpatha: selector = round-robin 0 (internal default)
Jan 26 12:46:28 | mpatha: features = 0 (internal default)
Jan 26 12:46:28 | mpatha: hwhandler = 0 (internal default)
Jan 26 12:46:28 | mpatha: rr_weight = 1 (internal default)
Jan 26 12:46:28 | mpatha: minio = 1 rq (config file default)
Jan 26 12:46:28 | mpatha: no_path_retry = -1 (config file default)
Jan 26 12:46:28 | pg_timeout = NONE (internal default)
Jan 26 12:46:28 | mpatha: fast_io_fail_tmo = 5 (config file default)
Jan 26 12:46:28 | mpatha: dev_loss_tmo = 30 (config file default)
Jan 26 12:46:28 | mpatha: retain_attached_hw_handler = 1 (config file default)
Jan 26 12:46:28 | failed to find rport_id for target10:0:0
Jan 26 12:46:28 | mpatha: set ACT_CREATE (map does not exist)
Jan 26 12:46:28 | mpatha: domap (0) failure for create/reload map
Jan 26 12:46:28 | mpatha: ignoring map
**********************************************************************************
[root@node002 shim]# iscsiadm -m session -o show
tcp: [6] 192.168.1.12:3260,1 pl.mycomp.shimi:node002.target0
tcp: [7] 192.168.1.11:3260,1 pl.mycomp.shimi:manager.target0
tcp: [8] 192.168.1.14:3260,1 pl.mycomp.shimi:node003.target0
tcp: [9] 192.168.1.13:3260,1 pl.mycomp.shimi:node001.target0
**********************************************************************************
[root@node002 shim]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
**********************************************************************************
[root@node002 shim]# sestatus
SELinux status: disabled
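
For what it's worth, the repeated "/dev/mapper/mpathc: Checksum error" lines point at corrupted LVM metadata on that LUN rather than a missing path. A minimal diagnostic sketch, assuming the VG is named after the storage domain UUID as vdsm does (take a dd backup of the metadata area before attempting any write):

  # Read-only check of the PV metadata on the failing LUN:
  pvck /dev/mapper/mpathc

  # List any archived metadata for this VG kept on this host:
  vgcfgrestore --list db52e9cb-7306-43fd-aff3-20831bc2bcaf

  # If a good archive exists (typically only on the host that last changed
  # the VG), a restore attempt could look like:
  # vgcfgrestore -f /etc/lvm/archive/<archive-file>.vg db52e9cb-7306-43fd-aff3-20831bc2bcaf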
9 years, 10 months
Re: [ovirt-users] OVF Storage
by Wolfgang Bucher
Thanks

i think you don't need the logs anymore

Greetings

Wolfgang

-----Original Message-----
From: Gilad Chaplik <gchaplik(a)redhat.com>
Sent: Wed, 28 January 2015 10:13
To: Maor Lipchuk <mlipchuk(a)redhat.com>
CC: Wolfgang Bucher <wolfgang.bucher(a)netland-mn.de>; users(a)ovirt.org
Subject: Re: [ovirt-users] OVF Storage

----- Original Message -----
> From: "Maor Lipchuk" <mlipchuk(a)redhat.com>
> To: "Wolfgang Bucher" <wolfgang.bucher(a)netland-mn.de>
> Cc: users(a)ovirt.org
> Sent: Wednesday, January 28, 2015 11:01:19 AM
> Subject: Re: [ovirt-users] OVF Storage
>
> ----- Original Message -----
> > From: "Wolfgang Bucher" <wolfgang.bucher(a)netland-mn.de>
> > To: users(a)ovirt.org
> > Sent: Wednesday, January 28, 2015 9:01:45 AM
> > Subject: Re: [ovirt-users] OVF Storage
> >
> > Hello
> >
> > which logs do you need? Attached a screenshot of the messages.
>
> Hi Wolfgang,
>
> The engine.log from your engine machine,
> and the vdsm logs from your hosts should be enough for now.
>
> Thanks,
> Maor

I'm aware of this bug and it's reported by
https://bugzilla.redhat.com/1185615

> > -----Original Message-----
> > From: Allon Mureinik <amureini(a)redhat.com>
> > Sent: Tue, 27 January 2015 23:01
> > To: Wolfgang Bucher <wolfgang.bucher(a)netland-mn.de>
> > CC: users(a)ovirt.org; Liron Aravot <laravot(a)redhat.com>; Gilad Chaplik
> > <gchaplik(a)redhat.com>
> > Subject: Re: [ovirt-users] OVF Storage
> >
> > Wolfgang - can you attach the exact message please? And preferably the
> > logs?
> >
> > Liron/Gilad - isn't this something we've fixed already?
> >
> > From: "Wolfgang Bucher" <wolfgang.bucher(a)netland-mn.de>
> > To: users(a)ovirt.org
> > Sent: Tuesday, January 27, 2015 10:23:46 PM
> > Subject: [ovirt-users] OVF Storage
> >
> > Hello,
> >
> > i have created a new install of oVirt 3.5.1 on el7 with iscsi. All works
> > fine, but i get every hour a message: Failed to create OVF store disk
> > for Storage Domain.
> >
> > any ideas
> >
> > greetings
> > Wolfgang Bucher
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
9 years, 10 months
MaxVmNameLengthNonWindows not set right?
by Matt .
Hi,
My VM doesn't want to start because the vdsm host says the name is over
45 characters or so.
This is set:
# engine-config -g MaxVmNameLengthNonWindows
MaxVmNameLengthNonWindows: 64 version: general
But when I set it to 80, the VM still doesn't start.
What goes wrong here?
It's a Linux VM.
Thanks,
Matt
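
A hedged guess: values changed with engine-config are only read when the engine starts, so the new limit will not apply until ovirt-engine is restarted. A minimal sketch:

  engine-config -s MaxVmNameLengthNonWindows=80
  service ovirt-engine restart   # or: systemctl restart ovirt-engine

  # Verify after the restart:
  engine-config -g MaxVmNameLengthNonWindows

If the VM still refuses to start after a restart, the 45-character message may be coming from host-side validation rather than this engine option.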
9 years, 10 months
Re: [ovirt-users] change network MTU settings without taking all the VMs down?
by Martin Pavlík
Hi Darrell,
you could switch the vNICs to a different/empty network for a while. Or, if possible, you can just shut down the VMs; that should do the trick as well.
HTH
Martin Pavlik
RHEV QE
> On 27 Jan 2015, at 21:59, Donny Davis <donny(a)cloudspin.me> wrote:
>
> I'm on the same version, and I see the issue. Why don't you create a new network with the correct parameters and then switch the VMs' vNICs from the old network over to it?
>
> On Jan 27, 2015 1:39 PM, Darrell Budic <budic(a)onholyground.com> wrote:
>>
>> Try changing that custom MTU and hitting OK. I get:
>>
>> on 3.5 and 3.5.1. What version are you running?
>>
>>> On Jan 27, 2015, at 2:04 PM, Donny Davis <donny(a)cloudspin.me> wrote:
>>>
>>> Maybe I missed your question, but I can change the MTU from the GUI without any problems. As long as you make sure there aren't any VMs on the host you are trying to sync, I have experienced no issues.
>>>
>>> Donny
>>>
>>> -----Original Message-----
>>> From: Darrell Budic [mailto:budic@onholyground.com]
>>> Sent: Tuesday, January 27, 2015 12:30 PM
>>> To: Donny Davis
>>> Cc: users(a)ovirt.org
>>> Subject: Re: [ovirt-users] change network MTU settings without taking all the VMs down?
>>>
>>> Except you can’t change the network’s MTU setting in the GUI in the first place. I’ve thought about doing it in the database, with a migration as you mention. Just checking first for better options :)
>>>
>>>
>>>> On Jan 27, 2015, at 12:08 PM, Donny Davis <donny(a)cloudspin.me> wrote:
>>>>
>>>> Migrate the vms between hosts, and when a host gets to no running VMS you can sync that network.
>>>>
>>>> Donny D
>>>> cloudspin.meOn Jan 27, 2015 10:43 AM, Darrell Budic <budic(a)onholyground.com> wrote:
>>>>>
>>>>>
>>>>> I finally got a couple of networks out from behind a WAN-based layer 2 bridge that required me to run at MTU 1448, and would like to get back up to MTU 1500. I see the GUI won’t let me do that while the network is in use. Is there any way around this, clean or otherwise? Restarting VMs to update them is OK; I’m just trying to avoid having to take everything down at the same time.
>>>>>
>>>>> -Darrell
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users(a)ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>> <Capture.PNG>
>>
>>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
9 years, 10 months
gluster only hosts managed by oVirt?
by Jorick Astrego
Hi,

We are currently testing a setup with dedicated gluster servers and
dedicated compute nodes without disks.

For both we provision some custom OS install and configuration.

Currently I am only able to manage and view gluster volumes from oVirt
when "Enable Gluster Service" is enabled for the Cluster. Right?

There is an "Enable Virt Service" flag, but the option is selected and
greyed out, so I'm not able to disable it.

Can I have gluster-only hosts managed through the oVirt admin interface,
without all the virtualization stuff installed on them?

Met vriendelijke groet, With kind regards,

Jorick Astrego
Netbulae Virtualization Experts
----------------
Tel: 053 20 30 270   info@netbulae.eu   Staalsteden 4-3A   KvK 08198180
Fax: 053 20 30 271   www.netbulae.eu    7547 TA Enschede   BTW NL821234584B01
----------------
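
For context, the engine API models this as a pair of service flags on the cluster, which is presumably why the checkbox is greyed out after creation; creating a new gluster-only cluster may be the cleaner route. A hypothetical sketch against the 3.x REST API, with all names, UUIDs, and credentials as placeholders:

  curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
    -d '<cluster>
          <name>gluster_only</name>
          <cpu id="Intel Conroe Family"/>
          <data_center id="DATACENTER_UUID"/>
          <virt_service>false</virt_service>
          <gluster_service>true</gluster_service>
        </cluster>' \
    https://engine.example.com/api/clusters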
9 years, 10 months
How to disable SSL for oVirt webadmin and userportal??
by greatboy fish
Dear Sir,
When I connect to http://ovirt_FQDN/ovirt-engine/, I can see the links for
"User Portal" and "Administration Portal".
But when I click either of them, it redirects to
https://ovirt_FQDN/ovirt-engine/userportal/?locale=en_US#login
or https://ovirt_FQDN/ovirt-engine/webadmin/?locale=en_US#login.
I don't want to connect via https. Please tell me how to disable SSL for
the oVirt webadmin and userportal (or how to stop the redirect to https)?
9 years, 10 months
name of virtual machine and hostname
by nicola gentile
Good morning,
I would like to ask for some information.
After installing oVirt, I created a pool of VMs with names like
centos-?? (from 1 to 20), and oVirt generated 20 VMs named centos-1,
centos-2, centos-3, and so on.
The problem is that when a VM starts, its hostname is not the VM name
from oVirt but the hostname of the template.
Is it possible to make sure that the VM name and the hostname are identical?
Best regards,
Nicola Gentile
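
One common approach, sketched here on the assumption of an EL6-based template: seal the template so clones don't carry its identity, and let DHCP/DNS or cloud-init set the hostname on first boot. Inside the template, before re-templating:

  # Reset the persistent hostname so each clone picks one up at boot:
  sed -i 's/^HOSTNAME=.*/HOSTNAME=localhost.localdomain/' /etc/sysconfig/network

  # Drop cached udev NIC naming so cloned MACs come up as eth0 again:
  rm -f /etc/udev/rules.d/70-persistent-net.rules

If cloud-init is installed in the template, the Initial Run / Run Once settings in oVirt can push the VM name in as the guest hostname instead.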
9 years, 10 months