Network config
by Koen Vanoppen
Hello everybody,
Just wanted to know if anybody else has this problem: when you restart
the network configuration of a hypervisor, all the bonding settings are
gone...
Should I file a bug report for this, or is it already a known issue?
Kind regards,
Koen
Re: [ovirt-users] change network MTU settings without taking all the VMs down?
by Donny Davis
Migrate the VMs between hosts; when a host gets to no running VMs, you can sync that network.
Donny D
cloudspin.me

On Jan 27, 2015 10:43 AM, Darrell Budic <budic(a)onholyground.com> wrote:
>
> I finally got a couple of networks out from behind a WAN-based layer 2 bridge that required me to run at MTU 1448, and would like to get back up to MTU 1500. I see the GUI won't let me do that while the network is in use. Any way around this, clean or otherwise? Restarting VMs to update them is OK, just trying to avoid having to take everything down at the same time.
>
> -Darrell
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
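A rough way to confirm a host is empty before syncing its network, using
the same read-only virsh command that appears elsewhere in this digest (run
on the host about to be synced):

  # expect an empty table before syncing the network
  virsh -r list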
3.5.0 to 3.5.1 Upgrade Steps
by Tim Macy
What are the proper steps to upgrade the engine from 3.5.0.1-1.el6 to
3.5.1-1.el6?
Should I run engine-upgrade, or engine-setup after a yum update of ovirt-engine-setup?
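For what it's worth, 3.5.x minor upgrades use engine-setup rather than the
older engine-upgrade tool; a minimal sketch (double-check the 3.5.1 release
notes first):

  # on the engine machine
  yum update "ovirt-engine-setup*"
  engine-setup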
Failed to add storage domain
by Koen Vanoppen
Dear all,
We have a small issue in our oVirt environment. When I try to add a Fibre
Channel storage domain, I get the following error:
2015-01-21 08:24:48,705 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
(org.ovirt.thread.pool-8-thread-22) Storage domain
e5d59e58-6408-4f80-911e-a30d0e7ca1fe:BuranIsoDomain is not visible to one
or more hosts. Since the domains type is ISO, hosts status will not be
changed to non-operational
2015-01-21 08:24:54,764 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(ajp--127.0.0.1-8702-3) [2c666536] Command
CreateStorageDomainVDSCommand(HostName = saturnus1, HostId =
1180a1f6-635e-47f6-bba1-871d8c432de0,
storageDomain=StorageDomainStatic[StoragePoolOracle01,
fd6c6779-8353-42f6-b2ff-0c670e4b8a73],
args=qEZ3pE-03I3-5w9M-1XFN-ArBH-2d2e-fSVRF3) execution failed. Exception:
VDSNetworkException: java.util.concurrent.TimeoutException
2015-01-21 08:24:54,769 ERROR
[org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand]
(ajp--127.0.0.1-8702-3) [2c666536] Command
org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand throw Vdc Bll
exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException:
java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR
and code 5022)
2015-01-21 08:24:54,810 ERROR
[org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand]
(ajp--127.0.0.1-8702-3) [2c666536] Transaction rolled-back for command:
org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand.
2015-01-21 08:24:54,839 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-3) [2c666536] Correlation ID: 2c666536, Job ID:
35bc8058-ab79-40b5-be1e-877b88362261, Call Stack: null, Custom Event ID:
-1, Message: Failed to add Storage Domain StoragePoolOracle01. (User: admin)
2015-01-21 08:24:55,107 WARN
[org.ovirt.engine.core.bll.AddVmFromScratchCommand]
(DefaultQuartzScheduler_Worker-25)
In the GUI you'll see this error:
Error while executing action New SAN Storage Domain: Network error during
communication with the Host.
All the hosts are up and all the rest of our storage is also up.
Kind regards,
Koen
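Given the VDSNetworkException/TimeoutException above, a quick first check
might be whether vdsm on the host answers at all; a sketch (vdsClient ships
with vdsm-cli, -s uses SSL; saturnus1 is the host named in the log):

  vdsClient -s saturnus1 getVdsCaps | head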
Update to 3.5.1 scrambled multipath.conf?
by Gianluca Cecchi
Hello,
on my all-in-one installation @home I had 3.5.0 with F20.
Today I updated to 3.5.1.
It seems the update modified /etc/multipath.conf, preventing me from using
my second disk at all...
My system has an internal SSD disk (sda) for the OS and one local storage
domain, and another disk (sdb) with some partitions (one of which also
hosts another local storage domain).
At reboot I was dropped into emergency mode because the partitions on sdb
could not be mounted (they were busy). It took me some time to understand
that sdb had been taken over as a multipath device, which is why its
partitions were busy and could not be mounted.
Here you can see what multipath.conf looked like after the update and reboot:
https://drive.google.com/file/d/0BwoPbcrMv8mvS0FkMnNyMTdVTms/view?usp=sha...
There was no device-mapper-multipath update in yum.log.
Also, it seems that after I changed the file it was reverted again at boot
(I don't know whether initrd/dracut or vdsmd was responsible), so in the
meantime the only thing I could do was make the file immutable with
chattr +i /etc/multipath.conf
After that I was able to reboot and verify that my partitions on sdb were
OK and could be mounted (to be safe, I also ran fsck against them).
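If memory serves, vdsm of that era rewrites /etc/multipath.conf at startup
unless the file carries a private marker, so a gentler alternative to
chattr might be (assumption: exact marker spelling, alongside the existing
"# RHEV REVISION" header the file already has):

  # near the top of /etc/multipath.conf; tells vdsm to leave the file alone
  # RHEV PRIVATE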
The update ran from around 19:20 to 19:34; here is the log in gzip format:
https://drive.google.com/file/d/0BwoPbcrMv8mvWjJDTXU1YjRWOFk/view?usp=sha...
The reboot was done around 21:10-21:14. Here is my /var/log/messages in
gzip format, covering the latest few days:
https://drive.google.com/file/d/0BwoPbcrMv8mvMm1ldXljd3hZWnM/view?usp=sha...
Any suggestions appreciated.
Current multipath.conf (where I also commented out the getuid_callout that
is not used anymore):
[root@tekkaman setup]# cat /etc/multipath.conf
# RHEV REVISION 1.1
blacklist {
    devnode "^(sda|sdb)[0-9]*"
}

defaults {
    polling_interval        5
    # getuid_callout        "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    no_path_retry           fail
    user_friendly_names     no
    flush_on_last_del       yes
    fast_io_fail_tmo        5
    dev_loss_tmo            30
    max_fds                 4096
}
Gianluca
Did my Storage Domain crash, or is this an iSCSI LUN problem?
by shimano
Hi guys,
I'm trying to bring up one of my storage domains, which experienced a
failure. Unfortunately, I'm hitting a very nasty error (Storage domain does
not exist). Could someone tell me how to try to restore this domain?
P.S.
It's an oVirt 3.4.2-1.el6
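Not a recipe, but some read-only first steps against the device that
reports checksum errors in the logs below (mpathc; the VG name is left as a
placeholder):

  pvck -v /dev/mapper/mpathc        # verify LVM metadata on the PV
  vgcfgrestore --list <vg-name>     # list archived VG metadata backups, if any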
**********************************************************************************
/var/log/messages:
Jan 26 12:48:49 node002 vdsm TaskManager.Task ERROR
Task=`10d02993-b585-448f-9a50-bd3e8cda7082`::Unexpected error#012Traceback
(most recent call last):#012 File "/usr/share/vdsm/storage/task.py", line
873, in _run#012 return fn(*args, **kargs)#012 File
"/usr/share/vdsm/logUtils.py", line 45, in wrapper#012 res = f(*args,
**kwargs)#012 File "/usr/share/vdsm/storage/hsm.py", line 2959, in
getVGInfo#012 return dict(info=self.__getVGsInfo([vgUUID])[0])#012 File
"/usr/share/vdsm/storage/hsm.py", line 2892, in __getVGsInfo#012 vgList
= [lvm.getVGbyUUID(vgUUID) for vgUUID in vgUUIDs]#012 File
"/usr/share/vdsm/storage/lvm.py", line 894, in getVGbyUUID#012 raise
se.VolumeGroupDoesNotExist("vg_uuid: %s" %
vgUUID)#012VolumeGroupDoesNotExist: Volume Group does not exist: ('vg_uuid:
gyaCWf-6VKi-lI9W-JT6H-IZdy-rIsB-hTvZ4O',)
Jan 26 12:48:49 node002 kernel: device-mapper: table: 253:26: multipath:
error getting device
Jan 26 12:48:49 node002 kernel: device-mapper: ioctl: error adding target
to table
**********************************************************************************
/var/log/vdsm.log:
Thread-22::ERROR::2015-01-26
12:43:03,376::sdc::137::Storage.StorageDomainCache::(_findDomain) looking
for unfetched domain db52e9cb-7306-43fd-aff3-20831bc2bcaf
Thread-22::ERROR::2015-01-26
12:43:03,377::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain db52e9cb-7306-43fd-aff3-20831bc2bcaf
Thread-22::DEBUG::2015-01-26
12:43:03,377::lvm::373::OperationMutex::(_reloadvgs) Operation 'lvm reload
operation' got the operation mutex
Thread-22::DEBUG::2015-01-26
12:43:03,378::lvm::296::Storage.Misc.excCmd::(cmd) u'/usr/bin/sudo -n
/sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"]
ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3
obtain_device_list_from_udev=0 filter = [
\'a|/dev/mapper/mpathb|/dev/mapper/mpathc|/dev/mapper/mpathd|/dev/mapper/mpathe|/dev/mapper/mpathf|\',
\'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1
wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days =
0 } " --noheadings --units b --nosuffix --separator | -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
db52e9cb-7306-43fd-aff3-20831bc2bcaf' (cwd None)
Thread-22::DEBUG::2015-01-26
12:43:03,462::lvm::296::Storage.Misc.excCmd::(cmd) FAILED: <err> = '
/dev/mapper/mpathc: Checksum error\n /dev/mapper/mpathc: Checksum error\n
Volume group "db52e9cb-7306-43fd-aff3-20831bc2bcaf" not found\n Skipping
volume group db52e9cb-7306-43fd-aff3-20831bc2bcaf\n'; <rc> = 5
Thread-22::WARNING::2015-01-26
12:43:03,466::lvm::378::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] ['
/dev/mapper/mpathc: Checksum error', ' /dev/mapper/mpathc: Checksum
error', ' Volume group "db52e9cb-7306-43fd-aff3-20831bc2bcaf" not found',
' Skipping volume group db52e9cb-7306-43fd-aff3-20831bc2bcaf']
Thread-22::DEBUG::2015-01-26
12:43:03,466::lvm::415::OperationMutex::(_reloadvgs) Operation 'lvm reload
operation' released the operation mutex
Thread-22::ERROR::2015-01-26
12:43:03,477::sdc::143::Storage.StorageDomainCache::(_findDomain) domain
db52e9cb-7306-43fd-aff3-20831bc2bcaf not found
Traceback (most recent call last):
File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
dom = findMethod(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'db52e9cb-7306-43fd-aff3-20831bc2bcaf',)
Thread-22::ERROR::2015-01-26
12:43:03,478::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain)
Error while collecting domain db52e9cb-7306-43fd-aff3-20831bc2bcaf
monitoring information
Traceback (most recent call last):
File "/usr/share/vdsm/storage/domainMonitor.py", line 204, in
_monitorDomain
self.domain = sdCache.produce(self.sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 98, in produce
domain.getRealDomain()
File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
return self._cache._realProduce(self._sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce
domain = self._findDomain(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
dom = findMethod(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'db52e9cb-7306-43fd-aff3-20831bc2bcaf',)
Thread-13::DEBUG::2015-01-26
12:43:05,102::task::595::TaskManager.Task::(_updateState)
Task=`b4e85e37-b216-4d29-a448-0711e370a246`::moving from state init ->
state preparing
Thread-13::INFO::2015-01-26
12:43:05,102::logUtils::44::dispatcher::(wrapper) Run and protect:
repoStats(options=None)
Thread-13::INFO::2015-01-26
12:43:05,103::logUtils::47::dispatcher::(wrapper) Run and protect:
repoStats, Return response: {u'7969d636-1a02-42ba-a50b-2528765cf3d5':
{'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000457574',
'lastCheck': '7.5', 'valid': True},
u'5e1ca1b6-4706-4c79-8924-b8db741c929f': {'code': 0, 'version': 3,
'acquired': True, 'delay': '0.00100094', 'lastCheck': '6.3', 'valid':
True}, u'cb85e6cd-df54-4151-8f3b-7e6d72b7372d': {'code': 0, 'version': 3,
'acquired': True, 'delay': '0.463061', 'lastCheck': '4.9', 'valid': True},
u'db52e9cb-7306-43fd-aff3-20831bc2bcaf': {'code': 358, 'version': -1,
'acquired': False, 'delay': '0', 'lastCheck': '1.6', 'valid': False},
u'5f595801-aaa5-42c7-b829-7a34a636407e': {'code': 0, 'version': 3,
'acquired': True, 'delay': '0.000942979', 'lastCheck': '7.9', 'valid':
True}, u'c1ebd0f8-fa32-4fe3-8569-fb7d4ad8faf4': {'code': 0, 'version': 0,
'acquired': True, 'delay': '0.000424499', 'lastCheck': '7.3', 'valid':
True}}
Thread-13::DEBUG::2015-01-26
12:43:05,103::task::1185::TaskManager.Task::(prepare)
Task=`b4e85e37-b216-4d29-a448-0711e370a246`::finished:
{u'7969d636-1a02-42ba-a50b-2528765cf3d5': {'code': 0, 'version': 0,
'acquired': True, 'delay': '0.000457574', 'lastCheck': '7.5', 'valid':
True}, u'5e1ca1b6-4706-4c79-8924-b8db741c929f': {'code': 0, 'version': 3,
'acquired': True, 'delay': '0.00100094', 'lastCheck': '6.3', 'valid':
True}, u'cb85e6cd-df54-4151-8f3b-7e6d72b7372d': {'code': 0, 'version': 3,
'acquired': True, 'delay': '0.463061', 'lastCheck': '4.9', 'valid': True},
u'db52e9cb-7306-43fd-aff3-20831bc2bcaf': {'code': 358, 'version': -1,
'acquired': False, 'delay': '0', 'lastCheck': '1.6', 'valid': False},
u'5f595801-aaa5-42c7-b829-7a34a636407e': {'code': 0, 'version': 3,
'acquired': True, 'delay': '0.000942979', 'lastCheck': '7.9', 'valid':
True}, u'c1ebd0f8-fa32-4fe3-8569-fb7d4ad8faf4': {'code': 0, 'version': 0,
'acquired': True, 'delay': '0.000424499', 'lastCheck': '7.3', 'valid':
True}}
**********************************************************************************
[root@node002 shim]# multipath -ll
mpathe (1NODE_001_LUN01) dm-6 SHIMI,VIRTUAL-DISK
size=977G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 21:0:0:1 sdg 8:96 active ready running
mpathd (1NODE_003_LUN01) dm-7 SHIMI,VIRTUAL-DISK
size=977G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 20:0:0:1 sdf 8:80 active ready running
mpathc (1NODE_002_LUN01) dm-4 SHIMI,VIRTUAL-DISK
size=977G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 18:0:0:1 sdd 8:48 active ready running
mpathb (1ATA_MARVELL_Raid_VD_0_1c3c8ecf5cf00010) dm-1 ATA,MARVELL Raid VD
size=1.8T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 0:0:0:0 sda 8:0 active ready running
mpathf (1MANAGER_LUN01) dm-5 SHIMI,VIRTUAL-DISK
size=500G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 19:0:0:1 sde 8:64 active ready running
**********************************************************************************
[root@node002 shim]# lsblk
NAME                                                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdb                                                             8:16   0 298.1G  0 disk
├─sdb1                                                          8:17   0     1G  0 part  /boot
├─sdb2                                                          8:18   0     4G  0 part  [SWAP]
└─sdb3                                                          8:19   0 293.1G  0 part
  └─vg_node002-LogVol00 (dm-0)                                253:0    0 293.1G  0 lvm   /
sda                                                             8:0    0   1.8T  0 disk
└─sda1                                                          8:1    0   1.8T  0 part
sdd                                                             8:48   0 976.6G  0 disk
└─mpathc (dm-4)                                               253:4    0 976.6G  0 mpath
sde                                                             8:64   0   500G  0 disk
└─mpathf (dm-5)                                               253:5    0   500G  0 mpath
  ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-metadata (dm-15) 253:15   0   512M  0 lvm
  ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-ids (dm-16)      253:16   0   128M  0 lvm
  ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-leases (dm-18)   253:18   0     2G  0 lvm
  ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-outbox (dm-20)   253:20   0   128M  0 lvm
  ├─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-inbox (dm-21)    253:21   0   128M  0 lvm
  └─cb85e6cd--df54--4151--8f3b--7e6d72b7372d-master (dm-22)   253:22   0     1G  0 lvm
sdf                                                             8:80   0 976.6G  0 disk
└─mpathd (dm-7)                                               253:7    0 976.6G  0 mpath
  ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-metadata (dm-14) 253:14   0   512M  0 lvm
  ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-ids (dm-17)      253:17   0   128M  0 lvm
  ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-leases (dm-19)   253:19   0     2G  0 lvm
  ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-outbox (dm-23)   253:23   0   128M  0 lvm
  ├─5e1ca1b6--4706--4c79--8924--b8db741c929f-inbox (dm-24)    253:24   0   128M  0 lvm
  └─5e1ca1b6--4706--4c79--8924--b8db741c929f-master (dm-25)   253:25   0     1G  0 lvm
sdg                                                             8:96   0 976.6G  0 disk
└─mpathe (dm-6)                                               253:6    0 976.6G  0 mpath
  ├─5f595801--aaa5--42c7--b829--7a34a636407e-metadata (dm-8)  253:8    0   512M  0 lvm
  ├─5f595801--aaa5--42c7--b829--7a34a636407e-ids (dm-9)       253:9    0   128M  0 lvm
  ├─5f595801--aaa5--42c7--b829--7a34a636407e-leases (dm-10)   253:10   0     2G  0 lvm
  ├─5f595801--aaa5--42c7--b829--7a34a636407e-outbox (dm-11)   253:11   0   128M  0 lvm
  ├─5f595801--aaa5--42c7--b829--7a34a636407e-inbox (dm-12)    253:12   0   128M  0 lvm
  └─5f595801--aaa5--42c7--b829--7a34a636407e-master (dm-13)   253:13   0     1G  0 lvm
**********************************************************************************
[root@node002 shim]# multipath -v3
Jan 26 12:46:28 | ram0: device node name blacklisted
Jan 26 12:46:28 | ram1: device node name blacklisted
Jan 26 12:46:28 | ram2: device node name blacklisted
Jan 26 12:46:28 | ram3: device node name blacklisted
Jan 26 12:46:28 | ram4: device node name blacklisted
Jan 26 12:46:28 | ram5: device node name blacklisted
Jan 26 12:46:28 | ram6: device node name blacklisted
Jan 26 12:46:28 | ram7: device node name blacklisted
Jan 26 12:46:28 | ram8: device node name blacklisted
Jan 26 12:46:28 | ram9: device node name blacklisted
Jan 26 12:46:28 | ram10: device node name blacklisted
Jan 26 12:46:28 | ram11: device node name blacklisted
Jan 26 12:46:28 | ram12: device node name blacklisted
Jan 26 12:46:28 | ram13: device node name blacklisted
Jan 26 12:46:28 | ram14: device node name blacklisted
Jan 26 12:46:28 | ram15: device node name blacklisted
Jan 26 12:46:28 | loop0: device node name blacklisted
Jan 26 12:46:28 | loop1: device node name blacklisted
Jan 26 12:46:28 | loop2: device node name blacklisted
Jan 26 12:46:28 | loop3: device node name blacklisted
Jan 26 12:46:28 | loop4: device node name blacklisted
Jan 26 12:46:28 | loop5: device node name blacklisted
Jan 26 12:46:28 | loop6: device node name blacklisted
Jan 26 12:46:28 | loop7: device node name blacklisted
Jan 26 12:46:28 | sdb: not found in pathvec
Jan 26 12:46:28 | sdb: mask = 0x3f
Jan 26 12:46:28 | sdb: dev_t = 8:16
Jan 26 12:46:28 | sdb: size = 625142448
Jan 26 12:46:28 | sdb: subsystem = scsi
Jan 26 12:46:28 | sdb: vendor = ATA
Jan 26 12:46:28 | sdb: product = WDC WD3200AAJS-6
Jan 26 12:46:28 | sdb: rev = 03.0
Jan 26 12:46:28 | sdb: h:b:t:l = 10:0:0:0
Jan 26 12:46:28 | sdb: serial = WD-WMAV2HM46197
Jan 26 12:46:28 | sdb: get_state
Jan 26 12:46:28 | sdb: path checker = directio (config file default)
Jan 26 12:46:28 | sdb: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sdb: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sdb: state = 3
Jan 26 12:46:28 | sdb: getuid = /sbin/scsi_id --whitelisted
--replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sdb: uid = 1ATA_WDC_WD3200AAJS-60Z0A0_WD-WMAV2HM46197
(callout)
Jan 26 12:46:28 | sdb: state = running
Jan 26 12:46:28 | sdb: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sdb: prio = const (config file default)
Jan 26 12:46:28 | sdb: const prio = 1
Jan 26 12:46:28 | sda: not found in pathvec
Jan 26 12:46:28 | sda: mask = 0x3f
Jan 26 12:46:28 | sda: dev_t = 8:0
Jan 26 12:46:28 | sda: size = 3904897024
Jan 26 12:46:28 | sda: subsystem = scsi
Jan 26 12:46:28 | sda: vendor = ATA
Jan 26 12:46:28 | sda: product = MARVELL Raid VD
Jan 26 12:46:28 | sda: rev = MV.R
Jan 26 12:46:28 | sda: h:b:t:l = 0:0:0:0
Jan 26 12:46:28 | sda: serial = 1c3c8ecf5cf00010
Jan 26 12:46:28 | sda: get_state
Jan 26 12:46:28 | sda: path checker = directio (config file default)
Jan 26 12:46:28 | sda: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sda: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sda: state = 3
Jan 26 12:46:28 | sda: getuid = /sbin/scsi_id --whitelisted
--replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sda: uid = 1ATA_MARVELL_Raid_VD_0_1c3c8ecf5cf00010
(callout)
Jan 26 12:46:28 | sda: state = running
Jan 26 12:46:28 | sda: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sda: prio = const (config file default)
Jan 26 12:46:28 | sda: const prio = 1
Jan 26 12:46:28 | dm-0: device node name blacklisted
Jan 26 12:46:28 | sdc: not found in pathvec
Jan 26 12:46:28 | sdc: mask = 0x3f
Jan 26 12:46:28 | sdc: dev_t = 8:32
Jan 26 12:46:28 | sdc: size = 0
Jan 26 12:46:28 | sdc: subsystem = scsi
Jan 26 12:46:28 | sdc: vendor = Multi
Jan 26 12:46:28 | sdc: product = Flash Reader
Jan 26 12:46:28 | sdc: rev = 1.00
Jan 26 12:46:28 | sdc: h:b:t:l = 12:0:0:0
Jan 26 12:46:28 | dm-1: device node name blacklisted
Jan 26 12:46:28 | dm-2: device node name blacklisted
Jan 26 12:46:28 | dm-3: device node name blacklisted
Jan 26 12:46:28 | sdd: not found in pathvec
Jan 26 12:46:28 | sdd: mask = 0x3f
Jan 26 12:46:28 | sdd: dev_t = 8:48
Jan 26 12:46:28 | sdd: size = 2048000000
Jan 26 12:46:28 | sdd: subsystem = scsi
Jan 26 12:46:28 | sdd: vendor = SHIMI
Jan 26 12:46:28 | sdd: product = VIRTUAL-DISK
Jan 26 12:46:28 | sdd: rev = 0001
Jan 26 12:46:28 | sdd: h:b:t:l = 18:0:0:1
Jan 26 12:46:28 | sdd: tgt_node_name = pl.mycomp.shimi:node002.target0
Jan 26 12:46:28 | sdd: serial = beaf11
Jan 26 12:46:28 | sdd: get_state
Jan 26 12:46:28 | sdd: path checker = directio (config file default)
Jan 26 12:46:28 | sdd: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sdd: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sdd: state = 3
Jan 26 12:46:28 | sdd: getuid = /sbin/scsi_id --whitelisted
--replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sdd: uid = 1NODE_002_LUN01 (callout)
Jan 26 12:46:28 | sdd: state = running
Jan 26 12:46:28 | sdd: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sdd: prio = const (config file default)
Jan 26 12:46:28 | sdd: const prio = 1
Jan 26 12:46:28 | dm-4: device node name blacklisted
Jan 26 12:46:28 | sde: not found in pathvec
Jan 26 12:46:28 | sde: mask = 0x3f
Jan 26 12:46:28 | sde: dev_t = 8:64
Jan 26 12:46:28 | sde: size = 1048576000
Jan 26 12:46:28 | sde: subsystem = scsi
Jan 26 12:46:28 | sde: vendor = SHIMI
Jan 26 12:46:28 | sde: product = VIRTUAL-DISK
Jan 26 12:46:28 | sde: rev = 0001
Jan 26 12:46:28 | sde: h:b:t:l = 19:0:0:1
Jan 26 12:46:28 | sde: tgt_node_name = pl.mycomp.shimi:manager.target0
Jan 26 12:46:28 | sde: serial = beaf11
Jan 26 12:46:28 | sde: get_state
Jan 26 12:46:28 | sde: path checker = directio (config file default)
Jan 26 12:46:28 | sde: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sde: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sde: state = 3
Jan 26 12:46:28 | sde: getuid = /sbin/scsi_id --whitelisted
--replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sde: uid = 1MANAGER_LUN01 (callout)
Jan 26 12:46:28 | sde: state = running
Jan 26 12:46:28 | sde: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sde: prio = const (config file default)
Jan 26 12:46:28 | sde: const prio = 1
Jan 26 12:46:28 | sdf: not found in pathvec
Jan 26 12:46:28 | sdf: mask = 0x3f
Jan 26 12:46:28 | sdf: dev_t = 8:80
Jan 26 12:46:28 | sdf: size = 2048000000
Jan 26 12:46:28 | sdf: subsystem = scsi
Jan 26 12:46:28 | sdf: vendor = SHIMI
Jan 26 12:46:28 | sdf: product = VIRTUAL-DISK
Jan 26 12:46:28 | sdf: rev = 0001
Jan 26 12:46:28 | sdf: h:b:t:l = 20:0:0:1
Jan 26 12:46:28 | sdf: tgt_node_name = pl.mycomp.shimi:node003.target0
Jan 26 12:46:28 | sdf: serial = beaf11
Jan 26 12:46:28 | sdf: get_state
Jan 26 12:46:28 | sdf: path checker = directio (config file default)
Jan 26 12:46:28 | sdf: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sdf: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sdf: state = 3
Jan 26 12:46:28 | sdf: getuid = /sbin/scsi_id --whitelisted
--replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sdf: uid = 1NODE_003_LUN01 (callout)
Jan 26 12:46:28 | sdf: state = running
Jan 26 12:46:28 | sdf: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sdf: prio = const (config file default)
Jan 26 12:46:28 | sdf: const prio = 1
Jan 26 12:46:28 | sdg: not found in pathvec
Jan 26 12:46:28 | sdg: mask = 0x3f
Jan 26 12:46:28 | sdg: dev_t = 8:96
Jan 26 12:46:28 | sdg: size = 2048000000
Jan 26 12:46:28 | sdg: subsystem = scsi
Jan 26 12:46:28 | sdg: vendor = SHIMI
Jan 26 12:46:28 | sdg: product = VIRTUAL-DISK
Jan 26 12:46:28 | sdg: rev = 0001
Jan 26 12:46:28 | sdg: h:b:t:l = 21:0:0:1
Jan 26 12:46:28 | sdg: tgt_node_name = pl.mycomp.shimi:node001.target0
Jan 26 12:46:28 | sdg: serial = beaf11
Jan 26 12:46:28 | sdg: get_state
Jan 26 12:46:28 | sdg: path checker = directio (config file default)
Jan 26 12:46:28 | sdg: checker timeout = 30000 ms (sysfs setting)
Jan 26 12:46:28 | sdg: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sdg: state = 3
Jan 26 12:46:28 | sdg: getuid = /sbin/scsi_id --whitelisted
--replace-whitespace --device=/dev/%n (config file default)
Jan 26 12:46:28 | sdg: uid = 1NODE_001_LUN01 (callout)
Jan 26 12:46:28 | sdg: state = running
Jan 26 12:46:28 | sdg: detect_prio = 1 (config file default)
Jan 26 12:46:28 | sdg: prio = const (config file default)
Jan 26 12:46:28 | sdg: const prio = 1
Jan 26 12:46:28 | dm-5: device node name blacklisted
Jan 26 12:46:28 | dm-6: device node name blacklisted
Jan 26 12:46:28 | dm-7: device node name blacklisted
Jan 26 12:46:28 | dm-8: device node name blacklisted
Jan 26 12:46:28 | dm-9: device node name blacklisted
Jan 26 12:46:28 | dm-10: device node name blacklisted
Jan 26 12:46:28 | dm-11: device node name blacklisted
Jan 26 12:46:28 | dm-12: device node name blacklisted
Jan 26 12:46:28 | dm-13: device node name blacklisted
Jan 26 12:46:28 | dm-14: device node name blacklisted
Jan 26 12:46:28 | dm-15: device node name blacklisted
Jan 26 12:46:28 | dm-16: device node name blacklisted
Jan 26 12:46:28 | dm-17: device node name blacklisted
Jan 26 12:46:28 | dm-18: device node name blacklisted
Jan 26 12:46:28 | dm-19: device node name blacklisted
Jan 26 12:46:28 | dm-20: device node name blacklisted
Jan 26 12:46:28 | dm-21: device node name blacklisted
Jan 26 12:46:28 | dm-22: device node name blacklisted
Jan 26 12:46:28 | dm-23: device node name blacklisted
Jan 26 12:46:28 | dm-24: device node name blacklisted
Jan 26 12:46:28 | dm-25: device node name blacklisted
===== paths list =====
uuid                                       hcil     dev dev_t pri dm_st chk_st
1ATA_WDC_WD3200AAJS-60Z0A0_WD-WMAV2HM46197 10:0:0:0 sdb 8:16  1   undef ready
1ATA_MARVELL_Raid_VD_0_1c3c8ecf5cf00010    0:0:0:0  sda 8:0   1   undef ready
                                           12:0:0:0 sdc 8:32  -1  undef faulty
1NODE_002_LUN01                            18:0:0:1 sdd 8:48  1   undef ready
1MANAGER_LUN01                             19:0:0:1 sde 8:64  1   undef ready
1NODE_003_LUN01                            20:0:0:1 sdf 8:80  1   undef ready
1NODE_001_LUN01                            21:0:0:1 sdg 8:96  1   undef ready
Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:96 1
Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:96 A 0
Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:80 1
Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:80 A 0
Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:48 1
Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:48 A 0
Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:0 1
Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:0 A 0
Jan 26 12:46:28 | params = 0 0 1 1 round-robin 0 1 1 8:64 1
Jan 26 12:46:28 | status = 2 0 0 0 1 1 A 0 1 0 8:64 A 0
Jan 26 12:46:28 | Found matching wwid
[1ATA_WDC_WD3200AAJS-60Z0A0_WD-WMAV2HM46197] in bindings file. Setting
alias to mpatha
Jan 26 12:46:28 | sdb: ownership set to mpatha
Jan 26 12:46:28 | sdb: not found in pathvec
Jan 26 12:46:28 | sdb: mask = 0xc
Jan 26 12:46:28 | sdb: get_state
Jan 26 12:46:28 | sdb: state = running
Jan 26 12:46:28 | directio: starting new request
Jan 26 12:46:28 | directio: io finished 4096/0
Jan 26 12:46:28 | sdb: state = 3
Jan 26 12:46:28 | sdb: state = running
Jan 26 12:46:28 | sdb: const prio = 1
Jan 26 12:46:28 | mpatha: pgfailover = -1 (internal default)
Jan 26 12:46:28 | mpatha: pgpolicy = failover (internal default)
Jan 26 12:46:28 | mpatha: selector = round-robin 0 (internal default)
Jan 26 12:46:28 | mpatha: features = 0 (internal default)
Jan 26 12:46:28 | mpatha: hwhandler = 0 (internal default)
Jan 26 12:46:28 | mpatha: rr_weight = 1 (internal default)
Jan 26 12:46:28 | mpatha: minio = 1 rq (config file default)
Jan 26 12:46:28 | mpatha: no_path_retry = -1 (config file default)
Jan 26 12:46:28 | pg_timeout = NONE (internal default)
Jan 26 12:46:28 | mpatha: fast_io_fail_tmo = 5 (config file default)
Jan 26 12:46:28 | mpatha: dev_loss_tmo = 30 (config file default)
Jan 26 12:46:28 | mpatha: retain_attached_hw_handler = 1 (config file
default)
Jan 26 12:46:28 | failed to find rport_id for target10:0:0
Jan 26 12:46:28 | mpatha: set ACT_CREATE (map does not exist)
Jan 26 12:46:28 | mpatha: domap (0) failure for create/reload map
Jan 26 12:46:28 | mpatha: ignoring map
**********************************************************************************
[root@node002 shim]# iscsiadm -m session -o show
tcp: [6] 192.168.1.12:3260,1 pl.mycomp.shimi:node002.target0
tcp: [7] 192.168.1.11:3260,1 pl.mycomp.shimi:manager.target0
tcp: [8] 192.168.1.14:3260,1 pl.mycomp.shimi:node003.target0
tcp: [9] 192.168.1.13:3260,1 pl.mycomp.shimi:node001.target0
**********************************************************************************
[root@node002 shim]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
**********************************************************************************
[root@node002 shim]# sestatus
SELinux status: disabled
ovirt 3.5 self-hosted engine and ovirtmgmt not synchronised
by Kostyrev Aleksandr
Hello!
I'm testing ovirt-3.5 with self-hosted engine.
Today I've noticed that in the "Setup Host Networks" screen of every host,
ovirtmgmt is shown as "Not synchronised".
If I click "Save network", no errors occur, but when I issue "ifconfig" on
that node I see no ovirtmgmt interface; it gets removed.
but
virsh -r net-list
Name State Autostart Persistent
--------------------------------------------------
;vdsmdummy; active no no
vdsm-ovirtmgmt active yes yes
vdsm-VMs active yes yes
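One way to compare what vdsm has persisted against what is actually
running; a sketch (the persistence path is an assumption for 3.5's unified
network persistence; vdsClient ships with vdsm-cli):

  vdsClient -s 0 getVdsCaps | grep -A4 ovirtmgmt
  cat /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt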
--
Best regards,
Aleksandr Kostyrev,
system administrator,
www.tutu.ru
skype a.kostyrev
Tel. +7(925) 237-7668
Host remains Non-Responsive after reboot
by Rob Abshear
I am running oVirt Engine Version 3.5.0.1-1.el6. I have 4 hosts in the
cluster. Each host has a drac5 and it is configured and working. I am
trying to simulate a node failure. I am running one HA VM on one of the
hosts for testing. I simulate the failure by powering off the host with
the VM running.
Here is what is happening:
* Host is powered off.
* ~4 minutes pass and the host is recognized as not responding.
* Automatic fence runs and the VM migrates. Another host in the cluster is
chosen as a proxy to execute the Status command on the host.
* Same host is chosen as proxy to execute the Start command on the host.
* Same host is chosen as proxy to execute the Status command on the host.
* The host DOES physically start.
* The host never shows a status of UP.
* I select "confirm host has been rebooted" and I see a manual fence start.
* Host stays non-responsive.
* I put the host in maintenance and then activate it.
* Host is still non-responsive.
* I put the host in maintenance and do a reinstall.
* Reinstall finishes and the host becomes UP.
So, everything seems to go fine with the HA functionality, but the host
never recovers without being reinstalled. Please let me know which logs you
need to look at to help me out with this.
Thanks
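For reference, the logs usually requested for fencing issues like this are
in the standard locations:

  tail -n 200 /var/log/ovirt-engine/engine.log   # on the engine
  tail -n 200 /var/log/vdsm/vdsm.log             # on the affected host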
Can not add data domain via nfs
by Xiaoqiang Zhou
Hi ALL:
have some wrong when add data domain into a data center.
I can not add a data domain from web panel.
error log on vdsm server:
Jan 27 14:49:12 localhost rpc.mountd[3020]: authenticated mount
request from 192.168.60.38:730 for /opt/ovirt-node-nfs/data
(/opt/ovirt-node-nfs/data)
Jan 27 14:49:12 localhost kernel: device-mapper: table: 253:5: multipath:
error getting device
Jan 27 14:49:12 localhost kernel: device-mapper: ioctl: error adding target
to table
Jan 27 14:49:12 localhost multipathd: dm-5: remove map (uevent)
Jan 27 14:49:12 localhost multipathd: dm-5: remove map (uevent)
Jan 27 14:49:13 localhost avahi-daemon[1223]: Received response from host
192.168.61.145 with invalid source port 40237 on interface 'ovirtmgmt.0'
Jan 27 14:49:14 localhost kernel: device-mapper: table: 253:5: multipath:
error getting device
Jan 27 14:49:14 localhost kernel: device-mapper: ioctl: error adding target
to table
Jan 27 14:49:14 localhost multipathd: dm-5: remove map (uevent)
Jan 27 14:49:14 localhost multipathd: dm-5: remove map (uevent)
[root@localhost etc]# cat multipath.conf
# RHEV REVISION 1.1
defaults {
    polling_interval        5
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    no_path_retry           fail
    user_friendly_names     no
    flush_on_last_del       yes
    fast_io_fail_tmo        5
    dev_loss_tmo            30
    max_fds                 4096
}
devices {
    device {
        vendor                  "HITACHI"
        product                 "DF.*"
        getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    }
    device {
        vendor                  "COMPELNT"
        product                 "Compellent Vol"
        no_path_retry           fail
    }
    device {
        # multipath.conf.default
        vendor                  "DGC"
        product                 ".*"
        product_blacklist       "LUNZ"
        path_grouping_policy    "group_by_prio"
        path_checker            "emc_clariion"
        hardware_handler        "1 emc"
        prio                    "emc"
        failback                immediate
        rr_weight               "uniform"
        # vdsm required configuration
        getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
        features                "0"
        no_path_retry           fail
    }
Can someone tell me how to fix this issue? Thanks.
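A common cause of this symptom is export ownership: oVirt expects the NFS
export to be owned by vdsm:kvm (uid/gid 36 on both). A sketch against the
path from the log above:

  chown 36:36 /opt/ovirt-node-nfs/data
  chmod 0755 /opt/ovirt-node-nfs/data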
hosted-engine setup ovirtmgmt bridge
by Mikola Rose
Hi there again list users;
On a hosted-engine --deploy on a machine that has 2 network cards
em1 192.168.0.178 General Network
em2 192.168.1.151 Net that NFS server is on, no dns no gateway
Which one should I set as the ovirtmgmt bridge?
"Please indicate a nic to set ovirtmgmt bridge on: (em1, em2) [em1]"
Mik