[ANN] oVirt 4.1.1 Async release candidate
by Sandro Bonazzola
Hi,
following the oVirt 4.1.1 GA, the oVirt team identified some bugs worth
addressing outside the usual release cycle.
For this reason an async release including these fixes is under preparation.
An update to the following packages has been pushed to the pre-release
repository for testing.
- cockpit-ovirt-0.10.7-0.0.16
- imgbased-0.9.20
- ovirt-engine-4.1.1.8
- ovirt-engine-metrics-1.0.2
- ovirt-release41-4.1.1.1
- ovirt-web-ui-0.1.2-4
- vdsm-jsonrpc-java-1.3.10
- vdsm v4.19.10.1
In order to test it, you'll need the pre-release repository enabled:
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41-pre.rpm
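Once the repository is enabled, the usual minor-update flow should apply (a sketch, adjust to your environment): on the engine machine
yum update "ovirt-*-setup*"
engine-setup
and a plain "yum update" on the hosts.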
An update to the release notes, including the bugs related to the above
packages and the documentation text updates applied in Bugzilla, has been
pushed to https://github.com/oVirt/ovirt-site/pull/888 for review.
On behalf of the oVirt team,
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
rename storage domain
by Gianluca Cecchi
Hello,
I'm on 4.1.1 and imported a storage domain.
In the import window there is an option to set a name, but afterwards the
name actually stays the same as the source one...
Is this expected?
Is there any way to change the name of a storage domain?
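For instance, is a rename through the REST API supposed to work? An untested sketch of what I mean (engine host, credentials and UUID are placeholders):
curl -k -u admin@internal:PASSWORD -X PUT \
  -H 'Content-Type: application/xml' \
  -d '<storage_domain><name>NEW_NAME</name></storage_domain>' \
  'https://ENGINE_FQDN/ovirt-engine/api/storagedomains/SD_UUID'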
thanks,
Gianluca
Moving from fc based to iSCSI based
by Gianluca Cecchi
Hello,
I have to move some VMs from FCP based storage domain to iSCSI based
storage domain.
Can I do it offline or online, keeping in mind that:
- Host1 and Host2 have access only to the FCP SD
- Host3 and Host4 have access only to the iSCSI SD
They are all managed by the same engine in the same DC, but in different
clusters.
I suppose the copy happens over the network, correct?
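To be concrete, I imagine moving the disks one by one, e.g. with the move action of the REST API on each disk; an untested sketch with placeholder IDs:
curl -k -u admin@internal:PASSWORD -X POST \
  -H 'Content-Type: application/xml' \
  -d '<action><storage_domain id="ISCSI_SD_UUID"/></action>' \
  'https://ENGINE_FQDN/ovirt-engine/api/disks/DISK_UUID/move'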
Thanks in advance,
Gianluca
Question about firewall on hypervisor
by Gianluca Cecchi
Suppose I want to disable the firewall on an already installed hypervisor
(e.g. because I want to set up OVN, which currently, if I remember
correctly, requires the firewall to be disabled). Can I simply disable the
related services with
systemctl stop iptables
systemctl disable iptables
systemctl stop firewalld
systemctl disable firewalld
Or is there anything else to do on the hypervisor and/or engine side?
I don't see anything in the web admin GUI when editing the host, while when
adding a host there is the checkbox "Automatically configure host firewall"...
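After disabling, I suppose a quick sanity check would just be:
systemctl is-active iptables firewalld
iptables -L -n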
Thanks,
Gianluca
Question about import domain and templates
by Gianluca Cecchi
Hello,
in environment A I have 2 storage domains ST1 and ST2
I have to detach ST2 and import it into another environment B.
On ST2 I have some VMs whose template disk is currently on ST1...
Does this constitute a problem when importing at the destination?
I see that for template disks I can only copy them, not move them...
Can I copy the template disk from ST1 to ST2 before detaching, so that it
will be imported at the destination too?
What should I do at the destination, then?
Thanks,
Gianluca
Update of ovirt-guest-agent for Debian Jessie
by Christophe TREFOIS
Hi,
Could you update ovirt-guest-agent? The latest is 1.0.13, but on the Debian
mirror it is 1.0.10.
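For reference, the installed and candidate versions can be checked on the guest with:
apt-cache policy ovirt-guest-agent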
Thanks,
Christophe
info on performance of cloning a snapshot
by Gianluca Cecchi
Hello,
I'm on a 4.1.1 environment.
The storage domain is iSCSI (with multipath) and able to write at about 200
MBytes/s in a test inside a VM (100MB x 2).
While cloning a snapshot, the performance seems quite a bit lower...
It begins at about 12MB/s (6 x 2), then increases to about 30MB/s (15 x 2),
with a maximum of 50MB/s (25 x 2), all balanced across the two iSCSI network
adapters.
The related process on the hypervisor:
vdsm 2772 3701 4 15:47 ? 00:00:08 /usr/bin/qemu-img convert
-p -t none -T none -f qcow2
/rhev/data-center/ef17cad6-7724-4cd8-96e3-9af6e529db51/94795322-92a7-4904-80c1-83d6b629d38a/images/073519dc-440d-418d-9e0c-a7ac4812ea82/845a9008-8ece-4cf2-9828-f286640b5f0e
-O qcow2 -o compat=1.1
/rhev/data-center/mnt/blockSD/94795322-92a7-4904-80c1-83d6b629d38a/images/54c40b95-c0e5-42db-a6ba-c00f8d337fdc/5987e84f-28be-46c4-bc3a-688d5f7fd025
Not a big problem, I'd just like to know: is there any limiting/tunable
factor in any vdsm/engine parameter?
Or is it related to the "qemu-img convert" command itself, or to anything else?
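For comparison I could measure the raw sequential read speed from the source volume, e.g. (read-only test, path abbreviated):
dd if=/rhev/data-center/.../845a9008-8ece-4cf2-9828-f286640b5f0e of=/dev/null bs=1M count=1024 iflag=direct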
Thanks,
Gianluca
general storage exception ... not using lvmetad
by Gianluca Cecchi
Hello,
on my 4.1 environment I sometimes get this kind of message, typically when
I create a snapshot.
Please note that the snapshot is created correctly: I get this event between
the "snapshot creation initiated" and "completed" events.
VDSM ovmsrv05 command TeardownImageVDS failed: Cannot deactivate Logical
Volume: ('General Storage Exception: ("5 [] [\' WARNING: Not using lvmetad
because config setting use_lvmetad=0.\', \' WARNING: To avoid corruption,
rescan devices to make changes visible (pvscan --cache).\', \' Logical
volume
922b5269-ab56-4c4d-838f-49d33427e2ab/79350ec5-eea5-458b-a3ee-ba394d2cda27
in use.\', \' Logical volume
922b5269-ab56-4c4d-838f-49d33427e2ab/3590f38b-1e3b-4170-a901-801ee5d21d59
in
use.\']\\n922b5269-ab56-4c4d-838f-49d33427e2ab/[\'3590f38b-1e3b-4170-a901-801ee5d21d59\',
\'79350ec5-eea5-458b-a3ee-ba394d2cda27\']",)',)
What is the exact meaning, and how can I cross-check to resolve the possible
anomaly...?
What about the lvmetad reference inside the message? Is it an option that
can be configured in oVirt, or not at all for now?
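To cross-check on my side, I suppose I can verify whether the LVs are really still open with something like:
lvs -o lv_name,lv_attr 922b5269-ab56-4c4d-838f-49d33427e2ab
dmsetup info -c | grep -e 79350ec5 -e 3590f38b
(if I read the lvs man page correctly, an 'o' in the 6th lv_attr character means the device is open)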
Thanks,
Gianluca
VM has been paused due to storage I/O problem
by Gianluca Cecchi
Hello,
my test environment is composed of 2 old HP BL685c G1 blades (ovmsrv05 and
ovmsrv06), connected in a SAN through FC switches to an old IBM DS4700
storage array.
Apart from being old, they all seem OK from a hardware point of view.
I have configured oVirt 4.0.6 and an FCP storage domain.
The hosts are plain CentOS 7.3 servers, fully updated.
It is not a hosted engine environment: the manager is a VM outside of the
cluster.
I have configured power management on both hosts and it works well.
At the moment I have only one VM for testing, and it is doing almost nothing.
Starting point: ovmsrv05 is in maintenance (since about 2 days) and the VM
is running on ovmsrv06.
I updated the qemu-kvm package on ovmsrv05 and then restarted it from the
web admin GUI:
Power Mgmt --> Restart
Sequence of events in the events pane, with the problem in the subject:
Jan 31, 2017 10:29:43 AM Host ovmsrv05 power management was verified
successfully.
Jan 31, 2017 10:29:43 AM Status of host ovmsrv05 was set to Up.
Jan 31, 2017 10:29:38 AM Executing power management status on Host ovmsrv05
using Proxy Host ovmsrv06 and Fence Agent ilo:10.4.192.212.
Jan 31, 2017 10:29:29 AM Activation of host ovmsrv05 initiated by
admin@internal-authz.
Jan 31, 2017 10:28:05 AM VM ol65 has recovered from paused back to up.
Jan 31, 2017 10:27:55 AM VM ol65 has been paused due to storage I/O problem.
Jan 31, 2017 10:27:55 AM VM ol65 has been paused.
Jan 31, 2017 10:25:52 AM Host ovmsrv05 was restarted by
admin@internal-authz.
Jan 31, 2017 10:25:52 AM Host ovmsrv05 was started by admin@internal-authz.
Jan 31, 2017 10:25:52 AM Power management start of Host ovmsrv05 succeeded.
Jan 31, 2017 10:25:50 AM Executing power management status on Host ovmsrv05
using Proxy Host ovmsrv06 and Fence Agent ilo:10.4.192.212.
Jan 31, 2017 10:25:37 AM Executing power management start on Host ovmsrv05
using Proxy Host ovmsrv06 and Fence Agent ilo:10.4.192.212.
Jan 31, 2017 10:25:37 AM Power management start of Host ovmsrv05 initiated.
Jan 31, 2017 10:25:37 AM Auto fence for host ovmsrv05 was started.
Jan 31, 2017 10:25:37 AM All VMs' status on Non Responsive Host ovmsrv05
were changed to 'Down' by admin@internal-authz
Jan 31, 2017 10:25:36 AM Host ovmsrv05 was stopped by admin@internal-authz.
Jan 31, 2017 10:25:36 AM Power management stop of Host ovmsrv05 succeeded.
Jan 31, 2017 10:25:34 AM Executing power management status on Host ovmsrv05
using Proxy Host ovmsrv06 and Fence Agent ilo:10.4.192.212.
Jan 31, 2017 10:25:15 AM Executing power management stop on Host ovmsrv05
using Proxy Host ovmsrv06 and Fence Agent ilo:10.4.192.212.
Jan 31, 2017 10:25:15 AM Power management stop of Host ovmsrv05 initiated.
Jan 31, 2017 10:25:12 AM Executing power management status on Host ovmsrv05
using Proxy Host ovmsrv06 and Fence Agent ilo:10.4.192.212.
Watching the timestamps, the culprit seems to be the boot of ovmsrv05,
which detects some LUNs in owned state and other ones in unowned state.
Full messages of both hosts here:
https://drive.google.com/file/d/0BwoPbcrMv8mvekZQT1pjc0NMRlU/view?usp=sha...
and
https://drive.google.com/file/d/0BwoPbcrMv8mvcjBCYVdFZWdXTms/view?usp=sha...
At this time there are 4 LUNs globally seen by the two hosts, but only 1 of
them is currently configured as the only storage domain in the oVirt cluster.
[root@ovmsrv05 ~]# multipath -l | grep ^36
3600a0b8000299aa80000d08b55014119 dm-5 IBM ,1814 FAStT
3600a0b80002999020000cd3c5501458f dm-3 IBM ,1814 FAStT
3600a0b80002999020000ccf855011198 dm-2 IBM ,1814 FAStT
3600a0b8000299aa80000d08955014098 dm-4 IBM ,1814 FAStT
the configured one:
[root@ovmsrv05 ~]# multipath -l 3600a0b8000299aa80000d08b55014119
3600a0b8000299aa80000d08b55014119 dm-5 IBM ,1814 FAStT
size=4.0T features='0' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| |- 0:0:1:3 sdl 8:176 active undef running
| `- 2:0:1:3 sdp 8:240 active undef running
`-+- policy='service-time 0' prio=0 status=enabled
|- 0:0:0:3 sdd 8:48 active undef running
`- 2:0:0:3 sdi 8:128 active undef running
In the messages of the booting node, around the time of the problem
registered by the storage:
[root@ovmsrv05 ~]# grep owned /var/log/messages
Jan 31 10:27:38 ovmsrv05 kernel: scsi 0:0:0:1: rdac: LUN 1 (RDAC) (owned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 0:0:0:2: rdac: LUN 2 (RDAC) (owned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 0:0:0:3: rdac: LUN 3 (RDAC) (unowned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 2:0:0:1: rdac: LUN 1 (RDAC) (owned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 0:0:0:4: rdac: LUN 4 (RDAC) (unowned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 2:0:0:2: rdac: LUN 2 (RDAC) (owned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 0:0:1:1: rdac: LUN 1 (RDAC) (unowned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 2:0:0:3: rdac: LUN 3 (RDAC) (unowned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 2:0:0:4: rdac: LUN 4 (RDAC) (unowned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 0:0:1:2: rdac: LUN 2 (RDAC) (unowned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 2:0:1:1: rdac: LUN 1 (RDAC) (unowned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 0:0:1:3: rdac: LUN 3 (RDAC) (owned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 2:0:1:2: rdac: LUN 2 (RDAC) (unowned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 0:0:1:4: rdac: LUN 4 (RDAC) (owned)
Jan 31 10:27:38 ovmsrv05 kernel: scsi 2:0:1:3: rdac: LUN 3 (RDAC) (owned)
Jan 31 10:27:39 ovmsrv05 kernel: scsi 2:0:1:4: rdac: LUN 4 (RDAC) (owned)
I don't know exactly the meaning of owned/unowned in the output above...
Possibly it detects the 0:0:1:3 and 2:0:1:3 paths (those of the active
group) as "owned", and this could have created problems for the active node?
Strangely, on the active node I don't lose all the paths, but the VM has
been paused anyway:
[root@ovmsrv06 log]# grep "remaining active path" /var/log/messages
Jan 31 10:27:48 ovmsrv06 multipathd: 3600a0b8000299aa80000d08b55014119:
remaining active paths: 3
Jan 31 10:27:49 ovmsrv06 multipathd: 3600a0b8000299aa80000d08b55014119:
remaining active paths: 2
Jan 31 10:27:56 ovmsrv06 multipathd: 3600a0b8000299aa80000d08b55014119:
remaining active paths: 3
Jan 31 10:27:56 ovmsrv06 multipathd: 3600a0b8000299aa80000d08b55014119:
remaining active paths: 2
Jan 31 10:27:56 ovmsrv06 multipathd: 3600a0b8000299aa80000d08b55014119:
remaining active paths: 1
Jan 31 10:27:57 ovmsrv06 multipathd: 3600a0b8000299aa80000d08b55014119:
remaining active paths: 2
Jan 31 10:28:01 ovmsrv06 multipathd: 3600a0b8000299aa80000d08b55014119:
remaining active paths: 3
Jan 31 10:28:01 ovmsrv06 multipathd: 3600a0b8000299aa80000d08b55014119:
remaining active paths: 4
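For a live view I guess multipath -ll (which actually checks path state and priority, instead of showing the cached state like -l does) would be more telling:
multipath -ll 3600a0b8000299aa80000d08b55014119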
I'm not an expert on this storage array in particular, or on the rdac
hardware handler in general.
What I see is that multipath.conf on both nodes is:
# VDSM REVISION 1.3
defaults {
    polling_interval            5
    no_path_retry               fail
    user_friendly_names         no
    flush_on_last_del           yes
    fast_io_fail_tmo            5
    dev_loss_tmo                30
    max_fds                     4096
}

devices {
    device {
        # These settings overrides built-in devices settings. It does not apply
        # to devices without built-in settings (these use the settings in the
        # "defaults" section), or to devices defined in the "devices" section.
        # Note: This is not available yet on Fedora 21. For more info see
        # https://bugzilla.redhat.com/1253799
        all_devs                yes
        no_path_retry           fail
    }
}
Beginning of /proc/scsi/scsi:
[root@ovmsrv06 ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 01 Id: 00 Lun: 00
Vendor: HP Model: LOGICAL VOLUME Rev: 1.86
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi0 Channel: 00 Id: 00 Lun: 01
Vendor: IBM Model: 1814 FAStT Rev: 0916
Type: Direct-Access ANSI SCSI revision: 05
...
To get the default acquired config for this storage:
multipathd -k
> show config
I can see:
device {
vendor "IBM"
product "^1814"
product_blacklist "Universal Xport"
path_grouping_policy "group_by_prio"
path_checker "rdac"
features "0"
hardware_handler "1 rdac"
prio "rdac"
failback immediate
rr_weight "uniform"
no_path_retry "fail"
}
and
defaults {
verbosity 2
polling_interval 5
max_polling_interval 20
reassign_maps "yes"
multipath_dir "/lib64/multipath"
path_selector "service-time 0"
path_grouping_policy "failover"
uid_attribute "ID_SERIAL"
prio "const"
prio_args ""
features "0"
path_checker "directio"
alias_prefix "mpath"
failback "manual"
rr_min_io 1000
rr_min_io_rq 1
max_fds 4096
rr_weight "uniform"
no_path_retry "fail"
queue_without_daemon "no"
flush_on_last_del "yes"
user_friendly_names "no"
fast_io_fail_tmo 5
dev_loss_tmo 30
bindings_file "/etc/multipath/bindings"
wwids_file /etc/multipath/wwids
log_checker_err always
find_multipaths no
retain_attached_hw_handler no
detect_prio no
hw_str_match no
force_sync no
deferred_remove no
ignore_new_boot_devs no
skip_kpartx no
config_dir "/etc/multipath/conf.d"
delay_watch_checks no
delay_wait_checks no
retrigger_tries 3
retrigger_delay 10
missing_uev_wait_timeout 30
new_bindings_in_boot no
}
Any hint on how to tune multipath.conf so that a powering-on server doesn't
create problems for running VMs?
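For example, would overriding no_path_retry for this array be a reasonable direction? An untested sketch of what I mean (retry for 4 polling intervals, i.e. about 20 seconds with polling_interval 5, instead of failing I/O immediately; I'm not sure how this interacts with the all_devs override above):
devices {
    device {
        vendor "IBM"
        product "^1814"
        no_path_retry 4
    }
}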
Thanks in advance,
Gianluca
ovirt upgrade to 4.1.1: problem with ovirt-engine-notifier.service
by Kapetanakis Giannis
Hi,
CentOS 7 (not self-hosted).
Trying to upgrade from 4.1.0 to 4.1.1, the setup fails to start ovirt-engine-notifier.service:
● ovirt-engine-notifier.service - oVirt Engine Notifier
Loaded: loaded (/usr/lib/systemd/system/ovirt-engine-notifier.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-04-04 15:21:28 EEST; 30s ago
Process: 4489 ExecStart=/usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.py --redirect-output --systemd=notify $EXTRA_ARGS start (code=exited, status=1/FAILURE)
Main PID: 4489 (code=exited, status=1/FAILURE)
Apr 04 15:21:28 vmgr.cc.uoc.gr ovirt-engine-notifier.py[4489]: stderr: at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:198)
Apr 04 15:21:28 vmgr.cc.uoc.gr ovirt-engine-notifier.py[4489]: stderr: at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:363)
Apr 04 15:21:28 vmgr.cc.uoc.gr ovirt-engine-notifier.py[4489]: stderr: at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:351)
Apr 04 15:21:28 vmgr.cc.uoc.gr ovirt-engine-notifier.py[4489]: stderr: at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:93)
Apr 04 15:21:28 vmgr.cc.uoc.gr ovirt-engine-notifier.py[4489]: stderr: ... 7 more
Apr 04 15:21:28 vmgr.cc.uoc.gr ovirt-engine-notifier.py[4489]: Validation failed returncode is 1:
Apr 04 15:21:28 vmgr.cc.uoc.gr systemd[1]: ovirt-engine-notifier.service: main process exited, code=exited, status=1/FAILURE
Apr 04 15:21:28 vmgr.cc.uoc.gr systemd[1]: Failed to start oVirt Engine Notifier.
Apr 04 15:21:28 vmgr.cc.uoc.gr systemd[1]: Unit ovirt-engine-notifier.service entered failed state.
Apr 04 15:21:28 vmgr.cc.uoc.gr systemd[1]: ovirt-engine-notifier.service failed.
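I guess I could also run the ExecStart command from the unit by hand (minus the systemd bits) to capture the full stack trace on the console:
/usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.py start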
From the installation log:
2017-04-04 15:07:48 DEBUG otopi.context context._executeMethod:128 Stage closeup METHOD otopi.plugins.ovirt_engine_common.base.system.hostile_services.Plugin._closeup
2017-04-04 15:07:48 DEBUG otopi.plugins.otopi.services.systemd systemd.exists:73 check if service ovirt-engine-notifier exists
2017-04-04 15:07:48 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:813 execute: ('/bin/systemctl', 'show', '-p', 'LoadState', 'ovirt-engine-notifier.service'), executable='None', cwd='None', env=None
2017-04-04 15:07:48 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:863 execute-result: ('/bin/systemctl', 'show', '-p', 'LoadState', 'ovirt-engine-notifier.service'), rc=0
2017-04-04 15:07:48 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:921 execute-output: ('/bin/systemctl', 'show', '-p', 'LoadState', 'ovirt-engine-notifier.service') stdout:
LoadState=loaded
2017-04-04 15:07:48 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:926 execute-output: ('/bin/systemctl', 'show', '-p', 'LoadState', 'ovirt-engine-notifier.service') stderr:
2017-04-04 15:07:48 DEBUG otopi.plugins.otopi.services.systemd systemd.state:130 starting service ovirt-engine-notifier
2017-04-04 15:07:48 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:813 execute: ('/bin/systemctl', 'start', 'ovirt-engine-notifier.service'), executable='None', cwd='None', env=None
2017-04-04 15:07:49 DEBUG otopi.plugins.otopi.services.systemd plugin.executeRaw:863 execute-result: ('/bin/systemctl', 'start', 'ovirt-engine-notifier.service'), rc=1
2017-04-04 15:07:49 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:921 execute-output: ('/bin/systemctl', 'start', 'ovirt-engine-notifier.service') stdout:
2017-04-04 15:07:49 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:926 execute-output: ('/bin/systemctl', 'start', 'ovirt-engine-notifier.service') stderr:
Job for ovirt-engine-notifier.service failed because the control process exited with error code. See "systemctl status ovirt-engine-notifier.service" and "journalctl -xe" for details.
2017-04-04 15:07:49 DEBUG otopi.context context._executeMethod:142 method exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in _executeMethod
method['method']()
File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-common/base/system/hostile_services.py", line 104, in _closeup
state=True
File "/usr/share/otopi/plugins/otopi/services/systemd.py", line 141, in state
service=name,
RuntimeError: Failed to start service 'ovirt-engine-notifier'
2017-04-04 15:07:49 ERROR otopi.context context._executeMethod:151 Failed to execute stage 'Closing up': Failed to start service 'ovirt-engine-notifier'
G