slow performance with export storage on glusterfs
by Jiří Sléžka
Hi,
I am trying to work out why exporting a VM to an export storage domain on
glusterfs is so slow.
I am using oVirt and RHV, both installations on version 4.1.7.
Hosts have dedicated NICs for the rhevm network (1 Gbps); the data storage itself
is on FC.
The GlusterFS cluster lives separately on 4 dedicated hosts. It has slow disks,
but I can achieve about 200-400 Mbit/s throughput in other applications (we
use it for "cold" data, mostly backups).
I am using this glusterfs cluster as the backend for the export storage domain.
When I export a VM I see only about 60-80 Mbit/s throughput.
What could be the bottleneck here?
Could it be the qemu-img utility?
vdsm 97739 0.3 0.0 354212 29148 ? S<l 15:43 0:06
/usr/bin/qemu-img convert -p -t none -T none -f raw
/rhev/data-center/2ff6d0ee-a10b-473d-b77c-be9149945f5f/ff3cd56a-1005-4426-8137-8f422c0b47c1/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
-O raw
/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/81094499-a392-4ea2-b081-7c6288fbb636/images/ba42cbcc-c068-4df8-af3d-00f2077b1e27/c57acd5f-d6cf-48cc-ad0c-4a7d979c0c1e
Any idea how to make it faster, or what throughput I should expect?
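For reference, a crude way I could check whether the gluster mount itself (rather
than qemu-img) is the limit would be a direct streaming write to the export mount,
something like this (only a sketch; the mount path is the one from the convert
command above, the test file name is made up, and oflag=direct may need to be
dropped if the FUSE mount refuses it):

  dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/10.20.30.41:_rhv__export/ddtest.img bs=1M count=1024 oflag=direct

If dd reaches the 200-400 Mbit/s I see elsewhere while qemu-img stays at
60-80 Mbit/s, then the gluster volume is probably not the bottleneck and the
convert itself (sequential, synchronous writes with -t none) is the next suspect.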
Cheers,
Jiri
Best practice for iSCSI storage domains
by Richard Chan
What is the best practice for iSCSI storage domains:
Many small targets vs a few large targets?
Specific example: if you wanted an 8 TB storage domain, would you prepare a
single 8 TB LUN or (for example) 8 x 1 TB LUNs?
--
Richard Chan
Failed Install - oVirt Node "ovirt-node-ng-installer-ovirt-4.1-2017120607"
by andreil1
Hi,
I have a strange error installing oVirt Node "ovirt-node-ng-installer-ovirt-4.1-2017120607"
on an HP ProLiant ML350 G6 with a 3.6 TB hardware RAID 5 volume, just after startup and language selection:
anaconda 21.48.22.121-1
traceback …
…./parted/disk.py, line 246 in add partition
….
partition exception: unable to satisfy all constraints on the partition
I booted this server with an openSUSE Leap 42.3 DVD; the RAID 5 volume is seen correctly by the partitioner as unformatted.
Installing CentOS 7 results in the same error when automatic partitioning is selected. Maybe for whatever reason the installer can't load the "hpsa" module?
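If it helps with debugging, I can check from the installer shell (Ctrl+Alt+F2 in
anaconda) whether the controller driver is loaded and the volume is visible at
all, something like:

  lsmod | grep hpsa
  cat /proc/partitions
  lsblk

If the RAID volume does show up there, I guess the problem is more likely leftover
partition or RAID metadata on the volume confusing anaconda than a missing hpsa
module; wiping it (destructive!) with something like "wipefs -a /dev/sda" before
retrying might be worth a try (the device name is just an example).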
How can I solve this problem?
Thanks in advance
Andrei
critical production issue for a vm
by Nathanaël Blanchet
Hi all,
I'm about to lose a very important VM. I shut down this VM for
maintenance and then moved its four disks to a newly created LUN. This
VM has 2 snapshots.
After a successful move, the VM refuses to start with this message:
Bad volume specification {u'index': 0, u'domainID':
u'961ea94a-aced-4dd0-a9f0-266ce1810177', 'reqsize': '0', u'format':
u'cow', u'bootOrder': u'1', u'discard': False, u'volumeID':
u'a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b', 'apparentsize': '2147483648',
u'imageID': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', u'specParams': {},
u'readonly': u'false', u'iface': u'virtio', u'optional': u'false',
u'deviceId': u'4a95614e-bf1d-407c-aa72-2df414abcb7a', 'truesize':
'2147483648', u'poolID': u'48ca3019-9dbf-4ef3-98e9-08105d396350',
u'device': u'disk', u'shared': u'false', u'propagateErrors': u'off',
u'type': u'disk'}.
I tried to merge the snapshots, export, clone from snapshot, copy disks,
or deactivate disks, and every action fails as soon as it touches the disks.
I began to dd the LVs to build a new VM for a standalone libvirt/kvm host;
that VM more or less boots, but it is an outdated version from before
the first snapshot. "lvs | grep 961ea94a" shows a lot of LVs, which are
presumably the disks and their snapshots. Which of them must I choose to
get the latest state of the VM before shutdown? I'm not used to dealing with
snapshots in virsh/libvirt, so some help would be much appreciated.
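What I was planning to try next, unless someone tells me it is wrong, is to
reconstruct the chain by looking at the backing file of each LV after activating
it, roughly like this (a sketch; the VG/LV names below are just the domainID and
volumeID from the error above, and the other LVs from lvs would be checked the
same way):

  lvchange -ay 961ea94a-aced-4dd0-a9f0-266ce1810177/a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b
  qemu-img info /dev/961ea94a-aced-4dd0-a9f0-266ce1810177/a0b6d5cb-db1e-4c25-aaaf-1bbee142c60b

My assumption is that the LV which no other volume references as a backing file
is the top (most recent) layer, and that is the one the standalone libvirt domain
should point at. Is that right?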
Is there some command I'm not aware of to recover this VM within oVirt?
Thank you in advance.
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
Host ovritnode1 installation failed. Command returned failure code 1 during SSH session 'root@X.X.X.X'.
by fung jacky
hi,
I encountered a problem.
I am getting an error when adding a host to ovirt-engine: Host ovritnode1
installation failed. Command returned failure code 1 during SSH session
'root(a)192.168.1.152'.
PS: the username and password are correct.
I checked the engine log; the information is as follows:
2017-12-06 18:58:52,995-05 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(VdsDeploy) [617b92e4] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509), Correlation
ID: 617b92e4, Call Stack: null, Custom ID: null, Custom Event ID: -1,
Message: Installing Host ovirtnode1. Stage: Environment setup.
2017-12-06 18:58:53,029-05 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(VdsDeploy) [617b92e4] EVENT_ID: VDS_INSTALL_IN_PROGRESS(509), Correlation
ID: 617b92e4, Call Stack: null, Custom ID: null, Custom Event ID: -1,
Message: Installing Host ovirtnode1. Stage: Environment packages setup.
2017-12-06 18:59:22,974-05 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(VdsDeploy) [617b92e4] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511),
Correlation ID: 617b92e4, Call Stack: null, Custom ID: null, Custom Event
ID: -1, Message: *Failed to install Host ovirtnode1. Yum Cannot queue
package iproute: Cannot retrieve metalink for repository:
ovirt-4.1-epel/x86_64. Please verify its path and try again.*
2017-12-06 18:59:22,999-05 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(VdsDeploy) [617b92e4] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511),
Correlation ID: 617b92e4, Call Stack: null, Custom ID: null, Custom Event
ID: -1, Message: *Failed to install Host ovirtnode1. Failed to execute
stage 'Environment packages setup': Cannot retrieve metalink for
repository: ovirt-4.1-epel/x86_64. Please verify its path and try again.*
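The failing step looks like plain yum metadata download on the host itself, so it
can probably be reproduced there outside of host-deploy with something like this
(a sketch; the repo id is the one from the error above):

  yum --disablerepo='*' --enablerepo=ovirt-4.1-epel makecache

"Cannot retrieve metalink" usually means the host cannot reach the EPEL metalink
URL (DNS, proxy or firewall issues), or that the HTTPS request fails because the
host's clock or CA bundle is off. Fixing that, or switching the EPEL repo file
from metalink= to a fixed baseurl=, should let the host installation proceed.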
How can I solve this problem? Please help me analyze it, thank you!
[ANN] oVirt 4.2.0 First Candidate Release is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the First
Candidate Release of oVirt 4.2.0, as of December 5th, 2017.
This is pre-release software. This pre-release should not be used in
production.
Please take a look at our community page[1] to learn how to ask questions
and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This update is the first candidate release of the 4.2.0 version. This
release brings more than 280 enhancements and more than one thousand bug
fixes, including more than 500 high or urgent severity fixes, on top of
the oVirt 4.1 series.
What's new in oVirt 4.2.0?
- The Administration Portal has been completely redesigned using Patternfly, a
widely adopted standard in web application design. It now features a cleaner,
more intuitive design for an improved user experience.
- There is an all-new VM Portal for non-admin users.
- A new High Performance virtual machine type has been added to the New VM
dialog box in the Administration Portal.
- Open Virtual Network (OVN) adds support for Open vSwitch software-defined
networking (SDN).
- oVirt now supports NVIDIA vGPU.
- The ovirt-ansible-roles set of packages helps users with common
administration tasks.
- Virt-v2v now supports Debian/Ubuntu-based VMs.
For more information about these and other features, check out the oVirt
4.2.0 blog post
<https://ovirt.org/blog/2017/11/introducing-ovirt-4.2.0-beta/> and stay
tuned for further blog posts.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.2 (available for x86_64 only)
See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available.
- oVirt Node is already available [4]
Additional Resources:
* Read more about the oVirt 4.2.0 release highlights:
http://www.ovirt.org/release/4.2.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.2.0/
[4] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
how to setup image-io-proxy after initially disabling it
by Gianluca Cecchi
Hello,
I'm on oVirt 4.1.7, the latest in 4.1 right now.
Initially in engine-setup, when prompted, I set the Image I/O Proxy to false:
Configure Image I/O Proxy : False
Now I would like to enable it, but if I re-run engine-setup I can't
find a way to do it. I can only confirm the settings as a whole or exit the
setup...
How can I do this?
Currently I have these packages already installed on the system
ovirt-imageio-proxy-setup-1.0.0-0.201701151456.git89ae3b4.el7.centos.noarch
ovirt-imageio-proxy-1.0.0-0.201701151456.git89ae3b4.el7.centos.noarch
and
[root@ovirt ~]# systemctl status ovirt-imageio-proxy
● ovirt-imageio-proxy.service - oVirt ImageIO Proxy
Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-proxy.service;
disabled; vendor preset: disabled)
Active: inactive (dead)
[root@ovirt ~]#
Can I simply and manually enable/start the service through systemctl
commands?
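In other words, I could just do something like this (a sketch):

  systemctl enable ovirt-imageio-proxy
  systemctl start ovirt-imageio-proxy
  systemctl status ovirt-imageio-proxy

but I suspect that alone is not the whole story, because I believe engine-setup
normally also wires the proxy up on the engine side (certificates signed by the
engine CA, the proxy address in the engine configuration, and so on), so I'd
prefer to know the supported way before going down that path.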
The same question arises for the VM Console Proxy and/or WebSocket Proxy: if I
did or did not enable them during install and later want to enable or disable
them.
Thanks,
Gianluca
Logical network setup with neutron
by Lakshmi Narasimhan Sundararajan
Hi Team,
I am looking at integrating OpenStack Neutron with oVirt.
Reading the docs so far, and through my setup experiments, I can see that
oVirt and neutron do seem to understand each other.
But I need some helpful pointers to help me understand a few items
during configuration.
1) During External Provider registration:
a) Although OpenStack Keystone currently supports v3 API endpoints, only
configuring v2 works; I see an exception otherwise. I have a feeling only
v2 auth is supported by oVirt.
b) Interface mappings.
This, I believe, is how logical networks switch/route traffic back to physical
networks. It is of the form label:interface, where the label is placed in each
host's network settings to point to the right physical interface.
I mapped the label "red" to a physical NIC when I set up host networks,
and used "red:br-red, green:br-green" here. My intention is to create a bridge
br-red on each host for this logical network and switch/route packets over the
physical NIC mapped to the "red" label on each host, with every VM attached to
the "red" logical network getting a vNIC placed on "br-red". Is my understanding
correct?
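Just to make sure we are talking about the same thing, what I expect this to
translate to on the neutron side is the usual Open vSwitch agent bridge mapping,
something along these lines (a sketch of my assumption, not taken from a working
setup):

  # /etc/neutron/plugins/ml2/openvswitch_agent.ini
  [ovs]
  bridge_mappings = red:br-red,green:br-green

with each host's physical NIC plugged into its bridge, e.g.
"ovs-vsctl add-port br-red eth1" (eth1 being a placeholder for the NIC mapped to
"red" on that host).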
2) Now I finally create a logical network using the external provider
"openstack neutron". Here there is a "Physical Network" parameter that I
totally do not understand.
If the registration were to have many interface mappings, is this a
way of pinning to the right interface?
I cannot choose "red" or "red:br-red"... I can only leave it empty.
So what is the "IP address of the Physical Network" argument in logical
network creation? The documentation says: "Optionally select the Create on
external provider check box. Select the External Provider from the drop-down
list and provide the IP address of the Physical Network". What does this field
mean?
I would appreciate some clarity and helpful pointers here.
Best regards
Re: [ovirt-users] VDSM multipath.conf - prevent automatic management of local devices
by Ben Bradley
On 23/11/17 06:46, Maton, Brett wrote:
> Might not be quite what you're after but adding
>
> # RHEV PRIVATE
>
> To /etc/multipath.conf will stop vdsm from changing the file.
Hi there. Thanks for the reply.
Yes, I am aware of that, and it seems that's what I will have to do.
I have no problem with VDSM managing the file; I just wish it didn't
automatically load local storage devices into multipathd.
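In case it is useful to anyone else reading this, the workaround I am leaning
towards is to take ownership of the file as suggested above and blacklist my
local devices explicitly, roughly like this (untested sketch; the wwid is a
placeholder for the local disk's ID as reported by
"/usr/lib/udev/scsi_id -g -u -d /dev/sda"):

  # RHEV PRIVATE   (marker near the top of the VDSM-generated /etc/multipath.conf)

  blacklist {
      wwid "<local-disk-wwid>"
  }

followed by "multipathd reconfigure"; maps that already exist for those devices
may still need an explicit "multipath -f <map>".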
I'm still not clear on the purpose of this automatic management, though.
From what I can tell it makes no difference to hosts/clusters - i.e. you
still have to add storage domains manually in oVirt.
Could anyone give any info on the purpose of this auto-management of
local storage devices into multipathd in VDSM?
Then I will be able to make an informed decision as to the benefit of
letting it continue.
Thanks, Ben
>
> On 22 November 2017 at 22:42, Ben Bradley <listsbb(a)virtx.net> wrote:
>
> Hi All
>
> I have been running ovirt in a lab environment on CentOS 7 for
> several months but have only just got around to really testing things.
> I understand that VDSM manages multipath.conf and I understand that
> I can make changes to that file and set it to private to prevent
> VDSM making further changes.
>
> I don't mind VDSM managing the file but is it possible to set to
> prevent local devices being automatically added to multipathd?
>
> Many times I have had to flush local devices from multipath when
> they are added/removed or re-partitioned or the system is rebooted.
> It doesn't even look like oVirt does anything with these devices
> once they are setup in multipathd.
>
> I'm assuming it's the VDSM additions to multipath that are causing
> this. Can anyone else confirm this?
>
> Is there a way to prevent new or local devices being added
> automatically?
>
> Regards
> Ben