What is a data center with local storage?
by Christophe TREFOIS
Hi,
When creating a new data center in oVirt 3.5, there is the option to choose between “local” and “shared” storage types.
Is there any resource out there that explains the difference between the two? The official doc does not really help there.
My current understanding is as follows:
In shared mode, I can create data domains that are shared between hosts in the same data center, e.g. NFS, iSCSI, etc.
In local mode, I can only create data domains locally, but I can “import” an existing iSCSI or Export domain to move VMs (with downtime) between data centers.
1. Is this correct or am I missing something here?
2. What would be the reason to go for a “local” storage type cluster?
Thank you very much for helping out a newcomer :)
Kind regards,
—
Christophe
import ova/ovf
by alireza sadeh seighalan
Hi everyone,
How can I import an OVF file from a server into my oVirt setup? Thanks in advance.
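For reference, one common route in oVirt 3.x is to place the VM's OVF descriptor and disks into an attached export domain and then import it from the webadmin UI. The sketch below only illustrates the on-disk layout such a domain uses; every UUID and file name here is a made-up placeholder, and the exact layout should be verified against a real export domain before copying anything into it.

```shell
#!/bin/sh
# Sketch of the oVirt 3.x export-domain directory layout that the
# "import VM" flow reads. All UUIDs and file names below are made-up
# placeholders; a real export domain is an NFS share attached to the DC.
EXPORT_ROOT=$(mktemp -d)
SD_UUID="11111111-1111-1111-1111-111111111111"   # storage-domain UUID (hypothetical)
VM_UUID="22222222-2222-2222-2222-222222222222"   # VM UUID (hypothetical)
IMG_UUID="33333333-3333-3333-3333-333333333333"  # image-group UUID (hypothetical)

# The OVF descriptor lives under master/vms/<vm_uuid>/<vm_uuid>.ovf
mkdir -p "$EXPORT_ROOT/$SD_UUID/master/vms/$VM_UUID"
printf '<ovf:Envelope/>\n' > "$EXPORT_ROOT/$SD_UUID/master/vms/$VM_UUID/$VM_UUID.ovf"

# The disk images live under images/<image_group_uuid>/
mkdir -p "$EXPORT_ROOT/$SD_UUID/images/$IMG_UUID"
: > "$EXPORT_ROOT/$SD_UUID/images/$IMG_UUID/disk-placeholder"

find "$EXPORT_ROOT" -type f
```

After the files are in place, attaching the export domain to the data center makes the VM appear under the domain's "VM Import" tab.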
Failed to create live snapshot
by mots
Hello,
I'm getting the following error when I try to create a snapshot of one VM. Snapshots of all other VMs work as expected. I'm using oVirt 3.5 on CentOS 7.
>Failed to create live snapshot 'fsbu3' for VM 'Odoo'. VM restart is recommended. Note that using the created snapshot might cause data inconsistency.
I think this is the relevant part of vdsm.log, what strikes me as odd is the line:
>Thread-1192052::ERROR::2015-11-23 17:18:20,532::vm::4355::vm.Vm::(snapshot) vmId=`581cebb3-7729-4c29-b98c-f9e04aa2fdd0`::The base volume doesn't exist: {'device': 'disk', 'domainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'volumeID': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d'}
The part "The base volume doesn't exist" seems interesting.
Also interesting is that it does create a snapshot, though I don't know if that snapshot is missing data.
Thread-1192048::DEBUG::2015-11-23 17:18:20,421::taskManager::103::Storage.TaskManager::(getTaskStatus) Entry. taskID: 21a1c403-f306-40b1-bad8-377d0265ebca
Thread-1192048::DEBUG::2015-11-23 17:18:20,421::taskManager::106::Storage.TaskManager::(getTaskStatus) Return. Response: {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::taskManager::123::Storage.TaskManager::(getAllTasksStatuses) Return: {'21a1c403-f306-40b1-bad8-377d0265ebca': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}}
Thread-1192048::INFO::2015-11-23 17:18:20,422::logUtils::47::dispatcher::(wrapper) Run and protect: getAllTasksStatuses, Return response: {'allTasksStatus': {'21a1c403-f306-40b1-bad8-377d0265ebca': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}}}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::task::1191::Storage.TaskManager.Task::(prepare) Task=`ce3d857c-45d3-4acc-95a5-79484e457fc6`::finished: {'allTasksStatus': {'21a1c403-f306-40b1-bad8-377d0265ebca': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}}}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::task::595::Storage.TaskManager.Task::(_updateState) Task=`ce3d857c-45d3-4acc-95a5-79484e457fc6`::moving from state preparing -> state finished
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::task::993::Storage.TaskManager.Task::(_decref) Task=`ce3d857c-45d3-4acc-95a5-79484e457fc6`::ref 0 aborting False
Thread-1192048::DEBUG::2015-11-23 17:18:20,422::__init__::500::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Host.getAllTasksStatuses' in bridge with {'21a1c403-f306-40b1-bad8-377d0265ebca': {'code': 0, 'message': '1 jobs completed successfully', 'taskState': 'finished', 'taskResult': 'success', 'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}}
Thread-1192048::DEBUG::2015-11-23 17:18:20,423::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,423::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,424::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192049::DEBUG::2015-11-23 17:18:20,426::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,438::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,439::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192050::DEBUG::2015-11-23 17:18:20,441::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,442::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,443::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192051::DEBUG::2015-11-23 17:18:20,445::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,529::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
Thread-1192052::DEBUG::2015-11-23 17:18:20,530::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'VM.snapshot' in bridge with {'vmID': '581cebb3-7729-4c29-b98c-f9e04aa2fdd0', 'snapDrives': [{'baseVolumeID': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'domainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'volumeID': '16f92498-c142-4330-bc0f-c96f210c379d', 'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d'}]}
JsonRpcServer::DEBUG::2015-11-23 17:18:20,530::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192052::ERROR::2015-11-23 17:18:20,532::vm::4355::vm.Vm::(snapshot) vmId=`581cebb3-7729-4c29-b98c-f9e04aa2fdd0`::The base volume doesn't exist: {'device': 'disk', 'domainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'volumeID': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d'}
Thread-1192052::DEBUG::2015-11-23 17:18:20,532::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,588::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,590::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192053::DEBUG::2015-11-23 17:18:20,590::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Volume.getInfo' in bridge with {'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'storagepoolID': '00000002-0002-0002-0002-000000000354', 'volumeID': '16f92498-c142-4330-bc0f-c96f210c379d', 'storagedomainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc'}
Thread-1192053::DEBUG::2015-11-23 17:18:20,592::task::595::Storage.TaskManager.Task::(_updateState) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::moving from state init -> state preparing
Thread-1192053::INFO::2015-11-23 17:18:20,592::logUtils::44::dispatcher::(wrapper) Run and protect: getVolumeInfo(sdUUID='b4e7425a-53c7-40d4-befc-ea36ed7891fc', spUUID='00000002-0002-0002-0002-000000000354', imgUUID='dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', volUUID='16f92498-c142-4330-bc0f-c96f210c379d', options=None)
Thread-1192053::DEBUG::2015-11-23 17:18:20,593::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc`ReqID=`e6aed3a3-c95a-4106-9a16-ad21e5db3ae7`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '3124' at 'getVolumeInfo'
Thread-1192053::DEBUG::2015-11-23 17:18:20,593::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc' for lock type 'shared'
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc' is free. Now locking as 'shared' (1 active user)
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc`ReqID=`e6aed3a3-c95a-4106-9a16-ad21e5db3ae7`::Granted request
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::task::827::Storage.TaskManager.Task::(resourceAcquired) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::_resourcesAcquired: Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc (shared)
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::task::993::Storage.TaskManager.Task::(_decref) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::ref 1 aborting False
Thread-1192053::DEBUG::2015-11-23 17:18:20,594::lvm::419::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
Thread-1192053::DEBUG::2015-11-23 17:18:20,595::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1p_storage_store1|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags b4e7425a-53c7-40d4-befc-ea36ed7891fc (cwd None)
Thread-1192053::DEBUG::2015-11-23 17:18:20,731::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
Thread-1192053::DEBUG::2015-11-23 17:18:20,734::lvm::454::Storage.LVM::(_reloadlvs) lvs reloaded
Thread-1192053::DEBUG::2015-11-23 17:18:20,734::lvm::454::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
Thread-1192053::INFO::2015-11-23 17:18:20,734::volume::847::Storage.Volume::(getInfo) Info request: sdUUID=b4e7425a-53c7-40d4-befc-ea36ed7891fc imgUUID=dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d volUUID = 16f92498-c142-4330-bc0f-c96f210c379d
Thread-1192053::DEBUG::2015-11-23 17:18:20,734::blockVolume::594::Storage.Misc.excCmd::(getMetadata) /bin/dd iflag=direct skip=40 bs=512 if=/dev/b4e7425a-53c7-40d4-befc-ea36ed7891fc/metadata count=1 (cwd None)
Thread-1192053::DEBUG::2015-11-23 17:18:20,745::blockVolume::594::Storage.Misc.excCmd::(getMetadata) SUCCESS: <err> = '1+0 records in\n1+0 records out\n512 bytes (512 B) copied, 0.000196717 s, 2.6 MB/s\n'; <rc> = 0
Thread-1192053::DEBUG::2015-11-23 17:18:20,745::misc::262::Storage.Misc::(validateDDBytes) err: ['1+0 records in', '1+0 records out', '512 bytes (512 B) copied, 0.000196717 s, 2.6 MB/s'], size: 512
Thread-1192053::INFO::2015-11-23 17:18:20,746::volume::875::Storage.Volume::(getInfo) b4e7425a-53c7-40d4-befc-ea36ed7891fc/dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d/16f92498-c142-4330-bc0f-c96f210c379d info is {'status': 'OK', 'domain': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'voltype': 'LEAF', 'description': '', 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'format': 'COW', 'image': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'ctime': '1448295499', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': '16f92498-c142-4330-bc0f-c96f210c379d', 'truesize': '1073741824', 'type': 'SPARSE'}
Thread-1192053::INFO::2015-11-23 17:18:20,746::logUtils::47::dispatcher::(wrapper) Run and protect: getVolumeInfo, Return response: {'info': {'status': 'OK', 'domain': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'voltype': 'LEAF', 'description': '', 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'format': 'COW', 'image': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'ctime': '1448295499', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': '16f92498-c142-4330-bc0f-c96f210c379d', 'truesize': '1073741824', 'type': 'SPARSE'}}
Thread-1192053::DEBUG::2015-11-23 17:18:20,746::task::1191::Storage.TaskManager.Task::(prepare) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::finished: {'info': {'status': 'OK', 'domain': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'voltype': 'LEAF', 'description': '', 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'format': 'COW', 'image': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'ctime': '1448295499', 'disktype': '2', 'legality': 'LEGAL', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': '16f92498-c142-4330-bc0f-c96f210c379d', 'truesize': '1073741824', 'type': 'SPARSE'}}
Thread-1192053::DEBUG::2015-11-23 17:18:20,746::task::595::Storage.TaskManager.Task::(_updateState) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::moving from state preparing -> state finished
Thread-1192053::DEBUG::2015-11-23 17:18:20,746::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc': < ResourceRef 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc', isValid: 'True' obj: 'None'>}
Thread-1192053::DEBUG::2015-11-23 17:18:20,746::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc'
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc' (0 active users)
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc' is free, finding out if anyone is waiting for it.
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.b4e7425a-53c7-40d4-befc-ea36ed7891fc', Clearing records.
Thread-1192053::DEBUG::2015-11-23 17:18:20,747::task::993::Storage.TaskManager.Task::(_decref) Task=`8f9455be-2f57-4b95-9d24-8db1522cbaed`::ref 0 aborting False
Thread-1192053::DEBUG::2015-11-23 17:18:20,748::__init__::500::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Volume.getInfo' in bridge with {'status': 'OK', 'domain': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'voltype': 'LEAF', 'description': '', 'parent': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'format': 'COW', 'image': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d', 'ctime': '1448295499', 'disktype': '2', 'legality': 'LEGAL', 'allocType': 'SPARSE', 'mtime': '0', 'apparentsize': '1073741824', 'children': [], 'pool': '', 'capacity': '21474836480', 'uuid': '16f92498-c142-4330-bc0f-c96f210c379d', 'truesize': '1073741824', 'type': 'SPARSE'}
Thread-1192053::DEBUG::2015-11-23 17:18:20,748::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-11-23 17:18:20,863::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-11-23 17:18:20,864::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1192054::DEBUG::2015-11-23 17:18:20,864::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Task.clear' in bridge with {'taskID': '21a1c403-f306-40b1-bad8-377d0265ebca'}
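The lvm commands in the log suggest a block (iSCSI/FC) storage domain, where each volume is an LV in the domain's volume group. A minimal triage sketch (assuming that setup) is to pull the missing base volumeID out of the error line and then check whether an LV with that name actually exists; the log excerpt below is embedded so the extraction itself is runnable as-is:

```shell
#!/bin/sh
# Extract the volumeID that VDSM says is missing, from the error line in
# vdsm.log. The sample log line is copied from the report above.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Thread-1192052::ERROR::2015-11-23 17:18:20,532::vm::4355::vm.Vm::(snapshot) vmId=`581cebb3-7729-4c29-b98c-f9e04aa2fdd0`::The base volume doesn't exist: {'device': 'disk', 'domainID': 'b4e7425a-53c7-40d4-befc-ea36ed7891fc', 'volumeID': '9a7fc7e0-60fc-4f67-9f97-2de4bc08f0a7', 'imageID': 'dfa1d0bf-a1f6-45bb-9574-ab020c0e8c9d'}
EOF
BASE_VOL=$(grep "The base volume doesn't exist" "$LOG" \
  | sed "s/.*'volumeID': '\([^']*\)'.*/\1/")
echo "missing base volume: $BASE_VOL"
# On the host you would then check for a matching LV (needs root), e.g.:
#   lvs b4e7425a-53c7-40d4-befc-ea36ed7891fc | grep "$BASE_VOL"
```

If the LV is present but VDSM still reports it missing, that points at stale metadata rather than a truly lost volume.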
[Need help] Want to write my graduate thesis related to ovirt
by John Hunter
Hi guys,
I am a college student, majoring in CS, who is going to graduate next year, and I want to work on my graduate thesis related to oVirt.
For now I need a project which I can work on for the next 6 months, so I need an idea list which I can reference to write a proposal.
I have contributed to #dri-devel during my Google Summer of Code project, so I know how open-source groups work, though workflows are kind of different.
I would appreciate it if anyone could help, and I hope someone could be my mentor.
BR,
Zhao Junwang
--
Best regards
Junwang Zhao
Department of Computer Science &Technology
Peking University
Beijing, 100871, PRC
Re: [ovirt-users] Manually preconfigured network interfaces/bridges on oVirt 3.6 HC HE
by Giuseppe Ragusa
On Mon, Nov 9, 2015, at 08:16, Sandro Bonazzola wrote:
> On Sun, Nov 8, 2015 at 9:57 PM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
>> On Tue, Nov 3, 2015, at 23:17, Giuseppe Ragusa wrote:
> On Tue, Nov 3, 2015, at 15:27, Simone Tiraboschi wrote:
> > On Mon, Nov 2, 2015 at 11:55 PM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
> >> On Mon, Nov 2, 2015, at 09:52, Simone Tiraboschi wrote:
> >>> On Mon, Nov 2, 2015 at 1:48 AM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
> >>>> Hi all,
> >>>> I'm stuck with the following error during the final phase of ovirt-hosted-engine-setup:
> >>>>
> >>>> The host hosted_engine_1 is in non-operational state.
> >>>> Please try to activate it via the engine webadmin UI.
> >>>>
> >>>> If I login on the engine administration web UI I find the corresponding message (inside NonOperational first host hosted_engine_1 Events tab):
> >>>>
> >>>> Host hosted_engine_1 does not comply with the cluster Default networks, the following networks are missing on host: 'ovirtmgmt'
> >>>>
> >>>> I'm installing with an oVirt snapshot from October the 27th on a fully-patched CentOS 7.1 host with a GlusterFS volume (3.7.5 hyperconverged, replica 3, for the engine-vm) pre-created and network interfaces/bridges (ovirtmgmt and other two bridges, called nfs and lan, on underlying 802.3ad bonds or plain interfaces) manually pre-configured in /etc/sysconfig/network-interfaces/ifcfg-* (using "classic" network service; NetworkManager disabled).
> >>>>
> >>>
> >>> If you manually created the network bridges, the match between them and the logical network should happen on name bases.
> >>
> >> Hi Simone,
> >> many thanks fpr your help (again) :)
> >>
> >> As you may note from the above comment, the name should actually match (it's exactly ovirtmgmt) but it doesn't get recognized.
> >>
> >>
> >>> If it doesn't for any reasons (please report if you find any evidence), you can manually bind logical network and network interfaces editing the host properties from the web-ui. At that point the host should become active in a few seconds.
> >>
> >>
> >> Well, the most immediate evidence are the error messages already reported (given that the bridge is actually present, with the right name and actually working).
> >> Apart from that, I find the following past logs (I don't know whether they are relevant or not):
> >>
> >> From /var/log/vdsm/connectivity.log:
> >
> >
> > Can you please add also host-deploy logs?
>
> Please find a gzipped tar archive of the whole directory /var/log/ovirt-engine/host-deploy/ at:
>
> https://onedrive.live.com/redir?resid=74BDE216CAA3E26F!110&authkey=!AIQUc...
>>
>> Since I suppose that there's nothing relevant on those logs, I'm planning to specify "net_persistence = ifcfg" in /etc/vdsm/vdsm.conf and restart VDSM on the host, then making the (still blocked) setup re-check.
>>
>>
Is there anything I should pay attention to before proceeding? (in particular while restarting VDSM)
>
>
> ^^ Dan?
I went on, and unfortunately setting "net_persistence = ifcfg" in /etc/vdsm/vdsm.conf and restarting VDSM on the host did not solve it (same error as before).
While trying (always without success) all the other steps suggested by Simone (binding the logical network and synchronizing networks from the host), I found an interesting-looking libvirt network definition (autostarted too) for vdsm-ovirtmgmt, and this recalled some memories from past mailing list messages (that I still cannot find...) ;)
Long story short: aborting setup, cleaning everything up and creating a libvirt network for each pre-provisioned bridge worked! ("net_persistence = ifcfg" has been kept for other, client-specific reasons, so I don't know whether it's needed too)
Here it is, in BASH form:
for my_bridge in ovirtmgmt bridge1 bridge2; do
cat <<- EOM > /root/my-${my_bridge}.xml
<network>
<name>vdsm-${my_bridge}</name>
<forward mode='bridge'/>
<bridge name='${my_bridge}'/>
</network>
EOM
virsh -c qemu:///system net-define /root/my-${my_bridge}.xml
virsh -c qemu:///system net-autostart vdsm-${my_bridge}
rm -f /root/my-${my_bridge}.xml
done
I was able to connect to libvirtd (which must be running for the above to work) with the "virsh" commands above by removing the VDSM-added config fragment, allowing plain TCP connections and disabling the TLS-only listener in /etc/libvirt/libvirtd.conf, and finally by removing /etc/sasl2/libvirt.conf (all these modifications must be reverted after configuring the networks and stopping libvirtd, before relaunching setup).
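The temporary libvirtd.conf tweaks described above can be sketched like this. The sketch works on a scratch copy so it can be run safely anywhere; on a real host the file is /etc/libvirt/libvirtd.conf, the starting values may differ, and (as noted) everything must be reverted afterwards:

```shell
#!/bin/sh
# Sketch of the temporary libvirtd.conf changes, applied to a scratch copy.
# The seed content below is a minimal assumed starting state, not a full
# libvirtd.conf; on a real host, edit /etc/libvirt/libvirtd.conf instead.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
listen_tls = 1
listen_tcp = 0
EOF
# Allow plain TCP connections and disable the TLS-only listener.
sed -i -e 's/^listen_tls = .*/listen_tls = 0/' \
       -e 's/^listen_tcp = .*/listen_tcp = 1/' "$CONF"
# Skip SASL authentication on the TCP listener.
echo 'auth_tcp = "none"' >> "$CONF"
cat "$CONF"
```

These settings open an unauthenticated local management channel, which is exactly why they have to be reverted before handing the host back to VDSM.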
Many thanks again for suggestions, hints etc.
Regards,
Giuseppe
>> I will report back here on the results.
>>
>> Regards,
>> Giuseppe
>>
>> > Many thanks again for your kind assistance.
>> >
>> > Regards,
>> > Giuseppe
> >> 2015-11-01 21:37:21,029:DEBUG:recent_client:True
> >> 2015-11-01 21:37:51,088:DEBUG:recent_client:False
> >> 2015-11-01 21:38:21,146:DEBUG:dropped vnet0:(operstate:up speed:0 duplex:full) dropped vnet2:(operstate:up speed:0 duplex:full) dropped vnet1:(operstate:up speed:0 duplex:full)
> >> 2015-11-01 21:38:36,174:DEBUG:recent_client:True
> >> 2015-11-01 21:39:06,233:DEBUG:recent_client:False
> >> 2015-11-01 21:48:22,383:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
> >> 2015-11-01 21:48:52,450:DEBUG:recent_client:False
> >> 2015-11-01 22:55:21,668:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
> >> 2015-11-01 22:56:00,952:DEBUG:recent_client:False, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
> >> 2015-11-01 22:58:16,215:DEBUG:new vnet0:(operstate:up speed:0 duplex:full) new vnet2:(operstate:up speed:0 duplex:full) new vnet1:(operstate:up speed:0 duplex:full)
> >> 2015-11-02 00:04:54,019:DEBUG:dropped vnet0:(operstate:up speed:0 duplex:full) dropped vnet2:(operstate:up speed:0 duplex:full) dropped vnet1:(operstate:up speed:0 duplex:full)
> >> 2015-11-02 00:05:39,102:DEBUG:new vnet0:(operstate:up speed:0 duplex:full) new vnet2:(operstate:up speed:0 duplex:full) new vnet1:(operstate:up speed:0 duplex:full)
> >> 2015-11-02 01:16:47,194:DEBUG:recent_client:True
> >> 2015-11-02 01:17:32,693:DEBUG:recent_client:True, vnet0:(operstate:up speed:0 duplex:full), lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), vnet2:(operstate:up speed:0 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), vnet1:(operstate:up speed:0 duplex:full), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
> >> 2015-11-02 01:18:02,749:DEBUG:recent_client:False
> >> 2015-11-02 01:20:18,001:DEBUG:recent_client:True
>>
> >>
>>
> >> From /var/log/vdsm/vdsm.log:
>>
> >>
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,991::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond0.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,992::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond0.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,994::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp6s0f0.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,995::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp6s0f0.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,997::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp6s0f1.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,997::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp6s0f1.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,999::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f0.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:16,999::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f0.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,001::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp7s0f0.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,001::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp7s0f0.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,003::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp7s0f1.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,003::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp7s0f1.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,005::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f1.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,006::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f1.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,007::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f2.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,008::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f2.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,009::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f3.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,010::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f3.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,014::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on ovirtmgmt.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,015::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on ovirtmgmt.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,019::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond1.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,019::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond1.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,024::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on nfs.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,024::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on nfs.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,028::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond2.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,028::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond2.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,033::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on lan.
>>
> >> Thread-98::DEBUG::2015-11-01 22:55:17,033::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on lan.
>>
> >>
>>
> >> And further down, always in /var/log/vdsm/vdsm.log:
>>
> >>
>>
> >> Thread-17::DEBUG::2015-11-02 01:17:18,747::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Host.getCapabilities' in bridge with {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:5ed1a874ff5'}], 'FC': []}, 'packages2': {'kernel': {'release': '229.14.1.el7.x86_64', 'buildtime': 1442322351.0, 'version': '3.10.0'}, 'glusterfs-rdma': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-fuse': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'spice-server': {'release': '9.el7_1.3', 'buildtime': 1444691699L, 'version': '0.12.4'}, 'librbd1': {'release': '2.el7', 'buildtime': 1425594433L, 'version': '0.80.7'}, 'vdsm': {'release': '2.gitdbbc5a4.el7', 'buildtime': 1445459370L, 'version': '4.17.10'}, 'qemu-kvm': {'release': '23.el7_1.9.1', 'buildtime': 1443185645L, 'version': '2.1.2'}, 'glusterfs': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'libvirt': {'release': '16.el7_1.4', 'buildtime': 1442325910L, 'version': '1.2.8'}, 'qemu-img': {'release': '23.el7_1.9.1', 'buildtime': 1443185645L, 'version': '2.1.2'}, 'mom': {'release': '2.el7', 'buildtime': 1442501481L, 'version': '0.5.1'}, 'glusterfs-geo-replication': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-server': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-cli': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}}, 'numaNodeDistance': {'0': [10]}, 'cpuModel': 'Intel(R) Atom(TM) CPU C2750 @ 2.40GHz', 'liveMerge': 'true', 'hooks': {'before_vm_start': {'50_hostedengine': {'md5': '2a6d96c26a3599812be6cf1a13d9f485'}}}, 'vmTypes': ['kvm'], 'selinux': {'mode': '0'}, 'liveSnapshot': 'true', 'kdumpStatus': 0, 'networks': {}, 'bridges': {'ovirtmgmt': {'addr': '172.25.10.21', 'cfg': {'AGEING': '0', 'DEFROUTE': 'no', 'IPADDR': '172.25.10.21', 'IPV4_FAILURE_FATAL': 'yes', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'STP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb37/64'], 'gateway': '', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['172.25.10.21/24'], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond0', 'vnet0'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '83', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': 
'0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.002590f1cb37', 'bridge_id': '8000.002590f1cb37', 'topology_change_timer': '0', 'ageing_time': '0', 'nf_call_ip6tables': '0', 'gc_timer': '83', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}, 'lan': {'addr': '192.168.164.218', 'cfg': {'AGEING': '0', 'IPADDR': '192.168.164.218', 'GATEWAY': '192.168.164.254', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'STP': 'off', 'DEVICE': 'lan', 'IPV4_FAILURE_FATAL': 'yes', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::a236:9fff:fe38:88cd/64'], 'gateway': '192.168.164.254', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['192.168.164.218/24', '192.168.164.216/24'[http://192.168.164.216/24%27][http://192.168.164.216/24%27%5Bhttp://192.1..., 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['vnet1', 'enp6s0f0'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '82', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.a0369f3888cd', 'bridge_id': '8000.a0369f3888cd', 'topology_change_timer': '0', 'ageing_time': '0', 'nf_call_ip6tables': '0', 'gc_timer': '82', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 
'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}, 'nfs': {'addr': '172.25.15.21', 'cfg': {'AGEING': '0', 'DEFROUTE': 'no', 'IPADDR': '172.25.15.21', 'IPV4_FAILURE_FATAL': 'yes', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'STP': 'off', 'DEVICE': 'nfs', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb35/64'], 'gateway': '', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['172.25.15.21/24', '172.25.15.203/24'[http://172.25.15.203/24%27][http://172.25.15.203/24%27%5Bhttp://172.25.15..., 'mtu': '9000', 'ipv6gateway': '::', 'ports': ['bond1', 'vnet2'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '183', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.002590f1cb35', 'bridge_id': '8000.002590f1cb35', 'topology_change_timer': '0', 'ageing_time': '0', 'nf_call_ip6tables': '0', 'gc_timer': '83', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}}, 'uuid': '2a1855a9-18fb-4d7a-b8b8-6fc898a8e827', 'onlineCpus': '0,1,2,3,4,5,6,7', 'nics': {'enp0s20f1': {'permhwaddr': '00:25:90:f1:cb:35', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 
'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'ETHTOOL_OPTS': '-K ${DEVICE} tso off ufo off gso off gro off lro off', 'DEVICE': 'enp0s20f1', 'BOOTPROTO': 'none', 'MASTER': 'bond1', 'HWADDR': '00:25:90:F1:CB:35', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:35', 'speed': 1000, 'gateway': ''}, 'enp7s0f0': {'permhwaddr': 'a0:36:9f:38:88:cf', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'ETHTOOL_OPTS': '-K ${DEVICE} tso off ufo off gso off gro off lro off', 'DEVICE': 'enp7s0f0', 'BOOTPROTO': 'none', 'MASTER': 'bond1', 'HWADDR': 'A0:36:9F:38:88:CF', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:35', 'speed': 1000, 'gateway': ''}, 'enp6s0f0': {'addr': '', 'ipv6gateway': '::', 'ipv6addrs': ['fe80::a236:9fff:fe38:88cd/64'], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'BRIDGE': 'lan', 'NM_CONTROLLED': 'no', 'ETHTOOL_OPTS': '-K ${DEVICE} tso off ufo off gso off gro off lro off', 'DEVICE': 'enp6s0f0', 'BOOTPROTO': 'none', 'HWADDR': 'A0:36:9F:38:88:CD', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': 'a0:36:9f:38:88:cd', 'speed': 100, 'gateway': ''}, 'enp6s0f1': {'permhwaddr': 'a0:36:9f:38:88:cc', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp6s0f1', 'BOOTPROTO': 'none', 'MASTER': 'bond0', 'HWADDR': 'A0:36:9F:38:88:CC', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:37', 'speed': 1000, 'gateway': ''}, 'enp7s0f1': {'permhwaddr': 'a0:36:9f:38:88:ce', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 
'DEVICE': 'enp7s0f1', 'BOOTPROTO': 'none', 'MASTER': 'bond2', 'HWADDR': 'A0:36:9F:38:88:CE', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:34', 'speed': 1000, 'gateway': ''}, 'enp0s20f0': {'permhwaddr': '00:25:90:f1:cb:34', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp0s20f0', 'BOOTPROTO': 'none', 'MASTER': 'bond2', 'HWADDR': '00:25:90:F1:CB:34', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:34', 'speed': 1000, 'gateway': ''}, 'enp0s20f3': {'permhwaddr': '00:25:90:f1:cb:37', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp0s20f3', 'BOOTPROTO': 'none', 'MASTER': 'bond0', 'HWADDR': '00:25:90:F1:CB:37', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:37', 'speed': 1000, 'gateway': ''}, 'enp0s20f2': {'permhwaddr': '00:25:90:f1:cb:36', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp0s20f2', 'BOOTPROTO': 'none', 'MASTER': 'bond2', 'HWADDR': '00:25:90:F1:CB:36', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:34', 'speed': 1000, 'gateway'
>>
> >> : ''}}, 'software_revision': '2', 'hostdevPassthrough': 'false', 'clusterLevels': ['3.4', '3.5', '3.6'], 'cpuFlags': 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,sse4_1,sse4_2,movbe,popcnt,tsc_deadline_timer,aes,rdrand,lahf_lm,3dnowprefetch,ida,arat,epb,dtherm,tpr_shadow,vnmi,flexpriority,ept,vpid,tsc_adjust,smep,erms,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:5ed1a874ff5', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.4', '3.5', '3.6'], 'autoNumaBalancing': 0, 'additionalFeatures': ['GLUSTER_SNAPSHOT', 'GLUSTER_GEO_REPLICATION', 'GLUSTER_BRICK_MANAGEMENT'], 'reservedMem': '321', 'bondings': {'bond0': {'ipv4addrs': [], 'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=balance-rr miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb37/64'], 'active_slave': '', 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'slaves': ['enp0s20f3', 'enp6s0f1'], 'hwaddr': '00:25:90:f1:cb:37', 'ipv6gateway': '::', 'gateway': '', 'opts': {'miimon': '100'}}, 'bond1': {'ipv4addrs': [], 'addr': '', 'cfg': {'BRIDGE': 'nfs', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=802.3ad xmit_hash_policy=layer2+3 miimon=100', 'DEVICE': 'bond1', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb35/64'], 'active_slave': '', 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'slaves': ['enp0s20f1', 'enp7s0f0'], 'hwaddr': '00:25:90:f1:cb:35', 'ipv6gateway': '::', 'gateway': '', 'opts': {'miimon': '100', 'mode': '4', 'xmit_hash_policy': '2'}}, 'bond2': {'ipv4addrs': 
['172.25.5.21/24'[http://172.25.5.21/24%27][http://172.25.5.21/24%27%5Bhttp://172.25.5.21/2..., 'addr': '172.25.5.21', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '172.25.5.21', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'BONDING_OPTS': 'mode=802.3ad xmit_hash_policy=layer2+3 miimon=100', 'DEVICE': 'bond2', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb34/64'], 'active_slave': '', 'mtu': '9000', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'slaves': ['enp0s20f0', 'enp0s20f2', 'enp7s0f1'], 'hwaddr': '00:25:90:f1:cb:34', 'ipv6gateway': '::', 'gateway': '', 'opts': {'miimon': '100', 'mode': '4', 'xmit_hash_policy': '2'}}}, 'software_version': '4.17', 'memSize': '16021', 'cpuSpeed': '2401.000', 'numaNodes': {'0': {'totalMemory': '16021', 'cpus': [0, 1, 2, 3, 4, 5, 6, 7]}}, 'cpuSockets': '1', 'vlans': {}, 'lastClientIface': 'ovirtmgmt', 'cpuCores': '8', 'kvmEnabled': 'true', 'guestOverhead': '65', 'version_name': 'Snow Man', 'cpuThreads': '8', 'emulatedMachines': ['pc-i440fx-rhel7.1.0', 'rhel6.3.0', 'pc-q35-rhel7.0.0', 'rhel6.1.0', 'rhel6.6.0', 'rhel6.2.0', 'pc', 'pc-q35-rhel7.1.0', 'q35', 'rhel6.4.0', 'rhel6.0.0', 'rhel6.5.0', 'pc-i440fx-rhel7.0.0'], 'rngSources': ['random'], 'operatingSystem': {'release': '1.1503.el7.centos.2.8', 'version': '7', 'name': 'RHEL'}}
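As an aside, a getCapabilities dump like the one above is much easier to read when parsed and pretty-printed; since VDSM logs it as a Python dict literal, something like the following works (the blob here is a tiny illustrative excerpt, not the full dump):

```python
import ast
from pprint import pprint

# VDSM logs the Host.getCapabilities return value as one huge Python
# dict literal; ast.literal_eval parses it safely (no code execution),
# and pprint renders one key per line.
blob = ("{'cpuModel': 'Intel(R) Atom(TM) CPU C2750 @ 2.40GHz', "
        "'cpuCores': '8', 'liveSnapshot': 'true'}")
caps = ast.literal_eval(blob)
pprint(caps)
print(caps['liveSnapshot'])  # true
```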
>>
> >>
>>
> >> Navigating the Admin web UI offered by the engine, editing (with the "Edit" button or using the corresponding context menu entry) the hosted_engine_1 host, I do not find any way to associate the logical oVirt ovirtmgmt network to the already present ovirtmgmt Linux bridge.
>>
> >> Furthermore, the "Network Interfaces" tab of the aforementioned host shows only plain interfaces and bonds (all marked with a down-pointing red arrow, even if they are actually up and running), but not the already defined Linux bridges; inside this tab I find two buttons: "Setup Host Networks" (which would allow me to drag-and-drop-associate the ovirtmgmt logical network to an already present bond, like the right one: bond0, but I avoid it, since I fear it would try to create a bridge from scratch, while it's actually present now and it already has the host address assigned on top, allowing engine-host communication at the moment) and "Sync All Networks" (which actively scares me with a threatening "Are you sure you want to synchronize all host's networks?", which I deny, since its view is already wrong and it's absolutely not clear in which direction the synchronization would go).
>>
> >>
>>
> >> So, it seems to me that either I need to perform on the host further pre-configuration steps for the ovirtmgmt bridge (beyond the ifcfg-* setup) or there is a bug in the setup/adminportal (a UI/usability bug, maybe) :)
>>
> >>
>>
> >> Many thanks again for your help.
>>
> >>
>>
> >> Kind regards,
>>
> >> Giuseppe
>>
> >>
>>
> >>
>>
> >>> When the host becomes active, you'll be able to continue with hosted-engine-setup.
>>
> >>>
>>
> >>>> I seem to recall that a preconfigured network setup on oVirt 3.6 would need something predefined on the libvirt side too (apart from usual ifcfg-* files), but I cannot find the relevant mailing list message anymore nor any other specific documentation.
>>
> >>>>
>>
> >>>>
>>
> Does anyone have any further suggestion or clue (code/docs to read)?
>>
> >>>>
>>
> >>>>
>>
> Many thanks in advance.
>>
> >>>>
>>
> >>>>
>>
> Kind regards,
>>
> >>>>
>>
> Giuseppe
>>
> >>>>
>>
> >>>>
>>
> PS: please keep also my address in replying because I'm experiencing some problems between Hotmail and oVirt-mailing-list
>>
> >>>>
>>
> _______________________________________________
>>
> >>>>
>>
> Users mailing list
>>
> >>>> Users(a)ovirt.org
>>
> >>>> http://lists.ovirt.org/mailman/listinfo/users
>>
> >>
>>
> >>
>>
_______________________________________________
>>
Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
9 years
Re: [ovirt-users] oVirt 4.0 wishlist: VDSM
by Giuseppe Ragusa
On Sat, Nov 21, 2015, at 13:59, Dan Kenigsberg wrote:
> On Fri, Nov 20, 2015 at 01:54:35PM +0100, Giuseppe Ragusa wrote:
> > Hi all,
> > I go on with my wishlist, derived from both solitary mumblings and community talks at the first Italian oVirt Meetup.
> >
> > I offer to help in coding (work/family schedules permitting) but keep in mind that I'm a sysadmin with mainly C and bash-scripting skills (but hoping to improve my less-than-newbie Python too...)
> >
> > I've sent separate wishlist messages for oVirt Node and Engine.
> >
> > VDSM:
> >
> > *) allow VDSM to configure/manage Samba, CTDB and Ganesha (specifically, I'm thinking of the GlusterFS integration); there are related wishlist items on configuring/managing Samba/CTDB/Ganesha on the Engine and on oVirt Node
>
> I'd appreciate a more detailed feature definition. Vdsm (and oVirt) try
> to configure only things that are needed for their own usage. What do you
> want to control? When? You're welcome to draft a feature page prior to
> coding the fix ;-)
I was thinking of adding CIFS/NFSv4 functionality to a hyperconverged cluster (GlusterFS/oVirt) which would have separate volumes for virtual machine storage (one volume for the Engine and one for other VMs, with no CIFS/NFSv4 capabilities offered) and for data shares (directly accessible by clients on LAN and obviously from local VMs too).
Think of it as a 3-node HA NetApp+VMware killer ;-)
The UI idea (but that would be the Engine part, I understand) was along the lines of single-check enabling CIFS and/or NFSv4 sharing for a GlusterFS data volume, then optionally adding any further specific options (hosts allowed, users/groups for read/write access, network recycle_bin etc.); global Samba (domain/workgroup membership etc.) and CTDB (IPs/interfaces) configuration parameters would be needed too.
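The per-volume sharing options sketched above could be modeled as a small data structure; the following is a purely hypothetical Python sketch (none of these option names exist in oVirt, VDSM, or GlusterFS — they only illustrate the proposed idea):

```python
# Hypothetical per-volume share options, as proposed above: a single
# enable flag per protocol plus the optional extras (hosts allowed,
# users/groups for read/write access, recycle bin). Illustrative only.
share_options = {
    "volume": "data1",
    "cifs": {
        "enabled": True,
        "hosts_allow": ["192.168.164.0/24"],
        "valid_users": ["@students"],
        "write_list": ["@teachers"],
        "recycle_bin": True,
    },
    "nfs4": {"enabled": False},
}

def enabled_protocols(opts):
    """Return which sharing protocols are switched on for a volume."""
    return [p for p in ("cifs", "nfs4") if opts.get(p, {}).get("enabled")]

print(enabled_protocols(share_options))  # ['cifs']
```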
I have no experience with an NFS-Ganesha clustered/HA configuration with GlusterFS, but (from superficially skimming the docs) it seems that it was not possible at all before 2.2 and now it needs a full Pacemaker/Corosync setup too (contrary to the IBM-GPFS-backed case), so that could be a problem.
This VDSM wishlist item was driven by the idea that all actions (and so future GlusterFS/Samba/CTDB ones too) performed by the Engine through the hosts/nodes were somehow "mediated" by VDSM and its API, but if this is not the case, then I withdraw my suggestion here and will try to pursue it only on the Engine/Node side ;)
Many thanks for your attention.
Regards,
Giuseppe
> > *) add Open vSwitch direct support (not Neutron-mediated); there are related wishlist items on configuring/managing Open vSwitch on oVirt Node and on the Engine
>
> That's on our immediate roadmap. Soon, vdsm-hook-ovs would be ready for
> testing.
>
> >
> > *) add DRBD9 as a supported Storage Domain type; there are related wishlist items on configuring/managing DRBD9 on the Engine and on oVirt Node
> >
> > *) allow VDSM to configure/manage containers (maybe extend it by use of the LXC libvirt driver, similarly to the experimental work that has been put up to allow Xen vm management); there are related wishlist items on configuring/managing containers on the Engine and on oVirt Node
> >
> > *) add a VDSM_remote mode (for lack of a better name, but mainly inspired by pacemaker_remote) to be used inside a guest by the above mentioned container support (giving to the Engine the required visibility on the managed containers, but excluding the "virtual node" from power management and other unsuitable actions)
> >
> > Regards,
> > Giuseppe
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
9 years
FOSDEM16 Virt & IaaS Devroom CFP Extension, Speaker Mentoring, CoC
by Mikey Ariel
The CFP for the Virtualization & IaaS devroom at FOSDEM 2016 is in full
swing, and we'd like to share a few updates with you:
-------------------------
Speaker Mentoring Program
-------------------------
As a part of the rising efforts to grow our communities and encourage a
diverse and inclusive conference ecosystem, we're happy to announce that
we'll be offering mentoring for newcomer speakers. Our mentors can help
you with tasks such as reviewing your abstract, reviewing your
presentation outline or slides, or practicing your talk with you.
You may apply to the mentoring program as a newcomer speaker if you:
* Never presented before or
* Presented only lightning talks or
* Presented full-length talks at small meetups (<50 ppl)
Submission guidelines:
* Mentored presentations will have 25-minute slots, where 20 minutes
will include the presentation and 5 minutes will be reserved for questions.
* The number of newcomer session slots is limited, so we will probably
not be able to accept all applications.
* You must submit your talk and abstract to apply for the mentoring
program; our mentors are volunteering their time and will happily
provide feedback but won't write your presentation for you! If you are
experiencing problems with Pentabarf, the proposal submission interface,
or have other questions, you can email iaas-virt-devroom at
lists.fosdem.org and we will try to help you.
How to apply:
* Follow the same procedure to submit an abstract in Pentabarf as
standard sessions. Instructions can be found in our original CFP
announcement:
http://community.redhat.com/blog/2015/10/call-for-proposals-fosdem16-virt...
* In addition to agreeing to video recording and confirming that you can
attend FOSDEM in case your session is accepted, please write "speaker
mentoring program application" in the "Submission notes" field, and list
any prior speaking experience or other relevant information for your
application.
Call for mentors!
Interested in mentoring newcomer speakers? We'd love to have your help!
Please email iaas-virt-devroom at lists.fosdem.org with a short speaker
bio and any specific fields of expertise (for example, KVM, OpenStack,
storage, etc) so that we can match you with a newcomer speaker from a
similar field. Estimated time investment can be as low as 5-10 hours
in total, usually distributed weekly or bi-weekly.
Never mentored a newcomer speaker but interested to try? Our mentoring
program coordinator will be happy to answer your questions and give you
tips on how to optimize the mentoring process. Email us and we'll be
happy to answer your questions!
-------------------------
CFP Deadline Extension
-------------------------
To help accommodate the newcomer speaker proposals, we have decided to
extend the deadline for submitting proposals by one week.
The new deadline is **TUESDAY, DECEMBER 8 @ midnight CET**.
-------------------------
Code of Conduct
-------------------------
Following the release of the updated code of conduct for FOSDEM[1], we'd
like to remind all speakers and attendees that all of the presentations
and discussions in our devroom are held under the guidelines set in the
CoC and we expect attendees, speakers, and volunteers to follow the CoC
at all times.
If you submit a proposal and it is accepted, you will be required to
confirm that you accept the FOSDEM CoC. If you have any questions about
the CoC or wish to have one of the devroom organizers review your
presentation slides or any other content for CoC compliance, please
email iaas-virt-devroom at lists.fosdem.org and we will do our best to
help you out.
[1] https://www.fosdem.org/2016/practical/conduct/
--
Mikey Ariel
Community Lead, oVirt
www.ovirt.org
"To be is to do" (Socrates)
"To do is to be" (Jean-Paul Sartre)
"Do be do be do" (Frank Sinatra)
Mobile: +420-702-131-141
IRC: mariel / thatdocslady
Twitter: @ThatDocsLady
9 years
Cannot setup Networks. The address of the network 'NFS' cannot be modified
by Ihor Piddubnyak
Trying to change the IP of a VLAN interface attached to the hypervisor,
I'm getting this error for vhi2:
Cannot setup Networks. The address of the network 'NFS' cannot be
modified without reinstalling the host, since this address was used to
create the host's certification.
Reinstalling the host does not help. Any clue how to do it?
--
Ihor Piddubnyak <ip(a)surftown.com>
surftown a/s
9 years
[3.6] User can't create a VM. No permission for EDIT_ADMIN_VM_PROPERTIES
by Maksim Naumov
Hello
I've run into a problem: a user can't create a VM. The user has the
PowerUserRole on the Cluster. He tried to create a VM from the base
template with no success.
Here are some lines from the log. I have no idea why it asks for the
EDIT_ADMIN_VM_PROPERTIES permission for the user:
2015-11-20 16:42:10,888 DEBUG [org.ovirt.engine.core.bll.AddVmCommand]
(default task-160) [2f0eb905] Checking whether user
'acc9ced5-a764-4d60-84d7-db4b4a498a18' or one of the groups he is member
of, have the following permissions: ID:
a303bbca-af20-4de5-9eff-01c52d3bf615 Type: VdsGroupsAction group CREATE_VM
with role type USER, ID: 00000000-0000-0000-0000-000000000000 Type:
VmTemplateAction group CREATE_VM with role type USER, ID:
a303bbca-af20-4de5-9eff-01c52d3bf615 Type: VdsGroupsAction group
EDIT_ADMIN_VM_PROPERTIES with role type ADMIN
2015-11-20 16:42:10,890 DEBUG [org.ovirt.engine.core.bll.AddVmCommand]
(default task-160) [2f0eb905] Found permission
'129c57bb-df56-4529-93d9-52db0265263f' for user when running 'AddVm', on
'Cluster' with id 'a303bbca-af20-4de5-9eff-01c52d3bf615'
2015-11-20 16:42:10,893 DEBUG [org.ovirt.engine.core.bll.AddVmCommand]
(default task-160) [2f0eb905] Found permission
'00000004-0004-0004-0004-000000000355' for user when running 'AddVm', on
'Template' with id '00000000-0000-0000-0000-000000000000'
2015-11-20 16:42:10,894 DEBUG [org.ovirt.engine.core.bll.AddVmCommand]
(default task-160) [2f0eb905] No permission found for user when running
action 'AddVm', on object 'Cluster' for action group
'EDIT_ADMIN_VM_PROPERTIES' with id 'a303bbca-af20-4de5-9eff-01c52d3bf615'.
2015-11-20 16:42:10,894 WARN [org.ovirt.engine.core.bll.AddVmCommand]
(default task-160) [2f0eb905] CanDoAction of action 'AddVm' failed for user
vincent.engel@hitmeister.de(a)hitmeister.de. Reasons:
VAR__ACTION__ADD,VAR__TYPE__VM,USER_NOT_AUTHORIZED_TO_PERFORM_ACTION
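Reading the log above, the engine checks CREATE_VM (with role type USER) on both the cluster and the template, and additionally EDIT_ADMIN_VM_PROPERTIES (with role type ADMIN) on the cluster — presumably because some field in the request counts as an admin-only VM property. A hedged sketch of that check in Python (names are illustrative, not the engine's actual code):

```python
# Illustrative model of the AddVm permission check seen in the log:
# CREATE_VM is needed on the cluster and on the template, and
# EDIT_ADMIN_VM_PROPERTIES on the cluster only when the request
# touches admin-only VM properties. Purely a sketch, not engine code.
def can_add_vm(user_action_groups, touches_admin_properties):
    required = {("cluster", "CREATE_VM"), ("template", "CREATE_VM")}
    if touches_admin_properties:
        required.add(("cluster", "EDIT_ADMIN_VM_PROPERTIES"))
    # Authorized only if the user holds every required action group.
    return required <= user_action_groups

power_user = {("cluster", "CREATE_VM"), ("template", "CREATE_VM")}
print(can_add_vm(power_user, touches_admin_properties=False))  # True
print(can_add_vm(power_user, touches_admin_properties=True))   # False
```

This would explain the failure above: a PowerUserRole grants CREATE_VM but not the ADMIN-typed action group, so the request is denied as soon as an admin-only property is involved.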
--
Maksim Naumov
Hitmeister GmbH
Softwareentwickler
Habsburgerring 2
50674 Köln
E: maksim.naumov(a)hitmeister.de
www.hitmeister.de
HRB 59046, Amtsgericht Köln
Geschäftsführer: Dr. Gerald Schönbucher
9 years
Re: [ovirt-users] Allowing a user to manage all machines in a pool
by Nicolás
Any hints to this, please?

-------- Original message --------
From: Nicolás <nicolas(a)devels.es>
Date: 20/11/2015 18:39 (GMT+00:00)
To: users(a)ovirt.org
Subject: [ovirt-users] Allowing a user to manage all machines in a pool

Hi,

We're running oVirt 3.5.3.1-1, and we're currently deploying some Pools
for students and teachers, so each has access to one machine in the
pool. Thus, each of them is granted the UserRole in the pool. Now the
teacher is asking us to allow him access to all students' VMs via the
Web GUI to evaluate their work.

Is there a permission to accomplish that? In the worst case I will
detach the VMs from the pool and grant the teacher the UserRole on each
of them, but I'd like to know if there's a "cleaner" way.

Thanks.

Regards,

Nicolás
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
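Raw archive messages, especially those sent from mobile clients, sometimes arrive as base64-encoded MIME parts; a minimal sketch of decoding such a part body with only the standard library (the sample string is illustrative):

```python
import base64

# Decode a base64-encoded text/plain MIME part body, as found in raw
# mailing-list archives. The sample input is a short illustrative
# fragment, not a complete message.
part_body = "QW55IGhpbnRzIHRvIHRoaXMsIHBsZWFzZT8="
text = base64.b64decode(part_body).decode("utf-8")
print(text)  # Any hints to this, please?
```

For full messages, `email.message_from_string` with `get_payload(decode=True)` handles the boundary and transfer-encoding bookkeeping automatically.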
9 years