Re: Ovirt Glusterfs
by Strahil
Hi Sahina,
Thanks for your reply.
Let me share my test results with Gluster v3.
I have a 3-node hyperconverged setup with a 1 Gbit/s network and SATA-based SSDs for LVM caching.
Testing the bricks directly showed performance higher than what the network can deliver.
1. Tested oVirt 4.2.7/4.2.8 with FUSE mounts, using 'dd if=/dev/zero of=<file on the default oVirt gluster mount point> bs=1M count=5000'.
Results: 56 MB/s directly on the mount point, 20±2 MB/s from within a VM.
Reads on the FUSE mount point -> 500+ MB/s.
Disabling sharding increased performance on the FUSE mount point, but brought nothing beneficial inside a VM.
Converting the bricks of a volume to 'tmpfs' does not bring any performance gain for the FUSE mount.
2. Tested oVirt 4.2.7/4.2.8 with gfapi - performance in a VM -> approx. 30 MB/s
3. Gluster native NFS (now deprecated) on oVirt 4.2.7/4.2.8 -> 120 MB/s on the mount point, 100+ MB/s in a VM
My current setup:
Storhaug + ctdb + nfs-ganesha (oVirt 4.2.7/4.2.8) -> 80±2 MB/s in the VM. Reads are around the same speed.
Sadly, I didn't have the time to test performance on Gluster v5 (oVirt 4.3.0), but I haven't noticed any performance gain for the engine.
My suspicion with FUSE is that when a gluster node also plays the role of a client, it still uses network bandwidth to communicate with itself, but I could be wrong.
According to some people on the gluster lists, the FUSE performance is expected, but my tests with sharding disabled show better performance.
Most of the time 'gtop' does not show any spikes, and iftop shows that network usage does not go over 500 Mbit/s.
As I hit some issues with the deployment on 4.2.7, I decided to stop my tests for now.
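To check whether the local FUSE client really talks to its local brick over the wire, something like this can be watched during a dd run (a sketch; the interface name is a placeholder and the brick port range is the usual gluster default, not taken from my volume info):

iftop -i <storage-iface> -f 'port 24007 or portrange 49152-49251'   # glusterd management plus typical brick ports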
Best Regards,
Strahil Nikolov
On Feb 25, 2019 09:17, Sahina Bose <sabose(a)redhat.com> wrote:
>
> The options set on the gluster volume are tuned for data consistency and reliability.
>
> Some of the changes that you can try:
> 1. Use gfapi. However, this will not provide HA if the server used to access the gluster volume is down (the backup-volfile-servers are not used in the case of gfapi). You can enable it using the engine-config tool for your cluster level.
> 2. Change remote-dio to 'enable' to turn on client-side brick caching. Ensure that you have power backup in place, so that you don't end up with data loss in case a server goes down before data is flushed.
>
> If you're seeing issues with a particular version of glusterfs, please provide gluster profile output for us to help identify the bottleneck (see https://docs.gluster.org/en/latest/Administrator%20Guide/Monitoring%20Wor... for how to do this)
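For reference, in command form those suggestions look roughly like this (a sketch; the volume name and cluster compatibility level are placeholders, and the engine restart step is my assumption, not something stated above):

engine-config -s LibgfApiSupported=true --cver=4.2   # then restart the ovirt-engine service
gluster volume set <volname> network.remote-dio enable
gluster volume profile <volname> start
gluster volume profile <volname> info                # collect this while the test load is running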
>
> On Fri, Feb 22, 2019 at 1:39 PM Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>>
>> Adding Sahina
>>
>> On Fri, Feb 22, 2019 at 06:51 Strahil <hunter86_bg(a)yahoo.com> wrote:
>>>
>>> I have done some testing and it seems that storhaug + ctdb + nfs-ganesha shows decent performance in a 3-node hyperconverged setup.
>>> FUSE mounts are hitting some kind of limit when mounting gluster-3.12.15 volumes.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> _______________________________________________
>>> Users mailing list -- users(a)ovirt.org
>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/7U5J4KYDJJS...
>>
>>
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>>
>> Red Hat EMEA
>>
>> sbonazzo(a)redhat.com
HC : JBOD or RAID5/6 for NVME SSD drives?
by Guillaume Pavese
Hi,
We have been evaluating oVirt HyperConverged for 9 months now with a test
cluster of 3 Dell hosts using hardware RAID5 on a PERC card.
We were not impressed with the performance...
There is no SSD for LV cache on these hosts, but I tried anyway with LV cache
on a RAM device. Performance was almost unchanged.
It seems that LV cache is its own source of bugs and problems anyway, so we
are thinking of going for full NVMe drives when buying the production cluster.
What would the recommendation be in that case, JBOD or RAID?
Thanks
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
Re: HostedEngine Unreachable
by Simone Tiraboschi
On Mon, Feb 25, 2019 at 2:53 PM Sakhi Hadebe <sakhi(a)sanren.ac.za> wrote:
> Hi Simone,
>
> The name resolution is working fine.
>
> Running:
> getent ahosts engine.sanren.ac.za
>
> gives the same output:
>
> 192.168.x.x STREAM engine.sanren.ac.za
> 192.168.x.x DGRAM
> 192.168.x.x RAW
> The IP address being the engine IP Address.
>
> The hosts are on the same subnet as the HostedEngine. I can ping the hosts
> from each other. There is a NATting device that NATs the private IP
> addresses to public IP addresses. From the hosts I can reach the internet,
> but I can't ping them from outside (a security measure). We are also using
> this device to VPN into the cluster.
>
> *[root@gohan ~]# virsh -r list*
> * Id Name State*
> *----------------------------------------------------*
> * 13 HostedEngine running*
>
> *[root@gohan ~]# virsh -r dumpxml HostedEngine | grep -i tlsPort
> <graphics type='spice' port='5901' tlsPort='5902' autoport='yes'
> listen='192.168.x.x' passwdValidTo='1970-01-01T00:00:01'>*
>
> *[root@gohan ~]# virsh -r vncdisplay HostedEngine*
> *192.168.x.x:0*
>
> *[root@gohan ~]# hosted-engine --add-console-password*
> *Enter password:*
>
> *You can now connect the hosted-engine VM with VNC at 192.168.x.x:5900*
>
> VNCing with the above gives me a blank black screen and a cursor (see
> attached). I can't do anything on that screen.
>
OK, I suggest powering off the engine VM with 'hosted-engine --vm-poweroff' and
then starting it again with 'hosted-engine --vm-start', in order to follow the
whole boot process over VNC.
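A minimal sketch of that sequence (the VNC client at the end is just an example; any client pointed at the address and port that hosted-engine prints will do):

hosted-engine --vm-poweroff
hosted-engine --vm-start
hosted-engine --add-console-password          # sets a temporary console password
remote-viewer vnc://192.168.x.x:5900          # or your preferred VNC client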
>
>
> Please help
>
>
>
> On Mon, Feb 25, 2019 at 11:34 AM Simone Tiraboschi <stirabos(a)redhat.com>
> wrote:
>
>>
>>
>> On Mon, Feb 25, 2019 at 10:27 AM Sakhi Hadebe <sakhi(a)sanren.ac.za> wrote:
>>
>>> Hi Simone,
>>>
>>> Thank you for your response. Executing the command below, gives this:
>>>
>>> [root@ovirt-host]# curl http://$(grep fqdn /etc/ovirt-hosted-engine/hosted-engine.conf
>>> | cut -d= -f2)/ovirt-engine/services/health
>>> curl: (7) Failed to connect to engine.sanren.ac.za:80; No route to host
>>>
>>> I tried to enable http traffic on the ovirt-host, but the error persists
>>>
>>
>> Run, on your hosts,
>> getent ahosts engine.sanren.ac.za
>>
>> and ensure that it got resolved as you wish.
>> Fix name resolution and routing on your hosts in a coherent manner.
>>
>>
>
> --
> Regards,
> Sakhi Hadebe
>
> Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR
>
> Tel: +27 12 841 2308
> Fax: +27 12 841 4223
> Cell: +27 71 331 9622
> Email: sakhi(a)sanren.ac.za
>
>
[oVirt 4.3.1 Test Day] cmdline HE Deployment
by Guillaume Pavese
HE deployment with "hosted-engine --deploy" fails at TASK
[ovirt.hosted_engine_setup : Get local VM IP].
See the following error:
2019-02-25 12:46:50,154+0100 INFO
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:109 TASK [ovirt.hosted_engine_setup : Get
local VM IP]
2019-02-25 12:55:26,823+0100 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:103 {u'_ansible_parsed': True,
u'stderr_lines': [], u'cmd': u"virsh -r net-dhcp-leases default | grep -i 00
:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", u'end': u'2019-02-25
12:55:26.666925', u'_ansible_no_log': False, u'stdout': u'', u'changed':
True, u'invocation': {u'module_args': {u'warn': True, u'executable':
None, u'_uses_shell': True, u'_raw_params': u"virsh -r net-dhcp-leases
default | grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'",
u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'std
in': None}}, u'start': u'2019-02-25 12:55:26.584686', u'attempts': 50,
u'stderr': u'', u'rc': 0, u'delta': u'0:00:00.082239', u'stdout_lines': []}
2019-02-25 12:55:26,924+0100 ERROR
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:107 fatal: [localhost]: FAILED! =>
{"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default
| grep -i 00:16:3e:3e:fe:28 | awk '{ print $5 }' | cut -f1 -d'/'", "delta":
"0:00:00.082239", "end": "2019-02-25 12:55:26.666925", "rc": 0, "start":
"2019-02-25 12:55:26.584686", "stderr": "", "stderr_lines": [], "stdout":
"", "stdout_lines": []}
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
Gluster - performance.strict-o-direct and other performance tuning in different storage backends
by Leo David
Hello Everyone,
As per some previous posts, the "performance.strict-o-direct=on" setting
caused trouble or poor VM IOPS. I've noticed that this option is still
part of the default setup, or automatically configured with the
"Optimize for virt. store" button.
In the end... is this setting a good or a bad practice for the VM
storage volume?
Does it depend (like maybe other gluster performance options) on the
storage backend:
- RAID type / JBOD
- RAID controller cache size
I am usually using JBOD disks attached to an LSI HBA card (no cache). Any
gluster recommendations regarding this setup?
Is there any documentation on best practices for configuring oVirt's
gluster for different types of storage backends?
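For reference, the values that "Optimize for virt. store" applied can be inspected and changed per volume (a sketch; the volume name is a placeholder):

gluster volume get <volname> performance.strict-o-direct
gluster volume get <volname> network.remote-dio
gluster volume set <volname> performance.strict-o-direct on    # or off, to compare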
Thank you very much !
Have a great week,
Leo
--
Best regards, Leo David
Problem deploying self-hosted engine on ovirt 4.3.0
by matteo fedeli
After several attempts I managed to install and deploy the oVirt Gluster setup, but the self-hosted engine deployment has been stuck on [INFO] TASK ["oVirt.engine-setup: Install oVirt Engine package"] for 30 minutes... How can I safely interrupt it? Or should I let it keep going while I look at some logs?
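In case it helps while waiting: the deployment writes its log under /var/log/ovirt-hosted-engine-setup/, which can be followed from another shell (a sketch; the exact file name differs per run):

tail -f /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log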
[oVirt 4.3.1 Test Day] Cockpit HE Deployment
by Guillaume Pavese
As indicated on Trello,
HE deployment through Cockpit is stuck at the beginning with "Please correct
errors before moving to next step", but no error is explicitly shown or
highlighted.
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
Unable to change cluster and data center compatibility version
by Jonathan Mathews
Good Day
I have been trying to upgrade a client's oVirt from 3.6 to 4.0, but have run
into an issue where I am unable to change the cluster and data center
compatibility version.
I get the following error in the GUI:
Ovirt: Some of the hosts still use legacy protocol which is not supported
by cluster 3.6 or higher. In order to change it a host needs to be put to
maintenance and edited in advanced options section.
This error was received with all VMs off and all hosts in maintenance.
The environment has the following currently installed:
Engine - CentOS 7.4 - Ovirt Engine 3.6.7.5
Host1 - CentOS 6.9 - VDSM 4.16.30
Host2 - CentOS 6.9 - VDSM 4.16.30
Host3 - CentOS 6.9 - VDSM 4.16.30
I also have the following from engine.log
[root@ovengine ~]# tail -f /var/log/ovirt-engine/engine.log
2018-09-22 07:11:33,920 INFO
[org.ovirt.engine.core.vdsbroker.VmsStatisticsFetcher]
(DefaultQuartzScheduler_Worker-93) [7533985f] Fetched 0 VMs from VDS
'd82a026c-31b4-4efc-8567-c4a6bdcaa826'
2018-09-22 07:11:34,685 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStoragePoolVDSCommand]
(DefaultQuartzScheduler_Worker-99) [4b7e3710] FINISH,
DisconnectStoragePoolVDSCommand, log id: 1ae6f0a9
2018-09-22 07:11:34,687 INFO
[org.ovirt.engine.core.bll.storage.DisconnectHostFromStoragePoolServersCommand]
(DefaultQuartzScheduler_Worker-99) [2a6aa6f6] Running command:
DisconnectHostFromStoragePoolServersCommand internal: true. Entities
affected : ID: 5849b030-626e-47cb-ad90-3ce782d831b3 Type: StoragePool
2018-09-22 07:11:34,706 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(DefaultQuartzScheduler_Worker-99) [2a6aa6f6] START,
DisconnectStorageServerVDSCommand(HostName = ovhost3,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='d82a026c-31b4-4efc-8567-c4a6bdcaa826',
storagePoolId='5849b030-626e-47cb-ad90-3ce782d831b3', storageType='NFS',
connectionList='[StorageServerConnections:{id='3fdffb4c-250b-4a4e-b914-e0da1243550e',
connection='172.16.0.10:/raid0/data/_NAS_NFS_Exports_/STORAGE1',
iqn='null', vfsType='null', mountOptions='null', nfsVersion='null',
nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'},
StorageServerConnections:{id='4d95c8ca-435a-4e44-86a5-bc7f3a0cd606',
connection='172.16.0.20:/data/ov-export', iqn='null', vfsType='null',
mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
iface='null', netIfaceName='null'},
StorageServerConnections:{id='82ecbc89-bdf3-4597-9a93-b16f3a6ac117',
connection='172.16.0.11:/raid1/data/_NAS_NFS_Exports_/4TB', iqn='null',
vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null',
nfsTimeo='null', iface='null', netIfaceName='null'},
StorageServerConnections:{id='29bb3394-fb61-41c0-bb5a-1fa693ec2fe2',
connection='172.16.0.11:/raid1/data/_NAS_NFS_Exports_/iso', iqn='null',
vfsType='null', mountOptions='null', nfsVersion='V3', nfsRetrans='null',
nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 48c5ffd6
2018-09-22 07:11:34,991 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(DefaultQuartzScheduler_Worker-99) [2a6aa6f6] FINISH,
DisconnectStorageServerVDSCommand, return:
{3fdffb4c-250b-4a4e-b914-e0da1243550e=0,
29bb3394-fb61-41c0-bb5a-1fa693ec2fe2=0,
82ecbc89-bdf3-4597-9a93-b16f3a6ac117=0,
4d95c8ca-435a-4e44-86a5-bc7f3a0cd606=0}, log id: 48c5ffd6
2018-09-22 07:11:56,367 WARN
[org.ovirt.engine.core.bll.UpdateVdsGroupCommand] (default task-29)
[1a31cc53] CanDoAction of action 'UpdateVdsGroup' failed for user
admin@internal. Reasons:
VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,ACTION_TYPE_FAILED_WRONG_PROTOCOL_FOR_CLUSTER_VERSION
2018-09-22 07:12:41,017 WARN
[org.ovirt.engine.core.bll.storage.UpdateStoragePoolCommand] (default
task-29) [efd285b] CanDoAction of action 'UpdateStoragePool' failed for
user admin@internal. Reasons:
VAR__TYPE__STORAGE__POOL,VAR__ACTION__UPDATE,$ClustersList
Default,ERROR_CANNOT_UPDATE_STORAGE_POOL_COMPATIBILITY_VERSION_BIGGER_THAN_CLUSTERS
2018-09-22 07:13:15,717 WARN
[org.ovirt.engine.core.bll.UpdateVdsGroupCommand] (default task-6)
[4c9f3ee8] CanDoAction of action 'UpdateVdsGroup' failed for user
admin@internal. Reasons:
VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,ACTION_TYPE_FAILED_WRONG_PROTOCOL_FOR_CLUSTER_VERSION
2018-09-22 07:15:21,460 WARN
[org.ovirt.engine.core.bll.UpdateVdsGroupCommand] (default task-28)
[649bae65] CanDoAction of action 'UpdateVdsGroup' failed for user
admin@internal. Reasons:
VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,ACTION_TYPE_FAILED_WRONG_PROTOCOL_FOR_CLUSTER_VERSION
2018-09-22 07:18:44,633 WARN
[org.ovirt.engine.core.bll.UpdateVdsGroupCommand] (default task-8)
[23167cfd] CanDoAction of action 'UpdateVdsGroup' failed for user
admin@internal. Reasons:
VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,ACTION_TYPE_FAILED_WRONG_PROTOCOL_FOR_CLUSTER_VERSION
2018-09-22 07:24:20,372 WARN
[org.ovirt.engine.core.bll.UpdateVdsGroupCommand] (default task-15)
[5d2ce633] CanDoAction of action 'UpdateVdsGroup' failed for user
admin@internal. Reasons:
VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,ACTION_TYPE_FAILED_WRONG_PROTOCOL_FOR_CLUSTER_VERSION
I would appreciate some advice in resolving this.
Thanks
Problem with snapshots in illegal status
by Bruno Rodriguez
Hello,
We are experiencing some problems with snapshots in an illegal status
generated with the Python API. I don't think I'm the only one, which is not
much of a relief, but I hope someone can help with it.
I'm a bit scared because, from what I see, the creation date in the engine
for every snapshot is way different from the date when it was really
created. The name of each snapshot is in the format
backup_snapshot_YYYYMMDD-HHMMSS, but as you can see in the following
examples, the stored date is totally random...
Size   | Creation Date             | Snapshot Description            | Status  | Disk Snapshot ID
33 GiB | Mar 2, 2018, 5:03:57 PM   | backup_snapshot_20190217-011645 | Illegal | 5734df23-de67-41a8-88a1-423cecfe7260
33 GiB | May 8, 2018, 10:02:56 AM  | backup_snapshot_20190216-013047 | Illegal | f649d9c1-563e-49d4-9fad-6bc94abc279b
10 GiB | Feb 21, 2018, 11:10:17 AM | backup_snapshot_20190217-010004 | Illegal | 2929df28-eae8-4f27-afee-a984fe0b07e7
43 GiB | Feb 2, 2018, 12:55:51 PM  | backup_snapshot_20190216-015544 | Illegal | 4bd4360e-e0f4-4629-ab38-2f0d80d3ae0f
11 GiB | Feb 13, 2018, 12:51:08 PM | backup_snapshot_20190217-010541 | Illegal | fbaff53b-30ce-4b20-8f10-80e70becb48c
11 GiB | Feb 13, 2018, 4:05:39 PM  | backup_snapshot_20190217-011207 | Illegal | c628386a-da6c-4a0d-ae7d-3e6ecda27d6d
11 GiB | Feb 13, 2018, 4:38:25 PM  | backup_snapshot_20190216-012058 | Illegal | e9ddaa5c-007d-49e6-8384-efefebb00aa6
11 GiB | Feb 13, 2018, 10:52:09 AM | backup_snapshot_20190216-012550 | Illegal | 5b6db52a-bfe3-45f8-b7bb-d878c4e63cb4
55 GiB | Jan 22, 2018, 5:02:29 PM  | backup_snapshot_20190217-012659 | Illegal | 7efe2e7e-ca24-4b27-b512-b42795c79ea4
When I go through the engine logs for the first one, to check what happened
to it, I get the following:
2019-02-17 01:16:45,839+01 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default
task-100) [96944daa-c90a-4ad7-a556-c98e66550f87] START,
CreateVolumeVDSCommand(
CreateVolumeVDSCommandParameters:{storagePoolId='fa64792e-73b3-4da2-9d0b-f334422aaccf',
ignoreFailoverLimit='false',
storageDomainId='e655abce-c5e8-44f3-8d50-9fd76edf05cb',
imageGroupId='c5cc464e-eb71-4edf-a780-60180c592a6f',
imageSizeInBytes='32212254720', volumeFormat='COW',
newImageId='fa154782-0dbb-45b5-ba62-d6937259f097', imageType='Sparse',
newImageDescription='', imageInitialSizeInBytes='0',
imageId='5734df23-de67-41a8-88a1-423cecfe7260',
sourceImageGroupId='c5cc464e-eb71-4edf-a780-60180c592a6f'}), log id:
497c168a
2019-02-17 01:18:26,506+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand]
(default task-212) [19f00d3e-5159-48aa-b3a0-615a085b62d9] START,
GetVolumeInfoVDSCommand(HostName = hood13.pic.es,
GetVolumeInfoVDSCommandParameters:{hostId='0a774472-5737-4ea2-b49a-6f0ea4572199',
storagePoolId='fa64792e-73b3-4da2-9d0b-f334422aaccf',
storageDomainId='e655abce-c5e8-44f3-8d50-9fd76edf05cb',
imageGroupId='c5cc464e-eb71-4edf-a780-60180c592a6f',
imageId='5734df23-de67-41a8-88a1-423cecfe7260'}), log id: 111a34cf
2019-02-17 01:18:26,764+01 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(default task-212) [19f00d3e-5159-48aa-b3a0-615a085b62d9] Successfully
added Download disk 'vm.example.com_Disk1' (id
'5734df23-de67-41a8-88a1-423cecfe7260') for image transfer command
'11104d8c-2a9b-4924-96ce-42ef66725616'
2019-02-17 01:18:27,310+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.AddImageTicketVDSCommand]
(default task-212) [19f00d3e-5159-48aa-b3a0-615a085b62d9] START,
AddImageTicketVDSCommand(HostName = hood11.pic.es,
AddImageTicketVDSCommandParameters:{hostId='79cbda85-35f4-44df-b309-01b57bc2477e',
ticketId='0d389c3e-5ea5-4886-8ea7-60a1560e3b2d', timeout='300',
operations='[read]', size='35433480192',
url='file:///rhev/data-center/mnt/blockSD/e655abce-c5e8-44f3-8d50-9fd76edf05cb/images/c5cc464e-eb71-4edf-a780-60180c592a6f/5734df23-de67-41a8-88a1-423cecfe7260',
filename='null'}), log id: f5de141
2019-02-17 01:22:28,898+01 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-100)
[19f00d3e-5159-48aa-b3a0-615a085b62d9] Renewing transfer ticket for
Download disk 'vm.example.com_Disk1' (id
'5734df23-de67-41a8-88a1-423cecfe7260')
2019-02-17 01:26:33,588+01 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-100)
[19f00d3e-5159-48aa-b3a0-615a085b62d9] Renewing transfer ticket for
Download disk 'vm.example.com_Disk1' (id
'5734df23-de67-41a8-88a1-423cecfe7260')
2019-02-17 01:26:49,319+01 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-6)
[19f00d3e-5159-48aa-b3a0-615a085b62d9] Finalizing successful transfer for
Download disk 'vm.example.com_Disk1' (id
'5734df23-de67-41a8-88a1-423cecfe7260')
2019-02-17 01:27:17,771+01 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-6)
[19f00d3e-5159-48aa-b3a0-615a085b62d9] EVENT_ID:
VDS_BROKER_COMMAND_FAILURE(10,802), VDSM hood11.pic.es command
TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General
Storage Exception: ("5 [] [\' Logical volume
e655abce-c5e8-44f3-8d50-9fd76edf05cb/5734df23-de67-41a8-88a1-423cecfe7260
in use.\', \' Logical volume
e655abce-c5e8-44f3-8d50-9fd76edf05cb/fa154782-0dbb-45b5-ba62-d6937259f097
in
use.\']\\ne655abce-c5e8-44f3-8d50-9fd76edf05cb/[\'5734df23-de67-41a8-88a1-423cecfe7260\',
\'fa154782-0dbb-45b5-ba62-d6937259f097\']",)',)
2019-02-17 01:27:17,772+01 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-6)
[19f00d3e-5159-48aa-b3a0-615a085b62d9] Command
'TeardownImageVDSCommand(HostName = hood11.pic.es,
ImageActionsVDSCommandParameters:{hostId='79cbda85-35f4-44df-b309-01b57bc2477e'})'
execution failed: VDSGenericException: VDSErrorException: Failed in
vdscommand to TeardownImageVDS, error = Cannot deactivate Logical Volume:
('General Storage Exception: ("5 [] [\' Logical volume
e655abce-c5e8-44f3-8d50-9fd76edf05cb/5734df23-de67-41a8-88a1-423cecfe7260
in use.\', \' Logical volume
e655abce-c5e8-44f3-8d50-9fd76edf05cb/fa154782-0dbb-45b5-ba62-d6937259f097
in
use.\']\\ne655abce-c5e8-44f3-8d50-9fd76edf05cb/[\'5734df23-de67-41a8-88a1-423cecfe7260\',
\'fa154782-0dbb-45b5-ba62-d6937259f097\']",)',)
2019-02-17 01:27:19,204+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MergeVDSCommand]
(EE-ManagedThreadFactory-commandCoordinator-Thread-6)
[e73bfbec-d6bd-40e5-8009-97e6005ad1d4] START, MergeVDSCommand(HostName =
hood11.pic.es,
MergeVDSCommandParameters:{hostId='79cbda85-35f4-44df-b309-01b57bc2477e',
vmId='addd5eba-9078-41aa-9244-fe485aded951',
storagePoolId='fa64792e-73b3-4da2-9d0b-f334422aaccf',
storageDomainId='e655abce-c5e8-44f3-8d50-9fd76edf05cb',
imageGroupId='c5cc464e-eb71-4edf-a780-60180c592a6f',
imageId='fa154782-0dbb-45b5-ba62-d6937259f097',
baseImageId='5734df23-de67-41a8-88a1-423cecfe7260',
topImageId='fa154782-0dbb-45b5-ba62-d6937259f097', bandwidth='0'}), log id:
1d956d48
2019-02-17 01:27:29,923+01 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-49)
[19f00d3e-5159-48aa-b3a0-615a085b62d9] Transfer was
successful. Download disk 'vm.example.com_Disk1' (id
'5734df23-de67-41a8-88a1-423cecfe7260')
2019-02-17 01:27:30,997+01 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-46)
[19f00d3e-5159-48aa-b3a0-615a085b62d9] Successfully
transferred disk '5734df23-de67-41a8-88a1-423cecfe7260' (command id
'11104d8c-2a9b-4924-96ce-42ef66725616')
From what I understand, there was a "Cannot deactivate Logical Volume"
problem. I would be very grateful if someone could help me figure out how to
solve it.
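A couple of read-only checks that show whether those logical volumes are still held open on the host, which is what the teardown error complains about (a sketch; the VG/LV names are taken from the log above):

lvs -o lv_name,lv_attr e655abce-c5e8-44f3-8d50-9fd76edf05cb    # an 'o' in the 6th lv_attr character means the LV is open
dmsetup info -c | grep -E '5734df23|fa154782'                  # the Open column shows how many holders each device has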
Thanks again
--
Bruno Rodríguez Rodríguez
"Si algo me ha enseñado el tetris, es que los errores se acumulan y los
triunfos desaparecen"
VM creation using python api specifying an appropriate timezone
by ckennedy288@gmail.com
Hi, I'm looking for a way to accurately specify the timezone of a VM at VM creation time.
At the moment, all VMs created by a simple in-house Python script are set to a GMT timezone, e.g.:
if ( "windows" in OS ):
vm_tz="GMT Standard Time"
else:
vm_tz="Etc/GMT"
Is there a better way to choose a more appropriate timezone based on the VM's global location?
Is there a way to get a list of oVirt-supported time zones for Windows and Linux, compare that with the VM's location and OS type, and then choose an appropriate timezone?
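For context, this is roughly the direction I'm experimenting with using the Python SDK (ovirtsdk4); the location-to-timezone mapping, connection details, and cluster/template names below are placeholders for illustration, not our real values:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Hypothetical mapping from a site/location tag to timezone names.
# Windows guests expect Windows timezone names, Linux guests tz database names.
TZ_BY_LOCATION = {
    "london":  {"windows": "GMT Standard Time",     "linux": "Europe/London"},
    "newyork": {"windows": "Eastern Standard Time", "linux": "America/New_York"},
}

def pick_tz(location, os_name):
    family = "windows" if "windows" in os_name.lower() else "linux"
    return TZ_BY_LOCATION.get(location, {}).get(family, "Etc/GMT")

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",
    username="admin@internal",
    password="secret",
    insecure=True,  # or ca_file=... in a real setup
)
vms_service = connection.system_service().vms_service()
vms_service.add(
    types.Vm(
        name="myvm",
        cluster=types.Cluster(name="Default"),
        template=types.Template(name="Blank"),
        time_zone=types.TimeZone(name=pick_tz("london", "windows")),
    )
)
connection.close()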