Re: Super Low VM disk IO via Shared Storage
by Strahil
Why don't you try with 4096?
Most block devices have a block size of 4096, and anything below that slows them down.
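For example, the same 51 MB written in 4 KiB blocks (the count is just 51200000 / 4096 from your original test):
dd if=/dev/zero of=/tmp/test2.img bs=4096 count=12500 oflag=dsync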
Best Regards,
Strahil Nikolov
On Sep 24, 2019 17:40, Amit Bawer <abawer(a)redhat.com> wrote:
>
> have you reproduced the performance issue when checking this directly with the shared storage mount, outside the VMs?
>
> On Tue, Sep 24, 2019 at 4:53 PM Vrgotic, Marko <M.Vrgotic(a)activevideo.com> wrote:
>>
>> Dear oVirt,
>>
>>
>>
>> I have executed some tests regarding disk IO speed on VMs running on shared storage and on local storage in oVirt.
>>
>>
>>
>> Results of the tests on local storage domains:
>>
>> avlocal2:
>>
>> [root@mpollocalcheck22 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512 count=100000 oflag=dsync
>>
>> 100000+0 records in
>>
>> 100000+0 records out
>>
>> 51200000 bytes (51 MB) copied, 45.9756 s, 1.1 MB/s
>>
>>
>>
>> avlocal3:
>>
>> [root@mpollocalcheck3 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512 count=100000 oflag=dsync
>>
>> 100000+0 records in
>>
>> 100000+0 records out
>>
>> 51200000 bytes (51 MB) copied, 43.6179 s, 1.2 MB/s
>>
>>
>>
>> Results of the test on shared storage domain:
>>
>> avshared:
>>
>> [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512 count=100000 oflag=dsync
>>
>> 100000+0 records in
>>
>> 100000+0 records out
>>
>> 51200000 bytes (51 MB) copied, 283.499 s, 181 kB/s
>>
>>
>>
>> Why is it so low? Is there anything I can do to tune or configure VDSM or another service to speed this up?
>>
>> Any advice is appreciated.
>>
>>
>>
>> Shared storage is based on NetApp, with a 20 Gbps LACP path from the hypervisor to the NetApp volume, set to MTU 9000. The protocol used is NFSv4.0.
>>
>> oVirt is 4.3.4.3 SHE.
>>
>>
>>
>>
>>
>> _______________________________________________
>> Users mailing list -- users(a)ovirt.org
>> To unsubscribe send an email to users-leave(a)ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
Incremental backup using ovirt api
by smidhunraj@gmail.com
Hi,
I tried to take an incremental backup of a VM using this script.
public function downloadDiskIncremental() {
    // XML body that marks the attached disk for incremental backup.
    $xmlStr = "<disk_attachment>
                   <disk id='1371afdd-91d4-4bc7-9792-0efbf6bbd1c9'>
                       <backup>incremental</backup>
                   </disk>
               </disk_attachment>";
    // Parameters for our curl wrapper; the call that actually sends the
    // request is omitted in this excerpt.
    $curlParam = array(
        "url"    => "vms/4044e014-7e20-4dbc-abe5-64690ec45f63/diskattachments",
        "method" => "POST",
        "data"   => $xmlStr,
    );
}
But it is throwing an error:
Array ( [status] => error [message] => For correct usage, see: https://ovirt.bobcares.com/ovirt-engine/api/v4/model#services/disk-attach...
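From the usage link in the error, my guess is that POSTing to the diskattachments collection tries to create a new attachment; since the disk is already attached, updating the existing attachment with PUT may be what the service expects. An untested sketch with the same ids (the engine FQDN and password are placeholders):
curl -k -u 'admin@internal:PASSWORD' -X PUT -H 'Content-Type: application/xml' \
  -d '<disk_attachment><disk><backup>incremental</backup></disk></disk_attachment>' \
  'https://ENGINE_FQDN/ovirt-engine/api/vms/4044e014-7e20-4dbc-abe5-64690ec45f63/diskattachments/1371afdd-91d4-4bc7-9792-0efbf6bbd1c9'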
Please help me with this issue....
Re: [ANN] oVirt 4.3.6 is now generally available
by Strahil
I have upgraded from the engine, and after the reboot the engine knows that I have no more updates pending.
This is my flow:
0.Engine upgrade
1. Host is moved into maintenance (from the UI) - no tick for stopping gluster. This automatically evacuates all VMs.
2. Use UI to check for updates
3. Update host (keep tick for autoreboot)
4. Wait for the host to come up, then do a general system & gluster health check
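For the gluster part of that check, something like the following (the volume name is only an example) confirms the peers are connected and nothing is left to heal:
gluster peer status
gluster volume heal enginevol info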
Best Regards,
Strahil Nikolov
Re: Migrate running hosts to new Engine
by Strahil
I would go with fixing your current engine rather than trying to migrate without downtime.
Have you tried to create a backup, then shut down the current engine and restore from that backup?
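Roughly, with placeholder file names (check the exact options against your version's engine-backup documentation):
engine-backup --mode=backup --scope=all --file=engine.bak --log=backup.log
# then, on the freshly installed engine machine:
engine-backup --mode=restore --file=engine.bak --log=restore.log --provision-db --restore-permissions
The restore also brings back the VM configurations mentioned in the P.S. below, since they live in the engine database.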
P.S.: Before doing anything, try to find the xml configuration of each VM in the past vdsm configs. As that data is only in the engine, you won't be able to power up any machine without that configuration.
Best Regards,
Strahil Nikolov
On Sep 27, 2019 16:11, Magnus Isaksson <magnus(a)vmar.se> wrote:
>
> Hello all,
>
> I need to migrate 4 hosts running CentOS with oVirt 4.2 to a new Engine that runs 4.3 without disrupting the VMs on these hosts. The VMs need to stay running; I have a slim window for downtime, but would rather avoid it.
>
> The reason is that my current engine 4.2 will not upgrade to 4.3. I have tried so many times, but all I get is a white page after the upgrade, which, according to engine-setup, was a success.
>
> So, how can I do this?
>
> Regards
> Magnus Isaksson
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/7WLV5VF6BPD...
Re: NFS based - incremental backups
by Strahil
Geo-replication requires master and slave volumes, but I'm not sure if the slave volume should also be replica 3.
Maybe in your case it could be replica 1 or something like that.
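Setting up a session looks roughly like this (the volume and host names are only placeholders, and the ssh/pem key exchange has to be prepared beforehand):
gluster volume geo-replication mastervol backuphost::slavevol create push-pem
gluster volume geo-replication mastervol backuphost::slavevol start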
In KVM I would create a snapshot, back it up, and then merge all disks back into one (a.k.a. block commit).
So a simple script with a user for the API can allow you to do that without paying anything extra to a 3rd party.
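On plain KVM that cycle looks roughly like this (the VM name, disk target and paths are only examples):
virsh snapshot-create-as myvm backup-snap --disk-only --atomic
cp /var/lib/libvirt/images/myvm.qcow2 /backup/myvm-$(date +%F).qcow2
virsh blockcommit myvm vda --active --pivot
While the disk-only snapshot is active, new writes go to the overlay, so the base image can be copied safely; the block commit then merges the overlay back.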
Best Regards,
Strahil Nikolov
On Sep 27, 2019 08:25, Leo David <leoalex(a)gmail.com> wrote:
>
> Hello Everyone,
> I've been struggling for a while to find a proper solution for a full backup scenario for a production environment.
> In the past, we have used Proxmox, and scheduled incremental NFS-based full VM backups are a thing that we really miss.
> As far as I know, at this point the only way to have a backup in oVirt / RHEV is by using the gluster geo-replication feature.
> This is nice, but as far as I know it lacks some important features:
> - the ability to have incremental backups to restore VMs from
> - the ability to back up VMs placed on different storage domains (only one storage domain can be geo-replicated!!! Some VMs have disks on an SSD volume, some on HDD, some on both)
> - the need to set up an external 3-node gluster cluster (although a workaround would be to have single-brick-based volumes on a single instance)
> I know we can create snaps, but they will die with the platform in a failure scenario, and they cannot be scheduled either.
> We have found the Bacchus project, which looked promising, although it had a pretty convoluted way to achieve backups (create snap, create VM from the snap, export VM to the export domain, delete VM, delete snap - all in a scheduled fashion).
> As a mention, Proxmox incrementally creates a tar archive of the VM disk content and places it on external network storage like NFS. This allowed us to back up/restore both Linux and Windows VMs very easily.
> Now, I know this has been discussed before, but I would like to know if there are at least any plans to implement this feature in the next releases.
> Personally, I consider this a major and quite essential feature to have in the platform, without the need to pay for 3rd-party solutions that may or may not achieve the goal while adding extra pieces to the stack. Geo-replication is a good and nice feature, but in my opinion it is not what a "backup domain" would be.
> Have a nice day,
>
> Leo
>
>
engine-iso-uploader error
by Fabrice SOLER
Hello,
I use engine-iso-uploader with ssh (and not NFS). It was working but now
we have an error.
I have tried two commands :
engine-iso-uploader -v --iso-domain=lp-ducharmoy-iso upload
eole-2.6.2.1-alternate-amd64.iso
and
engine-iso-uploader --iso-domain=lp-ducharmoy-iso list
*and the error is:*
ERROR: Fault reason is "Operation Failed". Fault detail is "Can't find
storage server connection for id
'7779c519-9f79-4339-81e0-f89f5e9f08b2'.". HTTP response code is 500.
INFO: Use the -h option to see usage.
I have read /var/log/httpd/ssl_access_log, where we can see that the storagedomains API call fails:
192.168.83.83 - - [27/Sep/2019:12:01:06 -0400] "POST
/ovirt-engine/sso/oauth/token HTTP/1.1" 200 310
192.168.83.83 - - [27/Sep/2019:12:01:07 -0400] "POST
/ovirt-engine/sso/oauth/token-info HTTP/1.1" 200 323
192.168.83.83 - - [27/Sep/2019:12:01:07 -0400] "GET /ovirt-engine/api
HTTP/1.1" 200 975
192.168.83.83 - - [27/Sep/2019:12:01:07 -0400] "GET
/ovirt-engine/api/storagedomains HTTP/1.1" *500* 188
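The failing call can be reproduced outside the tool, which rules out engine-iso-uploader itself (the engine FQDN and password here are placeholders):
curl -k -u 'admin@internal:PASSWORD' 'https://ENGINE_FQDN/ovirt-engine/api/storagedomains'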
I have changed the admin password; do you think the error could come from this?
With verbose :
[root@infra-eple iso]# engine-iso-uploader -v
--iso-domain=lp-ducharmoy-iso list
Please provide the REST API password for the admin@internal oVirt Engine
user (CTRL+D to abort):
DEBUG: API Vendor(None) API Version(4.2.0)
ERROR: Fault reason is "Operation Failed". Fault detail is "Can't find
storage server connection for id
'7779c519-9f79-4339-81e0-f89f5e9f08b2'.". HTTP response code is 500.
INFO: Use the -h option to see usage.
DEBUG: Configuration:
DEBUG: command: list
DEBUG: Traceback (most recent call last):
DEBUG: File "/usr/bin/engine-iso-uploader", line 1521, in <module>
DEBUG: isoup = ISOUploader(conf)
DEBUG: File "/usr/bin/engine-iso-uploader", line 448, in __init__
DEBUG: self.list_all_ISO_storage_domains()
DEBUG: File "/usr/bin/engine-iso-uploader", line 559, in
list_all_ISO_storage_domains
DEBUG: domainAry = svc.storage_domains_service().list()
DEBUG: File
"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 21936,
in list
DEBUG: return self._internal_get(headers, query, wait)
DEBUG: File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py",
line 211, in _internal_get
DEBUG: return future.wait() if wait else future
DEBUG: File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py",
line 55, in wait
DEBUG: return self._code(response)
DEBUG: File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py",
line 208, in callback
DEBUG: self._check_fault(response)
DEBUG: File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py",
line 132, in _check_fault
DEBUG: self._raise_error(response, body)
DEBUG: File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py",
line 118, in _raise_error
DEBUG: raise error
DEBUG: Error: Fault reason is "Operation Failed". Fault detail is "Can't
find storage server connection for id
'7779c519-9f79-4339-81e0-f89f5e9f08b2'.". HTTP response code is 500.
Sincerely,
Fabrice SOLER
Migrate running hosts to new Engine
by Magnus Isaksson
Hello all,
I need to migrate 4 hosts running CentOS with oVirt 4.2 to a new Engine that runs 4.3 without disrupting the VMs on these hosts. The VMs need to stay running; I have a slim window for downtime, but would rather avoid it.
The reason is that my current engine 4.2 will not upgrade to 4.3. I have tried so many times, but all I get is a white page after the upgrade, which, according to engine-setup, was a success.
So, how can I do this?
Regards
Magnus Isaksson