Re: ConnectStoragePoolVDS failed
by Strahil
There were some issues with the migration.
Check that all files/directories are owned by vdsm:kvm.
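A quick way to check and fix this from the host (a sketch only; the mount path is a placeholder, adjust it to your storage domain):
# List anything under the domain mount that is not owned by vdsm:kvm:
find /rhev/data-center/mnt/<your_storage_domain> \( ! -user vdsm -o ! -group kvm \)
# Only if the listing shows wrong ownership, fix it recursively:
chown -R vdsm:kvm /rhev/data-center/mnt/<your_storage_domain>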
Best Regards,
Strahil Nikolov
5 years, 10 months
qemu-img info showed iSCSI/FC LUN size 0
by jingjie.jiang@oracle.com
Hi,
On oVirt 4.3.0, I have a data domain backed by an FC LUN, and I created a new VM with a disk on that FC data domain.
After the VM was created, qemu-img info reported the disk size as 0.
# qemu-img info /rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/8d3b455b-1da4-49f3-ba57-8cda64aa9dc9/949fa315-3934-4038-85f2-08aec52c1e2b
image: /rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/8d3b455b-1da4-49f3-ba57-8cda64aa9dc9/949fa315-3934-4038-85f2-08aec52c1e2b
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 0
I tried the same with iSCSI and got the same result.
Is this behaviour expected?
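For completeness, as far as I understand the "disk size" field comes from the allocated blocks reported by stat(), which is 0 for a block device, so on a block domain the allocation is better checked at the LVM layer. A sketch (assuming the storage domain UUID is also the VG name and the volume UUID is the LV name, as in the path above):
# Show the LV backing this disk and its size:
lvs -o lv_name,lv_size,lv_attr \
    eaa6f641-6b36-4c1d-bf99-6ba77df3156f/949fa315-3934-4038-85f2-08aec52c1e2b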
Thanks,
Jingjie
5 years, 10 months
ovirt and blk-mq
by Fabrice Bacchella
When checking block device configuration on an oVirt setup using a SAN, I found this line:
dm/use_blk_mq:0
Has anyone tried it, by adding this to the kernel command line:
dm_mod.use_blk_mq=y
I'm not sure, but it might improve performance on multipath, even on spinning rust.
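For anyone who wants to experiment, a sketch of how I would check and enable it on CentOS/RHEL 7 (untested on my side):
# Current value for a given device-mapper device (0 = off, 1 = on):
cat /sys/block/dm-0/dm/use_blk_mq
# To enable it at boot, append dm_mod.use_blk_mq=y to GRUB_CMDLINE_LINUX
# in /etc/default/grub, then regenerate the grub config and reboot:
grub2-mkconfig -o /boot/grub2/grub.cfg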
5 years, 10 months
oVirt 4.3.2 Second Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.2 Second Release Candidate, as of March 13th, 2019.
This update is a release candidate of the second in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Guest Tools with new oVirt Windows Guest Agent is available
- oVirt Appliance is already available
- oVirt Node is already available [2]
Additional Resources:
* Read more about the oVirt 4.3.2 release highlights:
http://www.ovirt.org/release/4.3.2/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.2/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
5 years, 10 months
oVirt 4.3.1 cannot set host to maintenance
by Strahil Nikolov
Hi Community,
I have tried to download my OVF_STORE images that were damaged on the shared storage, but the task failed. As a result, I cannot set any host into maintenance via the UI. I found bug 1586126 ("After upgrade to RHV 4.2.3, hosts can no longer be set into maintenance mode"), but my case is different: the OVF_STORE update has already failed, so it should not block that operation.
Here are 2 screenshots (posted on Imgur).
How can I recover from that situation?
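For reference, here is the REST API equivalent of the Maintenance button that I could also try (a sketch; the engine URL, password and host id are placeholders):
# Ask the engine to move the host to maintenance ("deactivate" action):
curl -ks -u "admin@internal:${PASSWORD}" \
     -H "Content-Type: application/xml" -d "<action/>" \
     -X POST "https://engine.example.com/ovirt-engine/api/hosts/<host-uuid>/deactivate"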
Best Regards,
Strahil Nikolov
5 years, 10 months
iSCSI domain creation ; nothing happens
by Guillaume Pavese
My setup: oVirt 4.3.1 HC on CentOS 7.6, everything up to date.
I am trying to create a new iSCSI domain. It is a new LUN/target created on a
Synology bay, with no CHAP (I tried with CHAP too, but that does not help).
I first entered the Synology's address and clicked Discover.
I saw the existing targets and clicked on the arrow on the right. I then
got the following error:
"Error while executing action: Failed to setup iSCSI subsystem"
In the host's logs, I get:
conn 0 login rejected: initiator error (02/00)
Connection1:0 to [target:
iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a, portal:
10.199.9.16,3260] through [iface: default] is shutdown.
In the engine logs, I get:
2019-03-12 14:33:35,504+01 INFO
[org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand]
(default task-24) [85d27833-b5d5-4bc8-b43c-88d980c30333] Running command:
ConnectStorageToVdsCommand
internal: false. Entities affected : ID:
aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_STORAGE_DOMAIN with role type ADMIN
2019-03-12 14:33:35,511+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-24) [85d27833-b5d5-4bc8-b43c-88d980c30333] START,
ConnectStorageServerVDSCommand(Host
Name = ps-inf-int-kvm-fr-305-210.hostics.fr,
StorageServerConnectionManagementVDSParameters:{hostId='6958c4f7-3716-40e4-859a-bfce2f6dbdba',
storagePoolId='00000000-0000-0000-0000-000000000000', storageType='
ISCSI', connectionList='[StorageServerConnections:{id='null',
connection='10.199.9.16',
iqn='iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a',
vfsType='null', mountOptions='null', nfsVersion='nul
l', nfsRetrans='null', nfsTimeo='null', iface='null',
netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 7f36d8a9
2019-03-12 14:33:36,302+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-24) [85d27833-b5d5-4bc8-b43c-88d980c30333] FINISH,
ConnectStorageServerVDSCommand, re
turn: {00000000-0000-0000-0000-000000000000=465}, log id: 7f36d8a9
2019-03-12 14:33:36,310+01 ERROR
[org.ovirt.engine.core.bll.storage.connection.ISCSIStorageHelper] (default
task-24) [85d27833-b5d5-4bc8-b43c-88d980c30333] The connection with details
'00000000-0000-0000-0000-000000000000' failed because of error code '465'
and error message is: failed to setup iscsi subsystem
2019-03-12 14:33:36,315+01 ERROR
[org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand]
(default task-24) [85d27833-b5d5-4bc8-b43c-88d980c30333] Transaction
rolled-back for command
'org.ovirt.engine.core.bll.storage.connection.ConnectStorageToVdsCommand'.
2019-03-12 14:33:36,676+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
(default task-24) [70251a16-0049-4d90-a67c-653b229f7639] START,
GetDeviceListVDSCommand(HostName = ps-inf-int-kvm-fr-305-210.hostics.fr,
GetDeviceListVDSCommandParameters:{hostId='6958c4f7-3716-40e4-859a-bfce2f6dbdba',
storageType='ISCSI', checkStatus='false', lunIds='null'}), log id: 539fb345
2019-03-12 14:33:36,995+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
(default task-24) [70251a16-0049-4d90-a67c-653b229f7639] FINISH,
GetDeviceListVDSCommand, return: [], log id: 539fb345
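For what it's worth, here is the manual check that can be run from the host to see whether the target itself rejects the login; the "initiator error" often means the initiator name or CHAP settings are refused on the target side. Portal and IQN are taken from the logs above:
# Discover targets on the portal and attempt a manual login:
iscsiadm -m discovery -t sendtargets -p 10.199.9.16:3260
iscsiadm -m node -T iqn.2000-01.com.synology:SVC-STO-FR-301.Target-2.2dfed4a32a \
         -p 10.199.9.16:3260 --login
# The initiator name this host presents, which must be on the Synology
# target's permitted-initiators list:
cat /etc/iscsi/initiatorname.iscsi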
Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group
5 years, 10 months
oVirt Performance (Horrific)
by drew.rash@gmail.com
Hi everybody, my coworker and I have some decent hardware that would make great single servers, and we threw in a 10Gb switch with 2x 10Gbps cards in 4 boxes.
We have 2 excellent boxes: Supermicro board (SpeedStep disabled) with a 14-core Intel i9-7940X, 128 GB RAM (3200 MHz), a 1 TB M.2 Samsung 870 EVO, a 1 TB Samsung SSD, one 8 TB WD Gold and one 6 TB WD Gold.
Then we have 2 boxes (one with an 8-core i7-9700K, one with a 6-core i7-8700), 128 GB RAM in one and 64 GB in the other, all 3000 MHz, with the same 1 TB SSD, 6 TB WD Gold and 8 TB WD Gold drives as the other boxes, plus 10 Gbps cards.
Our problem is performance. We used the slower boxes for KVM (libvirt) and FreeNAS at first, which was great performance-wise. Then we bought the new Supermicro boxes, converted to oVirt + Gluster, and did some basic write tests using dd, writing zeros to files from 1 GB up to 50 GB; we were happy with the numbers writing directly to the gluster volume. But then we stuck a Windows VM on it and turned it on... I'll stop there, because turning it on stopped any performance testing. It was so slow that the oVirt guest agent sometimes doesn't even start, along with the MS SQL Server engine, and there are other errors.
So naturally, we removed Gluster from the equation. We took one of the 8 TB WD Gold drives, made it a Linux NFS share, and gave it to oVirt as an NFS domain to put VMs on; just a single drive. We migrated the disk with the fresh Windows 10 installation to it, configured as VirtIO-SCSI, and booted the VM with 16 GB RAM and 8:1:1 CPUs. To our surprise it was still terrible. I ran "winsat disk -drive c" for example purposes: the SPICE viewer kept freezing, and I had Resource Monitor open watching 10,000 ms disk response times. I ended up rebooting because the results disappeared (I hadn't run it as administrator). Opening a command prompt was painful while the disk was still in use, Task Manager rendered with no text on it, the disk was writing at about 1 MB/s, and when the command prompt finally showed up it was blank, with the cursor offset and no text anywhere.
As for the reboot: shutting down took 2 minutes, booting took roughly 6 minutes 30 seconds, and logging in took over a minute.
So 9-10 minutes to reboot and log back into a fresh Windows install, then 2 minutes to open a command prompt, Task Manager and Resource Monitor.
During the write test, disk I/O on the VM was under 8 MB/s (from the graph it looks like about 6 MB/s), network traffic averaged around 20 Mbps, CPU was near zero, with a couple of spikes up to 30 MB/s on the disk. I ran the same test on my own machine's disk and it finished in under a minute; on the VM it was still running after 30 minutes, and I don't even see the writes in Resource Monitor. Windows was also doing a bunch of random app updates (Candy Crush and such) on the fresh install. After hitting Enter a few times on the prompt, it moved on to the flush-seq phase and showed up in Resource Monitor doing something again, at roughly 1 MB/s.
I think something went wrong with the reporting, because no individual test shows more than about 2 minutes passing, yet the total is 37 minutes. And at no time did the Windows resource graphs, or any of the oVirt node system graphs, show more than about 6 MB/s, and definitely not the 50 MB/s or 1 GB/s that winsat reports below; those numbers look like flat-out lies.
C:\Windows\system32>winsat disk -drive c
Windows System Assessment Tool
> Running: Feature Enumeration ''
> Run Time 00:00:00.00
> Running: Storage Assessment '-drive c -ran -read'
> Run Time 00:00:12.95
> Running: Storage Assessment '-drive c -seq -read'
> Run Time 00:00:20.59
> Running: Storage Assessment '-drive c -seq -write'
> Run Time 00:02:04.56
> Run Time 00:02:04.56
> Running: Storage Assessment '-drive c -flush -seq'
> Run Time 00:01:02.75
> Running: Storage Assessment '-drive c -flush -ran'
> Run Time 00:01:50.20
> Dshow Video Encode Time 0.00000 s
> Dshow Video Decode Time 0.00000 s
> Media Foundation Decode Time 0.00000 s
> Disk Random 16.0 Read 5.25 MB/s 5.1
> Disk Sequential 64.0 Read 1220.56 MB/s 8.6
> Disk Sequential 64.0 Write 53.61 MB/s 5.5
> Average Read Time with Sequential Writes 22.994 ms 1.9
> Latency: 95th Percentile 85.867 ms 1.9
> Latency: Maximum 325.666 ms 6.5
> Average Read Time with Random Writes 29.548 ms 1.9
> Total Run Time 00:37:35.55
I even ran it again and got the exact same results. Next I tried copying a file from a 1 Gbps network location (backed by an SSD) to this VM: a 4 GB CentOS 7 ISO to the desktop. It started at 20 MB/s, went up to 90 MB/s, then dropped after a couple of gigabytes; with 2.25 GB to go it was running at 2.7 MB/s, with fluctuations up to 5 MB/s. To drive the point home, at the same time I copied the same file off that server to another server with SSD disks, and it ran at 100 MB/s, which is what I'd expect over a 1 Gbps network.
All that said, we do have an SSD Gluster 2+1 arbiter volume (which seemed fastest when we tested different variations) on the 1 TB SSDs. I was able to read from the array inside a VM at 550 MB/s, which is expected for an SSD. We also did dd writing zeros and got about 550 MB/s from the oVirt node. But inside a VM the best we get is around ~10 MB/s writing.
We did basically the same testing using Windows Server 2016: booting is terrible, opening applications is terrible. With SQL Server running off the SSD Gluster volume I can read at 550 MB/s, but writing is horrific, somewhere around 2-10 MB/s.
Latency between the nodes with ping is around 100 µs. The hardware should be able to do 200 MB/s on the HDDs and 550 MB/s on the SSDs, but it doesn't, and that is evident in every write scenario inside a VM. Migrating VMs also runs at this speed.
Gluster healing seems to run faster; we've seen it consume 7-9 Gbps. So I feel this is an oVirt issue and not Gluster, especially since all the tests above give the same results when using an NFS mount on the box running the VM in oVirt.
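For a more controlled number than dd with zeros or winsat, this is the kind of fio test I can run and post results for, both on the hypervisor against the gluster/NFS mount and inside a Linux test VM (a sketch only; fio must be installed, the file path is a placeholder, and a Windows guest would need ioengine=windowsaio instead):
# Sequential 1M writes, bypassing the page cache:
fio --name=seqwrite --rw=write --bs=1M --size=4g --ioengine=libaio \
    --direct=1 --filename=/path/on/gluster-or-nfs/fio.test
# Random 4k writes with some queue depth, closer to a database write pattern:
fio --name=randwrite --rw=randwrite --bs=4k --size=1g --iodepth=32 \
    --ioengine=libaio --direct=1 --filename=/path/on/gluster-or-nfs/fio.test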
Please guide me. I can post pictures and such if needed, logs whatever. Just ask.
5 years, 10 months
Suggestions on adding 1000 or more networks to an engine/hosts
by Brian Wilson
So we have a use case where our engines will be hosting development sandbox clusters. We need to have upwards of 1000 networks on some of these.
I understand there is no theoretical limit to the number of networks; does anybody have a good, reliable way of instantiating all of these networks?
We have been using Ansible for POCs, and with 100 networks we have not had a problem getting them on there. However, when upping the number to many more, it begins to take longer and longer between each one; eventually it timed out and only got to 1567.
Example Task we are using for this:
  tasks:
    - name: Add More Networks
      ovirt_network:
        auth: "{{ ovirt_auth }}"
        data_center: "{{ pcloud_name }}"
        name: "{{ pcloud_name }}-{{ item }}"
        state: present
        label: uplink
        vlan_tag: "{{ item }}"
        clusters:
          - name: "{{ ovirt_cluster }}"
            assigned: yes
            required: no
            display: no
            migration: no
            gluster: no
      with_sequence: start=1551 end=2500
What are some better ways of bulk-adding networks? Would the API provide a better solution, e.g. to batch them up and then initiate the save?
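For illustration, a rough, untested sketch of the API route: it keeps one authenticated session and posts the networks directly, avoiding the per-task overhead. The engine URL, password, data-center id and the "pcloud1" name prefix are placeholders:
#!/bin/bash
ENGINE="https://engine.example.com/ovirt-engine/api"
AUTH="admin@internal:${PASSWORD}"
DC_ID="<data-center-uuid>"   # e.g. from: curl -ks -u "$AUTH" "$ENGINE/datacenters?search=name%3Dmydc"

for vlan in $(seq 1551 2500); do
  curl -ks -u "$AUTH" -X POST "$ENGINE/networks" \
       -H "Content-Type: application/xml" \
       -d "<network><name>pcloud1-${vlan}</name><data_center id=\"${DC_ID}\"/><vlan id=\"${vlan}\"/></network>"
done
# Attaching each network to a cluster (required=false etc.) would be a
# separate POST to $ENGINE/clusters/<cluster-uuid>/networks per network.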
5 years, 10 months