Ansible hosted-engine deploy still doesn't support manually defined ovirtmgmt?
by Callum Smith
Dear All,
We're trying to deploy our hosted engine remotely using the Ansible hosted-engine playbook. It has been a rocky road, but we're now at the point where it installs and then fails. We have a pre-defined bond/VLAN setup for our interface, with the correct bond0, bond0.123 and ovirtmgmt bridge on top, but we're hitting the classic error:
Failed to find a valid interface for the management network of host virthyp04.virt.in.bmrc.ox.ac.uk. If the interface ovirtmgmt is a bridge, it should be torn-down manually.
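For context, a minimal sketch of the kind of pre-defined layout described above, in EL7 ifcfg syntax; bond0, bond0.123 and ovirtmgmt come from the mail, while the bonding mode and addressing are illustrative assumptions:
# /etc/sysconfig/network-scripts/ifcfg-bond0  (the bond itself, no IP)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100"
ONBOOT=yes
BOOTPROTO=none
# /etc/sysconfig/network-scripts/ifcfg-bond0.123  (VLAN, enslaved to the bridge)
DEVICE=bond0.123
VLAN=yes
BRIDGE=ovirtmgmt
ONBOOT=yes
BOOTPROTO=none
# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt  (bridge carrying the host IP)
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.0.123.14
PREFIX=24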
Does this bug still exist in the latest (4.3) version, and is installing with this network configuration via Ansible still impossible?
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum(a)well.ox.ac.uk
5 years, 7 months
Actual size bigger than virtual size
by suporte@logicworks.pt
Hi,
I have an all-in-one oVirt 4.2.2 setup with Gluster storage and a couple of Windows 2012 VMs.
One W2012 VM is showing an actual size of 209 GiB and a virtual size of 150 GiB on a thin-provisioned disk. Inside the guest, the VM shows 30.9 GB of used space.
This VM is slower than the others; in particular, rebooting the machine takes around 2 hours.
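(Not part of the original report, but a hedged way to compare the allocated and virtual sizes of the image directly on the storage; the path below is illustrative:)
$ qemu-img info /rhev/data-center/mnt/glusterSD/<server>:_<volume>/<sd-id>/images/<img-id>/<vol-id>
# "virtual size" should correspond to the 150 GiB shown in the UI,
# "disk size" is the space actually allocated on the Gluster brick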
Any idea?
Thanks
--
Jose Ferradeira
http://www.logicworks.pt
5 years, 7 months
Re: oVirt Performance (Horrific)
by Strahil
Hi Drew,
What is the host RAM size, and what are the settings for vm.dirty_ratio and vm.dirty_background_ratio on those hosts?
What about your iSCSI target?
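(For reference, a quick way to check the values asked about on a host; not part of the original mail:)
$ free -g                                          # total host RAM
$ sysctl vm.dirty_ratio vm.dirty_background_ratio  # current writeback thresholds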
Best Regards,
Strahil Nikolov
On Mar 11, 2019 23:51, Drew Rash <drew.rash(a)gmail.com> wrote:
>
> Set nfs.disable to false, removed the Gluster storage domain, re-added it using NFS. Performance is still in the low 10s of MBps, plus or minus 5.
> Ran the showmount -e "" and it displayed the mount.
>
> Trying right now to re-mount using gluster with a negative-timeout=1 option.
>
> We converted one of our 4 boxes to FreeNAS, took 4 x 6TB drives, made a RAID-backed iSCSI target and connected it to oVirt. Booted Windows (times 2: two boxes, with a 7GB file on each), copied the file from one to the other, and it copied at 600MBps average. But then it has weird pauses... I think it's doing some kind of caching: it'll go about 2GB and choke to zero Bps, then speed up and choke, speed up and choke, averaging or getting up to 10MBps. Then at 99% it waits 15 seconds with 0 bytes left...
> Small files are basically instant. No complaints there.
> So... WAY faster, but it suffers from the same thing... it just takes writing a bit more to get there: a few gigs and then it crawls.
>
> It seems to be related to whether I JUST finished running a test. If I wait a while, I can get it to copy almost 4GB or so before choking.
> I made a 3rd Windows 10 VM and copied the same file from the 1st to the 2nd (via a Windows share, and from the 3rd box), and it didn't choke or do any funny business... oddly. Maybe a fluke. Only did that once.
>
> So... switching to FreeNAS appears to have increased the window size before it runs horribly, but it will still run horrifically if the disk is busy.
>
> And since we're planning on doing actual work on this... idle disks caching up on some hidden cache feature of oVirt isn't gonna work. We won't be writing gigs of data all over the place...but knowing that this chokes a VM to near death...is scary.
>
> It looks like for a Windows 10 install to operate correctly, it expects at least 15MB/s with less than 1s latency. Otherwise services don't start, weird stuff happens, and it runs slower than my dog while pooping out that extra little stringy bit near the end. So we gotta avoid that.
>
>
>
>
> On Sat, Mar 9, 2019 at 12:44 AM Strahil <hunter86_bg(a)yahoo.com> wrote:
>>
>> Hi Drew,
>>
>> For the test, change the gluster parameter nfs.disable to false.
>> Something like: gluster volume set volname nfs.disable false
>>
>> Then use showmount -e gluster-node-fqdn
>> Note: NFS might not be allowed in the firewall.
>>
>> Then add this NFS domain (don't forget to remove the gluster storage domain before that) and do your tests.
>>
>> If it works well, you will have to switch off nfs.disable and deploy NFS Ganesha:
>>
>> gluster volume reset volname nfs.disable
>>
>> Best Regards,
>> Strahil Nikolov
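For the negative-timeout remount Drew mentions above, a hedged sketch of what the FUSE mount would look like; host and volume names are illustrative, and in oVirt the option would normally be set via the storage domain's custom mount options:
$ mount -t glusterfs -o negative-timeout=1 gluster-node-fqdn:/datavol /mnt/test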
5 years, 7 months
HE - engine gluster volume - not mounted
by Leo David
Hello Everyone,
I'm using a 4.3.2 installation, and after running through the Hyperconverged Setup it fails at the last stage. It seems that the previously created "engine" volume is not mounted under the "/rhev" path, and therefore the setup cannot finish the deployment.
Any idea which services are responsible for mounting the volumes on the oVirt Node distribution? I'm thinking that maybe this particular one failed to start for some reason...
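As a reference point (not from the original mail), the engine volume would normally show up as a GlusterFS FUSE mount under /rhev; a hedged way to check, with host and volume names as assumptions:
$ mount | grep rhev
$ ls /rhev/data-center/mnt/glusterSD/
# the manual equivalent of what the deployment expects would look roughly like:
$ mount -t glusterfs node1.example.com:/engine /rhev/data-center/mnt/glusterSD/node1.example.com:_engine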
Thank you very much !
--
Best regards, Leo David
5 years, 7 months
Multiple "Active VM before the preview" snapshots
by Bruckner, Simone
Hi,
we see some VMs that show an inconsistent view of snapshots. Checking the database for one example VM shows the following result:
engine=# select snapshot_id, status, description from snapshots where vm_id = '40c0f334-dac5-42ad-8040-e2d2193c73c0';
             snapshot_id              |   status   |         description
--------------------------------------+------------+------------------------------
 b77f5752-f1a4-454f-bcde-6afd6897e047 | OK         | Active VM
 059d262a-6cc4-4d35-b1d4-62ef7fe28d67 | OK         | Active VM before the preview
 d22e4f74-6521-45d5-8e09-332c05194ec3 | OK         | Active VM before the preview
 87d39245-bedf-4cf1-a2a6-4228176091d3 | IN_PREVIEW | base
(4 rows)
We cannot perform any snapshot, clone, or copy operations on those VMs. Is there a way to get this cleared?
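(A hedged way to check how many VMs are affected, using the same snapshots table; run on the engine host, not part of the original mail:)
$ sudo -u postgres psql engine -c "SELECT vm_id, count(*) FROM snapshots WHERE description = 'Active VM before the preview' GROUP BY vm_id HAVING count(*) > 1;"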
All the best,
Simone
5 years, 7 months
Is iSCSI performance better for VMs on the selected Host?
by sebastian.freitag@aptiv.com
Hi all,
I am planning to add block storage (iSCSI) to my oVirt environment. I just stumbled across this point from the oVirt Documentation:
"6. Select an active host in the Use Host field. If this is not the first data domain in a data center, you must select the data center's SPM host. Important: All communication to the storage domain is through the selected host and not directly from the oVirt Engine." (you find it here: https://www.ovirt.org/documentation/admin-guide/chap-Storage.html#adding-...)
So my question is: is there a significant performance advantage for VMs using that storage when they run on the "selected host"? Because if that is the case, I would connect all my host machines to the dedicated storage network, create multiple data domains, and "split" my iSCSI storage server(s) among those domains. If it is not the case, I would only put my storage server(s) and one of my hosts on the storage network. This also has implications for the dimensions (and cost!) of the switch I use for the storage network, so I would be grateful if someone has any insight.
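Independent of the answer, a hedged way to check from each hypervisor whether it holds its own iSCSI session to the target once the domain is up (the portal address is illustrative):
$ iscsiadm -m session -P 1                                     # active sessions on this host
$ iscsiadm -m discovery -t sendtargets -p 192.168.100.10:3260  # targets the portal exposes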
Thanks
5 years, 8 months
High iowait and low throughput on NFS, oflag=direct fixes it?
by Frank Wall
Hi,
I've been using oVirt for years and have just discovered a rather strange
issue that causes EXTREMELY high iowait when using NFSv3 storage.
Here's a quick test on a CentOS 7.6 VM running on any oVirt 4.2.x node:
(5 oVirt nodes, all showing the same results)
# CentOS VM
$ dd if=/dev/zero of=TEST02 bs=1M count=3000
3000+0 records in
3000+0 records out
3145728000 bytes (3.1 GB) copied, 141.649 s, 22.2 MB/s
# iostat output
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vdb 0.00 0.00 1.00 50.00 0.00 23.02 924.39 121.62 2243.47 2301.00 2242.32 19.61 100.00
As you can see, iowait is beyond bad for both read and write requests.
During this test the underlying NFS storage server was idle, disks barely
doing anything. iowait on the NFS storage server was very low.
However, when using oflag=direct the test shows a completely different result:
# CentOS VM
$ dd if=/dev/zero of=TEST02 bs=1M count=3000 oflag=direct
3000+0 records in
3000+0 records out
3145728000 bytes (3.1 GB) copied, 21.0724 s, 149 MB/s
# iostat output
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vdb 0.00 0.00 4.00 483.00 0.02 161.00 677.13 2.90 5.96 0.00 6.01 1.99 97.10
This test shows the *expected* performance in this small oVirt setup.
Notice how iowait remains healthy, although the throughput is 7x higher now.
I think this 2nd test may prove multiple things: the NFS storage is fast
enough and there's no networking/switch issue either.
Still, under normal conditions WRITE/READ operations are really slow and
iowait goes through the roof.
Do these results make sense to anyone? Any hints on how to find what's wrong here?
Any tests I should run or sysctls/tunables that would make sense?
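For what it's worth, an equivalent but more controlled pair of tests with fio (not from the original mail; file name and size are illustrative):
$ fio --name=buffered --filename=TEST03 --rw=write --bs=1M --size=3g --ioengine=libaio --iodepth=16 --direct=0
$ fio --name=direct --filename=TEST03 --rw=write --bs=1M --size=3g --ioengine=libaio --iodepth=16 --direct=1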
FWIW, iperf result looks good between the oVirt Node and the NFS storage:
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 10.9 GBytes 9.36 Gbits/sec
Regards
- Frank
5 years, 8 months
Vagrant Plugin
by Jeremy Tourville
I am having some trouble getting the oVirt Vagrant plugin working. I was able to get Vagrant installed and could even run the example scenario listed in the blog: https://www.ovirt.org/blog/2017/02/using-oVirt-vagrant.html
My real issue is getting a VM generated by the SecGen project (https://github.com/SecGen/SecGen) to come up. If I use the VirtualBox provider, everything works as expected and I can launch the VM with vagrant up. If I try to run it using the oVirt provider, it fails.
I had originally posted this over in the Google Groups / Vagrant forums, and it was suggested to take it to oVirt. Hopefully, somebody here has some insights.
The process fails quickly with the following output. Can anyone give some suggestions on how to fix the issue? I have also included a copy of my Vagrantfile below. Thanks in advance for your assistance!
***Output***
Bringing machine 'escalation' up with 'ovirt4' provider...
==> escalation: Creating VM with the following settings...
==> escalation: -- Name: SecGen-default-scenario-escalation
==> escalation: -- Cluster: default
==> escalation: -- Template: debian_stretch_server_291118
==> escalation: -- Console Type: spice
==> escalation: -- Memory:
==> escalation: ---- Memory: 512 MB
==> escalation: ---- Maximum: 512 MB
==> escalation: ---- Guaranteed: 512 MB
==> escalation: -- Cpu:
==> escalation: ---- Cores: 1
==> escalation: ---- Sockets: 1
==> escalation: ---- Threads: 1
==> escalation: -- Cloud-Init: false
==> escalation: An error occured. Recovering..
==> escalation: VM is not created. Please run `vagrant up` first.
/home/secgenadmin/.vagrant.d/gems/2.4.4/gems/ovirt-engine-sdk-4.0.12/lib/ovirtsdk4/service.rb:52:in `raise_error': Fault reason is "Operation Failed". Fault detail is "Entity not found: Cluster: name=default". HTTP response code is 404. (OvirtSDK4::Error)
from /home/secgenadmin/.vagrant.d/gems/2.4.4/gems/ovirt-engine-sdk-4.0.12/lib/ovirtsdk4/service.rb:67:in `check_fault'
from /home/secgenadmin/.vagrant.d/gems/2.4.4/gems/ovirt-engine-sdk-4.0.12/lib/ovirtsdk4/services.rb:35570:in `add'
from /home/secgenadmin/.vagrant.d/gems/2.4.4/gems/vagrant-ovirt4-1.2.2/lib/vagrant-ovirt4/action/create_vm.rb:67:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/warden.rb:50:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/builtin/before_trigger.rb:23:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/warden.rb:50:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/builtin/after_trigger.rb:26:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/warden.rb:50:in `call'
from /home/secgenadmin/.vagrant.d/gems/2.4.4/gems/vagrant-ovirt4-1.2.2/lib/vagrant-ovirt4/action/set_name_of_domain.rb:17:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/warden.rb:50:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/builtin/before_trigger.rb:23:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/warden.rb:50:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/warden.rb:121:in `block in finalize_action'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/warden.rb:50:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/builder.rb:116:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/runner.rb:102:in `block in run'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/util/busy.rb:19:in `busy'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/runner.rb:102:in `run'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/builtin/call.rb:53:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/warden.rb:50:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/builtin/before_trigger.rb:23:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/warden.rb:50:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/builtin/after_trigger.rb:26:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/warden.rb:50:in `call'
from /home/secgenadmin/.vagrant.d/gems/2.4.4/gems/vagrant-ovirt4-1.2.2/lib/vagrant-ovirt4/action/connect_ovirt.rb:31:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/warden.rb:50:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/builtin/before_trigger.rb:23:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/warden.rb:50:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/builtin/after_trigger.rb:26:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/warden.rb:50:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/warden.rb:50:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/builtin/before_trigger.rb:23:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/warden.rb:50:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/builder.rb:116:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/runner.rb:102:in `block in run'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/util/busy.rb:19:in `busy'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/action/runner.rb:102:in `run'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/machine.rb:238:in `action_raw'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/machine.rb:209:in `block in action'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/environment.rb:615:in `lock'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/machine.rb:195:in `call'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/machine.rb:195:in `action'
from /opt/vagrant/embedded/gems/2.2.4/gems/vagrant-2.2.4/lib/vagrant/batch_action.rb:82:in `block (2 levels) in run'
Non-zero exit status...
Not going to destroy escalation, since it does not exist
Failed. Not retrying. Please refer to the error above.
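One hedged way to double-check which cluster names the engine actually exposes, using the URL and credentials from the Vagrantfile below (not part of the original post):
$ curl -s -k -u 'admin@internal:my_password' https://engine.cyber-range.lan/ovirt-engine/api/clusters | grep '<name>'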
***Vagrantfile***
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "escalation" do |escalation|
    escalation.vm.provider :ovirt4 do |ovirt|
      ovirt.username = 'admin@internal'
      ovirt.password = 'my_password'
      ovirt.url = 'https://engine.cyber-range.lan/ovirt-engine/api'
      ovirt.cluster = 'default'
      ovirt.template = 'debian_stretch_server_291118'
      ovirt.memory_size = '512 MB'
      ovirt.console = 'spice'
      ovirt.insecure = true
      ovirt.debug = true
    end
    escalation.vm.provision 'shell', inline: "echo 'datasource_list: [ None ] '> /etc/cloud/cloud.cfg.d/90_dpkg.cfg"
    escalation.vm.hostname = 'SecGen-default-scenario-escalation'
    escalation.vm.box = 'ovirt4'
    escalation.vm.box_url = 'https://github.com/myoung34/vagrant-ovirt4/blob/master/example_box/dummy....'
    escalation.vm.provision "puppet" do | modules_utilities_unix_update_unix_update |
      modules_utilities_unix_update_unix_update.module_path = "puppet/escalation/modules"
      modules_utilities_unix_update_unix_update.environment_path = "environments/"
      modules_utilities_unix_update_unix_update.environment = "production"
      modules_utilities_unix_update_unix_update.synced_folder_type = "rsync"
      modules_utilities_unix_update_unix_update.manifests_path = "puppet/escalation/modules/unix_update"
      modules_utilities_unix_update_unix_update.manifest_file = "unix_update.pp"
    end
    escalation.vm.provision "puppet" do | modules_vulnerabilities_unix_misc_distcc_exec |
      modules_vulnerabilities_unix_misc_distcc_exec.facter = {
        "base64_inputs_file" => '94bad9a7b0c9aa1462eb1d4d8277a712',
      }
      modules_vulnerabilities_unix_misc_distcc_exec.module_path = "puppet/escalation/modules"
      modules_vulnerabilities_unix_misc_distcc_exec.environment_path = "environments/"
      modules_vulnerabilities_unix_misc_distcc_exec.environment = "production"
      modules_vulnerabilities_unix_misc_distcc_exec.synced_folder_type = "rsync"
      modules_vulnerabilities_unix_misc_distcc_exec.manifests_path = "puppet/escalation/modules/distcc_exec"
      modules_vulnerabilities_unix_misc_distcc_exec.manifest_file = "distcc_exec.pp"
    end
    escalation.vm.provision "puppet" do | modules_vulnerabilities_unix_local_chkrootkit |
      modules_vulnerabilities_unix_local_chkrootkit.facter = {
        "base64_inputs_file" => '170197cbf03410f71d7d777e9d468382',
      }
      modules_vulnerabilities_unix_local_chkrootkit.module_path = "puppet/escalation/modules"
      modules_vulnerabilities_unix_local_chkrootkit.environment_path = "environments/"
      modules_vulnerabilities_unix_local_chkrootkit.environment = "production"
      modules_vulnerabilities_unix_local_chkrootkit.synced_folder_type = "rsync"
      modules_vulnerabilities_unix_local_chkrootkit.manifests_path = "puppet/escalation/modules/chkrootkit"
      modules_vulnerabilities_unix_local_chkrootkit.manifest_file = "chkrootkit.pp"
    end
    escalation.vm.provision "puppet" do | modules_services_unix_nfs_nfs_share |
      modules_services_unix_nfs_nfs_share.facter = {
        "base64_inputs_file" => '98ddf2a16a97f46fd4856a509df9b75a',
      }
      modules_services_unix_nfs_nfs_share.module_path = "puppet/escalation/modules"
      modules_services_unix_nfs_nfs_share.environment_path = "environments/"
      modules_services_unix_nfs_nfs_share.environment = "production"
      modules_services_unix_nfs_nfs_share.synced_folder_type = "rsync"
      modules_services_unix_nfs_nfs_share.manifests_path = "puppet/escalation/modules/nfs_share"
      modules_services_unix_nfs_nfs_share.manifest_file = "nfs_share.pp"
    end
    escalation.vm.network :private_network, type: "dhcp", :ovirt__network_name => 'ovirtmgmt'
  end
end
5 years, 8 months
oVirt 4.2.8 - guest Windows 10 - oVirt tools - no mouse pointer
by Timmi
Hi guys,
I have a problem with some of my Windows 10 VMs.
There is no mouse pointer when I connect to the console through virt-viewer.
I also have other Windows 10 VMs that don't show this issue.
The mouse pointer works as soon as I uninstall the oVirt tools.
Any ideas?
Best regards
Timmi
5 years, 8 months
ovirt "discard after delete" storage access request
by Simon Coter
Hi experts,
how can I find storage that supports the "Discard" option for our oVirt storage domains?
I mean, I've tried several storage models and I always get:
Discard is not supported by underlying storage.
This is an example I did on a very old storage (HP MSA):
Thanks in advance for your help.
Simon
5 years, 8 months