Backup VMs on oVirt 4.2+ with Disk image transfer
by wodel youchi
Hi,
I am testing a product that can interface with the oVirt API to back up VMs.
The product can use the v3 API (for oVirt 3.5.1+) with a proxy VM on oVirt to
back up the VMs to the Export domain.
It can also use the v4 API (for oVirt 4+) to back up VMs, and it offers two
methods for that:
- for oVirt 4+ it can use Disk attachment mode. This mode uses a proxy VM on
oVirt to attach the snapshot of the VM to be backed up, and the proxy VM then
backs up the disk or hands it over to external backup software. This mode
offers only full backups (pretty similar to v3).
- for oVirt 4.2+ it can use Disk image transfer. This mode does not need a
proxy VM: it exports data directly to the backup server using the oVirt 4.2+
API, and uses the snapshot chains provided by the new RHV/oVirt, which makes
it possible to take incremental VM backups (a rough sketch of this flow is
shown below).
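As far as I understand it, the Disk image transfer mode drives the copy
through the image transfer service of the REST API and the ovirt-imageio
components. A minimal sketch with the python-ovirt-engine-sdk4 package could
look like this (the engine URL, credentials and disk id are placeholders, and
this is only my sketch of the flow, not the product's actual implementation):

    import time
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Engine URL and credentials are placeholders.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )

    # Start a download transfer for one disk (normally a disk of the
    # snapshot taken for the backup). The disk id is a placeholder.
    transfers_service = connection.system_service().image_transfers_service()
    transfer = transfers_service.add(
        types.ImageTransfer(
            image=types.Image(id='DISK-UUID'),
            direction=types.ImageTransferDirection.DOWNLOAD,
        )
    )
    transfer_service = transfers_service.image_transfer_service(transfer.id)

    # Wait until the transfer leaves the initializing phase.
    while transfer.phase == types.ImageTransferPhase.INITIALIZING:
        time.sleep(1)
        transfer = transfer_service.get()

    # transfer.proxy_url is the HTTPS endpoint of the ovirt-imageio proxy on
    # the engine host; the backup application reads the disk data from there
    # and calls transfer_service.finalize() when it is done.
    print(transfer.proxy_url)

    connection.close()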
I cannot find any documentation about the latter approach (Disk image
transfer), and my question is: which network does this approach use to
transfer the backup stream? Is it the ovirtmgmt network, since we're talking
to the API?
My concern is the bandwidth of this network, especially when backing up many
large VMs.
Regards.
6 years, 5 months
oVirt 4.2 and CLI options
by Simon Coter
Hi,
what is the best choice for a CLI interface with oVirt 4.2?
I've looked into it and saw that ovirt-shell is already deprecated.
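The candidates I've found so far are the Python SDK (python-ovirt-engine-sdk4)
and the Ansible ovirt modules. A minimal sketch of the SDK, with a placeholder
engine URL and credentials, just to show the idea (I'm not sure this is the
recommended way):

    import ovirtsdk4 as sdk

    # Engine URL and credentials are placeholders.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )

    # Roughly the equivalent of "list vms" in the old ovirt-shell.
    vms_service = connection.system_service().vms_service()
    for vm in vms_service.list():
        print(vm.name, vm.status)

    connection.close()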
Thanks
Simon
6 years, 5 months
Hey, guys, I have a problem with oVirt OVN.
by Чижевский _ЕД
I have no ping between VMs on different hosts when connecting an OVN
network with a subnet to the physical network (ovirtmgmt).
I ran vdsm-tool ovn-config 172.20.139.27 (the IP of my engine)
172.20.139.81 (the IP of the host that runs the VM).
6 years, 5 months
Hosted Engine Change State, VMs Images Locked?
by Dan Lavu
So I have a few VMs that are locked and unable to start on either
hypervisor; this happened after the hosted engine switched hosts for some
reason. It seems like the image is locked, but I'm unsure how to unlock it.
Any advice is appreciated.
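For what it's worth, this is roughly what I've been using to list the disks
the engine still reports as locked, via the Python SDK (the engine URL and
credentials below are placeholders):

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Engine URL and credentials are placeholders.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )

    # List every disk the engine still considers locked.
    disks_service = connection.system_service().disks_service()
    for disk in disks_service.list():
        if disk.status == types.DiskStatus.LOCKED:
            print(disk.id, disk.name, disk.status)

    connection.close()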
Thanks,
Dan
Version
-------------
glusterfs-3.8.4-54.8.el7rhgs.x86_64
vdsm-4.20.27.2-1.el7ev.x86_64
ovirt-ansible-disaster-recovery-0.4-1.el7ev.noarch
ovirt-engine-extension-aaa-ldap-1.3.7-1.el7ev.noarch
ovirt-vmconsole-proxy-1.0.5-4.el7ev.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.2.3.8-0.1.el7.noarch
ovirt-engine-extensions-api-impl-4.2.3.8-0.1.el7.noarch
ovirt-imageio-proxy-setup-1.3.1.2-0.el7ev.noarch
ovirt-engine-extension-aaa-jdbc-1.1.7-1.el7ev.noarch
ovirt-engine-webadmin-portal-4.2.3.4-0.1.el7.noarch
ovirt-engine-backend-4.2.3.4-0.1.el7.noarch
ovirt-host-deploy-1.7.3-1.el7ev.noarch
ovirt-cockpit-sso-0.0.4-1.el7ev.noarch
ovirt-ansible-infra-1.1.5-1.el7ev.noarch
ovirt-provider-ovn-1.2.10-1.el7ev.noarch
ovirt-engine-setup-4.2.3.8-0.1.el7.noarch
ovirt-setup-lib-1.1.4-1.el7ev.noarch
ovirt-engine-dwh-4.2.2.2-1.el7ev.noarch
ovirt-js-dependencies-1.2.0-3.1.el7ev.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7ev.noarch
ovirt-log-collector-4.2.5-2.el7ev.noarch
ovirt-ansible-v2v-conversion-host-1.1.2-1.el7ev.noarch
ovirt-ansible-cluster-upgrade-1.1.7-1.el7ev.noarch
ovirt-ansible-image-template-1.1.6-2.el7ev.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.2.3.8-0.1.el7.noarch
ovirt-engine-websocket-proxy-4.2.3.8-0.1.el7.noarch
ovirt-engine-tools-backup-4.2.3.4-0.1.el7.noarch
ovirt-engine-restapi-4.2.3.4-0.1.el7.noarch
ovirt-engine-tools-4.2.3.4-0.1.el7.noarch
ovirt-imageio-common-1.3.1.2-0.el7ev.noarch
ovirt-engine-cli-3.6.8.1-1.el7ev.noarch
ovirt-web-ui-1.3.9-1.el7ev.noarch
ovirt-ansible-manageiq-1.1.8-1.el7ev.noarch
ovirt-ansible-roles-1.1.4-2.el7ev.noarch
ovirt-engine-lib-4.2.3.8-0.1.el7.noarch
ovirt-vmconsole-1.0.5-4.el7ev.noarch
ovirt-engine-setup-base-4.2.3.8-0.1.el7.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.2.3.8-0.1.el7.noarch
ovirt-host-deploy-java-1.7.3-1.el7ev.noarch
ovirt-engine-dashboard-1.2.3-2.el7ev.noarch
ovirt-engine-4.2.3.4-0.1.el7.noarch
python-ovirt-engine-sdk4-4.2.6-1.el7ev.x86_64
ovirt-engine-metrics-1.1.4.2-1.el7ev.noarch
ovirt-engine-vmconsole-proxy-helper-4.2.3.8-0.1.el7.noarch
ovirt-imageio-proxy-1.3.1.2-0.el7ev.noarch
ovirt-engine-dwh-setup-4.2.2.2-1.el7ev.noarch
ovirt-guest-agent-common-1.0.14-3.el7ev.noarch
ovirt-ansible-vm-infra-1.1.7-1.el7ev.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.2.3.8-0.1.el7.noarch
ovirt-engine-api-explorer-0.0.1-1.el7ev.noarch
ovirt-engine-dbscripts-4.2.3.4-0.1.el7.noarch
ovirt-iso-uploader-4.2.0-1.el7ev.noarch
---
VDSM log
---------------
2018-06-06 01:07:12,940-0400 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC
call Host.getStorageRepoStats succeeded in 0.01 seconds (__init__:573)
2018-06-06 01:07:12,948-0400 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:573)
2018-06-06 01:07:13,068-0400 INFO (periodic/3) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=3e30ead8-20b6-449d-a3d3-684a9d20e2c2 (api:46)
2018-06-06 01:07:13,068-0400 INFO (periodic/3) [vdsm.api] FINISH repoStats
return={u'f7dfffc3-9d69-4d20-83fc-c3d4324430a2': {'code': 0, 'actual':
True, 'version': 0, 'acquired': True, 'delay': '0.000482363', 'lastCheck':
'2.2', 'valid': True}, u'ca5bf4c5-43d8-4d88-ae64-78f87ce016b1': {'code': 0,
'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00143521',
'lastCheck': '2.2', 'valid': True},
u'f4e26e9a-427b-44f2-9ecf-5d789b56a1be': {'code': 0, 'actual': True,
'version': 4, 'acquired': True, 'delay': '0.000832749', 'lastCheck': '2.2',
'valid': True}, u'a4c70c2d-98f2-4394-a6fc-c087a31b21d3': {'code': 0,
'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000280917',
'lastCheck': '2.1', 'valid': True},
u'30cee3ab-83a3-4bf4-a674-023df575c3da': {'code': 0, 'actual': True,
'version': 4, 'acquired': True, 'delay': '0.00128562', 'lastCheck': '2.1',
'valid': True}} from=internal, task_id=3e30ead8-20b6-449d-a3d3-684a9d20e2c2
(api:52)
2018-06-06 01:07:13,069-0400 INFO (periodic/3) [vdsm.api] START
multipath_health() from=internal,
task_id=7064b06c-14a2-4bfd-8c31-b650918b7287 (api:46)
2018-06-06 01:07:13,069-0400 INFO (periodic/3) [vdsm.api] FINISH
multipath_health return={} from=internal,
task_id=7064b06c-14a2-4bfd-8c31-b650918b7287 (api:52)
2018-06-06 01:07:13,099-0400 INFO (vm/78754822) [root]
/usr/libexec/vdsm/hooks/before_vm_start/50_hostedengine: rc=0 err=
(hooks:110)
2018-06-06 01:07:13,350-0400 INFO (vm/78754822) [root]
/usr/libexec/vdsm/hooks/before_vm_start/50_vfio_mdev: rc=0 err= (hooks:110)
2018-06-06 01:07:13,578-0400 INFO (vm/78754822) [root]
/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd: rc=0 err= (hooks:110)
2018-06-06 01:07:13,579-0400 INFO (vm/78754822) [virt.vm]
(vmId='78754822-2bd3-4acc-a029-906b7a167c8e') <?xml version="1.0"
encoding="utf-8"?><domain type="kvm" xmlns:ns0="http://ovirt.org/vm/tune/1.0"
xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<name>idm1-runlevelone-lan</name>
<uuid>78754822-2bd3-4acc-a029-906b7a167c8e</uuid>
<memory>2097152</memory>
<currentMemory>2097152</currentMemory>
<maxMemory slots="16">8388608</maxMemory>
<vcpu current="2">16</vcpu>
<sysinfo type="smbios">
<system>
<entry name="manufacturer">oVirt</entry>
<entry name="product">RHEV Hypervisor</entry>
<entry name="version">7.5-8.el7</entry>
<entry
name="serial">30333436-3638-5355-4532-313631574337</entry>
<entry name="uuid">78754822-2bd3-4acc-a029-906b7a167c8e</entry>
</system>
</sysinfo>
<clock adjustment="0" offset="variable">
<timer name="rtc" tickpolicy="catchup"/>
<timer name="pit" tickpolicy="delay"/>
<timer name="hpet" present="no"/>
</clock>
<features>
<acpi/>
<vmcoreinfo/>
</features>
<cpu match="exact">
<model>Nehalem</model>
<topology cores="1" sockets="16" threads="1"/>
<numa>
<cell cpus="0,1" id="0" memory="2097152"/>
</numa>
</cpu>
<cputune/>
<devices>
<input bus="ps2" type="mouse"/>
<channel type="unix">
<target name="ovirt-guest-agent.0" type="virtio"/>
<source mode="bind"
path="/var/lib/libvirt/qemu/channels/78754822-2bd3-4acc-a029-906b7a167c8e.ovirt-guest-agent.0"/>
</channel>
<channel type="unix">
<target name="org.qemu.guest_agent.0" type="virtio"/>
<source mode="bind"
path="/var/lib/libvirt/qemu/channels/78754822-2bd3-4acc-a029-906b7a167c8e.org.qemu.guest_agent.0"/>
</channel>
<graphics autoport="yes" passwd="*****"
passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
<channel mode="secure" name="main"/>
<channel mode="secure" name="inputs"/>
<channel mode="secure" name="cursor"/>
<channel mode="secure" name="playback"/>
<channel mode="secure" name="record"/>
<channel mode="secure" name="display"/>
<channel mode="secure" name="smartcard"/>
<channel mode="secure" name="usbredir"/>
<listen network="vdsm-ovirtmgmt" type="network"/>
</graphics>
<rng model="virtio">
<backend model="random">/dev/urandom</backend>
<alias name="ua-1b3d2efc-5605-4b5b-afde-7e75369d0191"/>
</rng>
<controller index="0" model="piix3-uhci" type="usb">
<address bus="0x00" domain="0x0000" function="0x2" slot="0x01"
type="pci"/>
</controller>
<controller type="ide">
<address bus="0x00" domain="0x0000" function="0x1" slot="0x01"
type="pci"/>
</controller>
<controller index="0" ports="16" type="virtio-serial">
<alias name="ua-c27a9db4-39dc-436e-8b21-b2cd12aeb3dc"/>
<address bus="0x00" domain="0x0000" function="0x0" slot="0x05"
type="pci"/>
</controller>
<memballoon model="virtio">
<stats period="5"/>
<alias name="ua-c82a301f-e476-4107-b954-166bbdd65f03"/>
<address bus="0x00" domain="0x0000" function="0x0" slot="0x06"
type="pci"/>
</memballoon>
<controller index="0" model="virtio-scsi" type="scsi">
<alias name="ua-d8d0e95b-80e0-4d7d-91d6-4faf0f266c6e"/>
<address bus="0x00" domain="0x0000" function="0x0" slot="0x04"
type="pci"/>
</controller>
<video>
<model heads="1" ram="65536" type="qxl" vgamem="16384"
vram="32768"/>
<alias name="ua-f0c36e10-652c-4fc2-87e8-737271baebca"/>
<address bus="0x00" domain="0x0000" function="0x0" slot="0x02"
type="pci"/>
</video>
<channel type="spicevmc">
<target name="com.redhat.spice.0" type="virtio"/>
</channel>
<disk device="cdrom" snapshot="no" type="file">
<driver error_policy="report" name="qemu" type="raw"/>
<source file="" startupPolicy="optional"/>
<target bus="ide" dev="hdc"/>
<readonly/>
<alias name="ua-74a927f8-31ac-41c1-848e-599078655d77"/>
<address bus="1" controller="0" target="0" type="drive"
unit="0"/>
<boot order="2"/>
</disk>
<disk device="disk" snapshot="no" type="file">
<target bus="scsi" dev="sda"/>
<source
file="/rhev/data-center/mnt/glusterSD/deadpool.ib.runlevelone.lan:rhev__vms/30cee3ab-83a3-4bf4-a674-023df575c3da/images/0d38d154-cbd7-491b-ac25-c96fd5fe3830/5c93d0b3-4dfa-4114-a403-09f2e8c67bfc"/>
<driver cache="none" error_policy="stop" io="threads"
name="qemu" type="raw"/>
<alias name="ua-0d38d154-cbd7-491b-ac25-c96fd5fe3830"/>
<address bus="0" controller="0" target="0" type="drive"
unit="0"/>
<boot order="1"/>
<serial>0d38d154-cbd7-491b-ac25-c96fd5fe3830</serial>
</disk>
<interface type="bridge">
<model type="virtio"/>
<link state="up"/>
<source bridge="lab"/>
<alias name="ua-db30b82a-c181-48cf-901f-29b568576ec7"/>
<address bus="0x00" domain="0x0000" function="0x0" slot="0x03"
type="pci"/>
<mac address="00:1a:4a:16:01:63"/>
<filterref filter="vdsm-no-mac-spoofing"/>
<bandwidth/>
</interface>
</devices>
<pm>
<suspend-to-disk enabled="no"/>
<suspend-to-mem enabled="no"/>
</pm>
<os>
<type arch="x86_64" machine="pc-i440fx-rhel7.5.0">hvm</type>
<smbios mode="sysinfo"/>
</os>
<metadata>
<ns0:qos/>
<ovirt-vm:vm>
<minGuaranteedMemoryMb type="int">1365</minGuaranteedMemoryMb>
<clusterVersion>4.2</clusterVersion>
<ovirt-vm:custom/>
<ovirt-vm:device mac_address="00:1a:4a:16:01:63">
<ovirt-vm:custom/>
</ovirt-vm:device>
<ovirt-vm:device devtype="disk" name="sda">
<ovirt-vm:poolID>946fd87c-6327-11e8-b7d9-00163e751a4c</ovirt-vm:poolID>
<ovirt-vm:volumeID>5c93d0b3-4dfa-4114-a403-09f2e8c67bfc</ovirt-vm:volumeID>
<ovirt-vm:imageID>0d38d154-cbd7-491b-ac25-c96fd5fe3830</ovirt-vm:imageID>
<ovirt-vm:domainID>30cee3ab-83a3-4bf4-a674-023df575c3da</ovirt-vm:domainID>
</ovirt-vm:device>
<launchPaused>false</launchPaused>
<resumeBehavior>auto_resume</resumeBehavior>
</ovirt-vm:vm>
</metadata>
</domain> (vm:2867)
2018-06-06 01:07:14,584-0400 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC
call Host.ping2 succeeded in 0.00 seconds (__init__:573)
2018-06-06 01:07:14,590-0400 INFO (jsonrpc/4) [api.virt] START getStats()
from=::1,60908, vmId=d237b932-35fa-4b98-97e2-cb0afce1b3a8 (api:46)
2018-06-06 01:07:14,590-0400 INFO (jsonrpc/4) [api] FINISH getStats
error=Virtual machine does not exist: {'vmId':
u'd237b932-35fa-4b98-97e2-cb0afce1b3a8'} (api:127)
2018-06-06 01:07:14,590-0400 INFO (jsonrpc/4) [api.virt] FINISH getStats
return={'status': {'message': "Virtual machine does not exist: {'vmId':
u'd237b932-35fa-4b98-97e2-cb0afce1b3a8'}", 'code': 1}} from=::1,60908,
vmId=d237b932-35fa-4b98-97e2-cb0afce1b3a8 (api:52)
2018-06-06 01:07:14,591-0400 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC
call VM.getStats failed (error 1) in 0.00 seconds (__init__:573)
2018-06-06 01:07:14,675-0400 INFO (jsonrpc/0) [api.host] START
getAllVmStats() from=::1,60914 (api:46)
2018-06-06 01:07:14,677-0400 INFO (jsonrpc/0) [api.host] FINISH
getAllVmStats return={'status': {'message': 'Done', 'code': 0},
'statsList': (suppressed)} from=::1,60914 (api:52)
2018-06-06 01:07:14,678-0400 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Host.getAllVmStats succeeded in 0.00 seconds (__init__:573)
2018-06-06 01:07:15,557-0400 ERROR (vm/78754822) [virt.vm]
(vmId='78754822-2bd3-4acc-a029-906b7a167c8e') The vm start process failed
(vm:943)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 872, in
_startUnderlyingVm
self._run()
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2872, in
_run
dom.createWithFlags(flags)
File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py",
line 130, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 92,
in wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in
createWithFlags
if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed',
dom=self)
libvirtError: internal error: qemu unexpectedly closed the monitor:
2018-06-06T05:07:14.703253Z qemu-kvm: warning: All CPU(s) up to maxcpus
should be described in NUMA config, ability to start up with partial NUMA
mappings is obsoleted and will be removed in future
2018-06-06T05:07:14.798631Z qemu-kvm: -device
scsi-hd,bus=ua-d8d0e95b-80e0-4d7d-91d6-4faf0f266c6e.0,channel=0,scsi-id=0,lun=0,drive=drive-ua-0d38d154-cbd7-491b-ac25-c96fd5fe3830,id=ua-0d38d154-cbd7-491b-ac25-c96fd5fe3830,bootindex=1:
Failed to get shared "write" lock
Is another process using the image?
6 years, 5 months
Storage IO
by Thomas Fecke
Hey guys,
sorry, I need to ask again.
We have 2 hypervisors with about 50 running VMs and a single storage server with a 10 Gig connection.
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 3,00 694,00 1627,00 947,00 103812,00 61208,00 128,22 6,78 2,63 2,13 3,49 0,39 99,70
avg-cpu: %user %nice %system %iowait %steal %idle
0,00 0,00 3,70 31,37 0,00 64,93
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 1,00 805,00 836,00 997,00 43916,00 57900,00 111,09 6,00 3,27 1,87 4,44 0,54 99,30
avg-cpu: %user %nice %system %iowait %steal %idle
0,00 0,00 3,54 29,96 0,00 66,50
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 2,00 822,00 1160,00 1170,00 46700,00 52176,00 84,87 5,68 2,44 1,57 3,30 0,43 99,50
avg-cpu: %user %nice %system %iowait %steal %idle
0,00 0,00 5,05 31,46 0,00 63,50
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 3,00 1248,00 2337,00 1502,00 134932,00 48536,00 95,58 6,59 1,72 1,53 2,01 0,26 99,30
avg-cpu: %user %nice %system %iowait %steal %idle
0,00 0,00 3,95 31,79 0,00 64,26
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0,00 704,00 556,00 1292,00 19908,00 72600,00 100,12 5,50 2,99 1,83 3,48 0,54 99,50
avg-cpu: %user %nice %system %iowait %steal %idle
0,00 0,00 3,03 28,90 0,00 68,07
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0,00 544,00 278,00 1095,00 7848,00 66124,00 107,75 5,31 3,87 1,49 4,47 0,72 99,10
avg-cpu: %user %nice %system %iowait %steal %idle
0,00 0,00 3,03 29,32 0,00 67,65
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0,00 464,00 229,00 1172,00 6588,00 72384,00 112,74 5,44 3,88 1,67 4,31 0,71 99,50
And this is our problem. Does anyone know why our storage receives so many requests?
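To see which guests generate the traffic, I have been using roughly the
following libvirt sketch directly on a hypervisor (the counters are
cumulative since each VM started, so it only gives a rough picture):

    import xml.etree.ElementTree as ET
    import libvirt

    # Sum per-VM block I/O counters so the guests that generate most of the
    # storage traffic stand out.
    conn = libvirt.open('qemu:///system')
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        read_bytes = written_bytes = 0
        for target in ET.fromstring(dom.XMLDesc()).findall('./devices/disk/target'):
            dev = target.get('dev')
            try:
                rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
            except libvirt.libvirtError:
                continue  # e.g. empty cdrom drives
            read_bytes += rd_bytes
            written_bytes += wr_bytes
        print(dom.name(), 'read:', read_bytes, 'written:', written_bytes)
    conn.close()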
Thanks in advance
6 years, 5 months
oVirt Node NG (4.2.3.1-0.20180530) boot fails from iSCSI using iBFT after installation
by Ralf Schenk
Hello,
I successfully installed oVirt Node NG from the ISO to an iSCSI target
attached via the first network interface, using the following additions to
the grub cmdline:
"rd.iscsi.ibft=1 ip=ibft ip=eno2:dhcp"
I want to use the server as a diskless ovirt-node-ng server.
After a successful install the system reboots and starts up, but it fails
later in dracut even though it has correctly detected the disk and all the LVs.
I think "iscsistart" is run multiple times even after it is already logged
in to the iSCSI target, and that finally fails like this:
*[ 147.644872] localhost dracut-initqueue[1075]: iscsistart: initiator
reported error (15 - session exists)*
[ 147.645588] localhost dracut-initqueue[1075]: iscsistart: Logging
into iqn.2018-01.de.databay.office:storage01.epycdphv02-disk1
172.16.1.3:3260,1
[ 147.651027] localhost dracut-initqueue[1075]: Warning: 'iscsistart -b
' failed with return code 0
[ 147.807510] localhost systemd[1]: Starting Login iSCSI Target...
[ 147.809293] localhost iscsistart[6716]: iscsistart: TargetName not
set. Exiting iscsistart
[ 147.813625] localhost systemd[1]: iscsistart_iscsi.service: main
process exited, code=exited, status=7/NOTRUNNING
[ 147.824897] localhost systemd[1]: Failed to start Login iSCSI Target.
[ 147.825050] localhost systemd[1]: Unit iscsistart_iscsi.service
entered failed state.
[ 147.825185] localhost systemd[1]: iscsistart_iscsi.service failed.
After a long timeout dracut drops to a shell.
I attach my shortened and cleaned rdsosreport.txt. Can someone help me
find a workaround?
Bye
--
*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *rs(a)databay.de* <mailto:rs@databay.de>
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* <http://www.databay.de>
Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen
------------------------------------------------------------------------
6 years, 5 months
High latency events and events history settings
by Gianluca Cecchi
Hello,
I'm going to debug some sporadic events I see on my iSCSI connection from
the hypervisors.
Sometimes I get
May 31, 2018, 7:14:51 AM
Storage domain ovsd3750 experienced a high latency of 8.96043 seconds from
host ov200. This may cause performance and functional issues. Please
consult your Storage Administrator.
Jun 1, 2018, 5:26:25 AM
Storage domain ovsd3750 experienced a high latency of 8.26526 seconds from
host ov301. This may cause performance and functional issues. Please
consult your Storage Administrator.
Jun 2, 2018, 5:21:37 AM
VDSM ov200 command SpmStatusVDS failed: (-202, 'Sanlock resource read
failure', 'IO timeout')
--> it seems to have had no impact; actually, after a few seconds it becomes SPM again
Jun 3, 2018, 7:00:14 AM
Storage domain ovsd3750 experienced a high latency of 6.37818 seconds from
host ov300. This may cause performance and functional issues. Please
consult your Storage Administrator.
And yesterday a VM running on node ov301 was paused for a few seconds.
Jun 4, 2018, 7:02:26 AM
VM dbatest3 has been paused.
Jun 4, 2018, 7:02:26 AM
VM dbatest3 has been paused due to storage I/O problem.
Jun 4, 2018, 7:02:40 AM
VM dbatest3 has recovered from paused back to up.
Some questions:
- I'm investigating with the users, but in case it is indeed this VM
causing the storage latency problems, what are my best chances to avoid it?
Should I change the disk profile for the disks of this particular VM? Or is
there anything I can do globally?
Or any setting on the storage domain itself?
What is best practice? Is there some default cap pre-defined on the storage
I/O rate a single VM can consume?
- How many days of event history are kept by default, and how can I see that
from the web admin GUI or by other means (one way I'm trying is sketched
below)? Can I change this default, and how?
The only possibly related parameter I see with the engine-config command is
EventProcessingPoolSize
(with the value of 10 at this time).
Any pointer to configuring the retention settings for the events' history?
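For pulling the events through the API instead of the web admin GUI, I use
roughly the following sketch with the Python SDK (the engine URL and
credentials are placeholders; the search string takes the same syntax as the
admin portal search bar). As for retention, the only other candidate I
noticed is AuditLogAgingThreshold in engine-config, but I haven't verified
that it is the right one:

    import ovirtsdk4 as sdk

    # Engine URL and credentials are placeholders.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )

    # Pull recent events; the search syntax matches the admin portal search bar.
    events_service = connection.system_service().events_service()
    for event in events_service.list(search='severity=warning', max=50):
        print(event.time, event.severity, event.description)

    connection.close()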
Thanks in advance,
Gianluca
6 years, 5 months
Re: ovirt-node: freshly installed node: network interfaces not visible
by Ales Musil
On Thu, May 31, 2018 at 8:56 AM, Etienne Charlier <
Etienne.Charlier(a)reduspaceservices.eu> wrote:
>
> Hello Ales,
>
> Here are the engine logs (I put the current one and the last two
> compressed ones).
>
> One more thing,
> The server doesn't stop bouncing between states (non operational,
> activating…). It's not possible to remove it (the remove button is always
> greyed out).
>
> Have a nice day !
> Etienne
>
>
Hi Etienne,
according to the log it is https://bugzilla.redhat.com/show_bug.cgi?id=1570388
which should be resolved in 4.2.3.
What is your engine version?
Regards,
Ales
> ------------------------------
> *From:* Ales Musil <amusil(a)redhat.com>
> *Sent:* Wednesday, May 30, 2018 12:50
>
> *To:* Etienne Charlier
> *Cc:* users
> *Subject:* Re: [ovirt-users] Re: ovirt-node: freshly installed node:
> network interfaces not visible
>
>
>
> On Wed, May 30, 2018 at 10:14 AM, Etienne Charlier <Etienne.Charlier@
> reduspaceservices.eu> wrote:
>
>> Hello, Thanks for the support
>>
>>
>> The logs were only sent to you, not to the list!
>>
>>
>> Kind Regards,
>>
>> Etienne
>>
>>
> Can you also please send the engine log?
>
> Regards,
> Ales
>
>
>>
>> ------------------------------
>> *From:* Ales Musil <amusil(a)redhat.com>
>> *Sent:* Wednesday, May 30, 2018 09:02
>> *To:* Etienne Charlier
>> *Cc:* users
>> *Subject:* Re: [ovirt-users] Re: ovirt-node: freshly installed node:
>> network interfaces not visible
>>
>>
>>
>> On Tue, May 29, 2018 at 8:40 AM, <etienne.charlier(a)reduspaceservices.eu>
>> wrote:
>>
>>> Hello Ales,
>>>
>>> Thanks for the answer!
>>>
>>> I tried multiple times to refresh capabilities... without success
>>>
>>> For the record, the tab named "Host Devices" is also empty
>>>
>>> Have a nice Day
>>> Etienne
>>>
>>
>> Can you please send us the vdsm and supervdsm log from the host?
>>
>>
>>
>>
>>>
>>
>>
>>
>
--
ALES MUSIL
INTERN - rhv network
Red Hat EMEA <https://www.redhat.com/>
amusil(a)redhat.com IM: amusil
<https://red.ht/sig>
6 years, 5 months
Microsoft Network Load Balancing
by Matthew Southwick
Any help appreciated.
I have tried any and all combinations of network filters, custom network settings on the vNIC, and custom "macspoof" settings on the VMs. I have read countless blogs and examples.
Could anyone please give me a simple list of items to check/change so that I can get Windows Network Load Balancing working between two VMs?
At present, I have a network vNIC profile set to "No Network Filter".
I have no custom properties on the virtual machines.
I have no Network Filter Parameters set on the virtual machine vNIC.
HOST:
Kernel Version:
3.10.0 - 862.3.2.el7.x86_64
KVM Version:
2.10.0 - 21.el7_5.3.1
LIBVIRT Version:
libvirt-3.9.0-14.el7_5.5
VDSM Version:
vdsm-4.20.27.1-1.el7.centos
ENGINE:
Software Version:4.2.3.7-1.el7
Rgds
Mat
6 years, 5 months
oVirt: single host install
by ovirt@fateknollogee.com
Use case: small sites with a minimum number of VMs.
Is there such a thing as a single host install?
Is it valid for production use?
What kind of storage?
6 years, 5 months