HA and poweroff of VM
by Gianluca Cecchi
Hello,
after setting a VM as HA I run poweroff from inside the VM.
Is it supposed to be automatically restarted in this scenario with oVirt
4.1?
In engine I see:
VM c7service is down. Exit message: User shut down from within the guest
Apparently no restart
How to test eventually HA functionality?
Tanks,
Gianluca
7 years, 9 months
What is the libcacard-ev.x86_64 ??
by Arman Khalatyan
I have a host which was updated since 3.6....
libcacard is disappeared from ovirt4.1 but it has a many dependencies which
stops to remove it:
[root@clei36 ~]# yum list installed | grep ovirt| grep 4.0
libcacard-ev.x86_64 10:2.3.0-31.el7.16.1
@ovirt-4.0
yum update libcacard-ev
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: ftp.fau.de
* epel: ftp.fau.de
* extras: ftp.fau.de
* ovirt-4.1: ftp.nluug.nl
* ovirt-4.1-epel: ftp.fau.de
* updates: mirror.eu.oneandone.net
No packages marked for update
yum remove libcacard-ev.x86_64
Loaded plugins: fastestmirror
Resolving Dependencies
--> Running transaction check
---> Package libcacard-ev.x86_64 10:2.3.0-31.el7.16.1 will be erased
--> Processing Dependency: libcacard.so.0()(64bit) for package:
10:qemu-kvm-ev-2.6.0-28.el7_3.3.1.x86_64
--> Running transaction check
---> Package qemu-kvm-ev.x86_64 10:2.6.0-28.el7_3.3.1 will be erased
--> Processing Dependency: qemu-kvm >= 1.5.3-92.el7 for package:
1:virt-v2v-1.32.7-3.el7.centos.2.x86_64
--> Processing Dependency: qemu-kvm for package:
libvirt-daemon-kvm-2.0.0-10.el7_3.4.x86_64
--> Processing Dependency: qemu-kvm-rhev >= 10:2.6.0-2 for package:
vdsm-4.19.4-1.el7.centos.x86_64
--> Running transaction check
---> Package libvirt-daemon-kvm.x86_64 0:2.0.0-10.el7_3.4 will be erased
--> Processing Dependency: libvirt-daemon-kvm >= 1.2.8-3 for package:
1:libguestfs-1.32.7-3.el7.centos.2.x86_64
---> Package vdsm.x86_64 0:4.19.4-1.el7.centos will be erased
--> Processing Dependency: vdsm = 4.19.4-1.el7.centos for package:
vdsm-hook-vmfex-dev-4.19.4-1.el7.centos.noarch
---> Package virt-v2v.x86_64 1:1.32.7-3.el7.centos.2 will be erased
--> Processing Dependency: virt-v2v for package: safelease-1.0-7.el7.x86_64
--> Running transaction check
---> Package libguestfs.x86_64 1:1.32.7-3.el7.centos.2 will be erased
--> Processing Dependency: libguestfs = 1:1.32.7-3.el7.centos.2 for
package: 1:libguestfs-tools-c-1.32.7-3.el7.centos.2.x86_64
--> Processing Dependency: libguestfs >= 1:1.28.1 for package:
libguestfs-winsupport-7.2-1.el7.x86_64
--> Processing Dependency: libguestfs.so.0()(64bit) for package:
1:libguestfs-tools-c-1.32.7-3.el7.centos.2.x86_64
---> Package safelease.x86_64 0:1.0-7.el7 will be erased
---> Package vdsm-hook-vmfex-dev.noarch 0:4.19.4-1.el7.centos will be erased
--> Running transaction check
---> Package libguestfs-tools-c.x86_64 1:1.32.7-3.el7.centos.2 will be
erased
---> Package libguestfs-winsupport.x86_64 0:7.2-1.el7 will be erased
--> Finished Dependency Resolution
Dependencies Resolved
==============================================================================================================================================================================================================================
Package
Arch
Version
Repository Size
==============================================================================================================================================================================================================================
Removing:
libcacard-ev
x86_64
10:2.3.0-31.el7.16.1
@ovirt-4.0 47 k
Removing for dependencies:
libguestfs
x86_64
1:1.32.7-3.el7.centos.2
@updates 3.8 M
libguestfs-tools-c
x86_64
1:1.32.7-3.el7.centos.2
@updates 14 M
libguestfs-winsupport
x86_64
7.2-1.el7
@base 2.2 M
libvirt-daemon-kvm
x86_64
2.0.0-10.el7_3.4
@updates 0.0
qemu-kvm-ev
x86_64
10:2.6.0-28.el7_3.3.1
@ovirt-4.1 9.6 M
safelease
x86_64
1.0-7.el7
@centos-ovirt41-candidate 43 k
vdsm
x86_64
4.19.4-1.el7.centos
@ovirt-4.1 2.5 M
vdsm-hook-vmfex-dev
noarch
4.19.4-1.el7.centos
@ovirt-4.1 21 k
virt-v2v
x86_64
1:1.32.7-3.el7.centos.2
@updates 16 M
Transaction Summary
==============================================================================================================================================================================================================================
Remove 1 Package (+9 Dependent packages)
Installed size: 49 M
Is this ok [y/N]: n
7 years, 9 months
Re: [ovirt-users] Importing existing (dirty) storage domains
by Doug Ingham
Some interesting output from the vdsm log...
2017-02-09 15:16:24,051 INFO (jsonrpc/1) [storage.StorageDomain] Resource
namespace 01_img_60455567-ad30-42e3-a9df-62fe86c7fd25 already registered
(sd:731)
2017-02-09 15:16:24,051 INFO (jsonrpc/1) [storage.StorageDomain] Resource
namespace 02_vol_60455567-ad30-42e3-a9df-62fe86c7fd25 already registered
(sd:740)
2017-02-09 15:16:24,052 INFO (jsonrpc/1) [storage.SANLock] Acquiring
Lease(name='SDM',
path=u'/rhev/data-center/mnt/glusterSD/localhost:data2/60455567-ad30-42e3-a9df-6
2fe86c7fd25/dom_md/leases', offset=1048576) for host id 1 (clusterlock:343)
2017-02-09 15:16:24,057 INFO (jsonrpc/1) [storage.SANLock] Releasing host
id for domain 60455567-ad30-42e3-a9df-62fe86c7fd25 (id: 1) (clusterlock:305)
2017-02-09 15:16:25,149 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call
GlusterHost.list succeeded in 0.17 seconds (__init__:515)
2017-02-09 15:16:25,264 INFO (Reactor thread)
[ProtocolDetector.AcceptorImpl] Accepted connection from ::ffff:
127.0.0.1:55060 (protocoldetector:72)
2017-02-09 15:16:25,270 INFO (Reactor thread) [ProtocolDetector.Detector]
Detected protocol stomp from ::ffff:127.0.0.1:55060 (protocoldetector:127)
2017-02-09 15:16:25,271 INFO (Reactor thread) [Broker.StompAdapter]
Processing CONNECT request (stompreactor:102)
2017-02-09 15:16:25,271 INFO (JsonRpc (StompReactor))
[Broker.StompAdapter] Subscribe command received (stompreactor:129)
2017-02-09 15:16:25,416 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call
Host.getHardwareInfo succeeded in 0.01 seconds (__init__:515)
2017-02-09 15:16:25,419 INFO (jsonrpc/6) [dispatcher] Run and protect:
repoStats(options=None) (logUtils:49)
2017-02-09 15:16:25,419 INFO (jsonrpc/6) [dispatcher] Run and protect:
repoStats, Return response: {u'e8d04da7-ad3d-4227-a45d-b5a29b2f43e5':
{'code': 0, 'actual': True
, 'version': 4, 'acquired': True, 'delay': '0.000854128', 'lastCheck':
'5.1', 'valid': True}, u'a77b8821-ff19-4d17-a3ce-a6c3a69436d5': {'code': 0,
'actual': True, 'vers
ion': 4, 'acquired': True, 'delay': '0.000966556', 'lastCheck': '2.6',
'valid': True}} (logUtils:52)
2017-02-09 15:16:25,447 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call
Host.getStats succeeded in 0.03 seconds (__init__:515)
2017-02-09 15:16:25,450 ERROR (JsonRpc (StompReactor)) [vds.dispatcher] SSL
error receiving from <yajsonrpc.betterAsyncore.Dispatcher connected
('::ffff:127.0.0.1', 55060, 0, 0) at 0x7f69c0043cf8>: unexpected eof
(betterAsyncore:113)
2017-02-09 15:16:25,812 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call
GlusterVolume.list succeeded in 0.10 seconds (__init__:515)
2017-02-09 15:16:25,940 INFO (Reactor thread)
[ProtocolDetector.AcceptorImpl] Accepted connection from ::ffff:
127.0.0.1:55062 (protocoldetector:72)
2017-02-09 15:16:25,946 INFO (Reactor thread) [ProtocolDetector.Detector]
Detected protocol stomp from ::ffff:127.0.0.1:55062 (protocoldetector:127)
2017-02-09 15:16:25,947 INFO (Reactor thread) [Broker.StompAdapter]
Processing CONNECT request (stompreactor:102)
2017-02-09 15:16:25,947 INFO (JsonRpc (StompReactor))
[Broker.StompAdapter] Subscribe command received (stompreactor:129)
2017-02-09 15:16:26,058 ERROR (jsonrpc/1) [storage.TaskManager.Task]
(Task='02cad901-5fe8-4f2d-895b-14184f67feab') Unexpected error (task:870)
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 877, in _run
return fn(*args, **kargs)
File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 50, in
wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 812, in
forcedDetachStorageDomain
self._deatchStorageDomainFromOldPools(sdUUID)
File "/usr/share/vdsm/storage/hsm.py", line 790, in
_deatchStorageDomainFromOldPools
dom.acquireClusterLock(host_id)
File "/usr/share/vdsm/storage/sd.py", line 810, in acquireClusterLock
self._manifest.acquireDomainLock(hostID)
File "/usr/share/vdsm/storage/sd.py", line 499, in acquireDomainLock
self._domainLock.acquire(hostID, self.getDomainLease())
File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line
362, in acquire
"Cannot acquire %s" % (lease,), str(e))
AcquireLockFailure: Cannot obtain lock:
u"id=60455567-ad30-42e3-a9df-62fe86c7fd25, rc=5, out=Cannot acquire
Lease(name='SDM',
path=u'/rhev/data-center/mnt/glusterSD/localhost:data2/60455567-ad30-42e3-a9df-62fe86c7fd25/dom_md/leases',
offset=1048576), err=(5, 'Sanlock resource not acquired', 'Input/output
error')"
2017-02-09 15:16:26,058 INFO (jsonrpc/1) [storage.TaskManager.Task]
(Task='02cad901-5fe8-4f2d-895b-14184f67feab') aborting: Task is aborted:
'Cannot obtain lock' - code 651 (task:1175)
2017-02-09 15:16:26,059 ERROR (jsonrpc/1) [storage.Dispatcher] {'status':
{'message': 'Cannot obtain lock: u"id=60455567-ad30-42e3-a9df-62fe86c7fd25,
rc=5, out=Cannot acquire Lease(name=\'SDM\',
path=u\'/rhev/data-center/mnt/glusterSD/localhost:data2/60455567-ad30-42e3-a9df-62fe86c7fd25/dom_md/leases\',
offset=1048576), err=(5, \'Sanlock resource not acquired\', \'Input/output
error\')"', 'code': 651}} (dispatcher:77)
2017-02-09 15:16:26,059 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call
StorageDomain.detach failed (error 651) in 23.04 seconds (__init__:515)
--
Doug
7 years, 9 months
questions on OVN
by Gianluca Cecchi
Hello,
I'm successfully testing and using OVN on a single host environment with
self hosted engine and 4.1
I'm using ip of ovirtmgmt for the host local ip used for OVN tunneling,
even if actually with a single host it is not so important...
ovirtmgmt bridge is on top ot lacp based bonding
Now I would like to apply it on another environment composed by 3 hosts.
Here ovirtmgmt bridge is on top of an active-backup bonding.
Are there any cons to put the local ip used for tunneling on ovirtmgmt ip?
Any problems/impacts on engine access/functionality using this ip for the
tunnel?
Is in genera OVN considered stable/fully usable in 4.1?
Thanks in advance,
Gianluca
7 years, 9 months
iscsi or iser targets are not detached from the host in the maintenance mode
by Arman Khalatyan
Hi,
In ovirt 4.1 when I put the host into maintenance mode then the nfs mounts
are unmounted as expected.
but the hosts are still logged into the targets.
Is it expected behavior?? If yes what is the use of it?
Another thing concerning to the permanently removed direct LUNs.
They are still in the /var/lib/iscsi/nodes and /var/lib/iscsi/send_targets/*
Would be good to cleanup the folders if users are removing permanently the
LUNs.
Thanks,
Arman.
7 years, 9 months
vdsm conf files sync
by Gianluca Cecchi
Hello,
in my 2 original 4.1 hosts I got some storage errors using rdbms machines
when restoring or doing hevy I/O.
My storage domain is FC SAN based.
I solved the problem setting this conservative settings into
/etc/vdsm/vdsm.conf.d
cat 50_thin_block_extension_rules.conf
[irs]
# Together with volume_utilization_chunk_mb, set the minimal free
# space before a thin provisioned block volume is extended. Use lower
# values to extend earlier.
volume_utilization_percent = 25
# Size of extension chunk in megabytes, and together with
# volume_utilization_percent, set the free space limit. Use higher
# values to extend in bigger chunks.
volume_utilization_chunk_mb = 4096
Then I added a third host in a second time and I wrongly supposed that an
equal vdsm configurtion would have been deployed with "New Host" from
gui....
But is not so.
Yesterday with a VM running on this third hypervisor I got the previous
experimented messages; some cycles of these
VM dbatest6 has recovered from paused back to up.
VM dbatest6 has been paused due to no Storage space error.
VM dbatest6 has been paused.
in a 2 hours period.
Two questions:
- why not align hypervisor configuration when adding host and in particular
the vdsm one? Any reason in general for having different config in hosts of
the same cluster?
- the host that was running the VM was not the SPM.
Who is in charge of applying the settings about volume extension when a VM
I/O load requires it because of a thin provisioned disk in use?
I presume not the SPM but the host that has in charge the VM, based on what
I saw yesterday...
Thanks,
Gianluca
7 years, 9 months
Requirements to use ovirt-image-repository
by Gianluca Cecchi
Hello,
is it sufficient to open outside glance port (9292) of glance.ovirt.org or
is there anything else to do?
Making a test from an environment without outside restrictions it seems
that is only the engine that connects, correct?
Thanks,
Gianluca
7 years, 9 months
Disaster Recovery Testing
by Gary Lloyd
Hi
We currently use direct lun for our virtual machines and I would like to
move away from doing this and move onto storage domains.
At the moment we are using an ISCSI SAN and we use on replicas created on
the SAN for disaster recovery.
As a test I thought I would replicate an existing storage domain's volume
(via the SAN) and try to mount again as a separate storage domain (This is
with ovirt 4.06 (cluster mode 3.6))
I can log into the iscsi disk but then nothing gets listed under Storage
Name / Storage ID (VG Name)
Should this be possible or will it not work due the the uids being
identical ?
Many Thanks
*Gary Lloyd*
________________________________________________
I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063 <%2B44%201782%20733073>
________________________________________________
7 years, 9 months
Exporting VM as ova
by Benjamin Alfery
Hi,
I'm using ovirt 4.0 and was wondering if it is possible to export an
existing VM to ova format (via the gui). If it's not possible via GUI,
is that possible at all?
Best Regards
Benjamin
7 years, 9 months