oVirt 3.6 to 4.2 migration
by techieim
Hello Team oVirt,
We are running oVirt 3.8 in our prod, and we want to upgrade from 3.8 to
4.2.
At present we have 2 physical hosts (hyp A & hyp B) running oVirt 3.8 with
a self-hosted engine, a data domain and an ISO domain, in one cluster and
one data center.
We were thinking of doing this migration in the following fashion.
Plan A.
1. Attach an export domain to the present infra.
2. Take a backup of all running VMs by exporting them.
3. Detach the export domain.
4. Detach hyp B from the cluster and data center.
5. Install fresh oVirt 4.2 ISO on hyp B.
6. Install the self-hosted engine.
7. Attach the data domain.
8. Attach the export domain which was used in oVirt 3.8.
9. Import all VMs from there to the new oVirt 4.2 (see the SDK sketch after plan B).
10. Once all the VMs are up and running, power off hyp A.
11. Install fresh oVirt 4.2 ISO on hyp A and attach it to the cluster.
Plan B.
1. Attach an export domain to the present infra.
2. Take a backup of all running VMs by exporting them.
3. Detach the export domain.
4. Detach hyp B from the cluster and data center.
5. Install fresh oVirt 4.2 ISO on hyp B.
6. Install the self-hosted engine.
7. Attach the data domain.
8. Export the VMs directly from oVirt 3.8 to oVirt 4.2.
9. Once all the VMs are up and running, power off hyp A.
10. Install fresh oVirt 4.2 ISO on hyp A and attach it to the cluster.
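For step 9 of plan A we may also script the import with the Python SDK
against the new 4.2 engine. A rough, untested sketch (URL, credentials,
cluster and domain names below are placeholders, and we are assuming the
ovirtsdk4 package and its import_ action; please correct us if the call
is different):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the new 4.2 engine (placeholder URL and credentials).
connection = sdk.Connection(
    url='https://engine42.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
sds_service = connection.system_service().storage_domains_service()

# Find the export domain that was attached in step 8 (placeholder name).
export_sd = sds_service.list(search='name=export1')[0]
export_vms = sds_service.storage_domain_service(export_sd.id).vms_service()

# Import every VM found on the export domain into the new cluster.
for vm in export_vms.list():
    export_vms.vm_service(vm.id).import_(
        cluster=types.Cluster(name='Default'),
        storage_domain=types.StorageDomain(name='data1'),
    )

connection.close()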
Thanks a ton in advance. We may be wrong with the above steps; please help
us get this migration right.
Regards
Techieim
6 years, 5 months
Can't create iSCSI data domain
by tehnic@take3.ro
Hello all,
I am trying to create a new data domain on iSCSI storage, following this documentation:
https://www.ovirt.org/documentation/admin-guide/chap-Storage/
Discovery of the iSCSI target works, but after logging in to the target no LUNs are found or displayed, so I can't finish the procedure.
On the SPM I see an iSCSI session, and LUN 1 is attached as /dev/sdc.
The iSCSI system is a FreeNAS installation, and I'm running oVirt 4.2.
Any suggestions on how to solve this problem?
Regards, Robert
6 years, 5 months
OVF from vm.initialization.configuration.data
by Carlos Rodrigues
Hello,
I'm working on the development of a script to back up and restore oVirt
VMs.
I saw some examples from the SDK (https://github.com/oVirt/ovirt-engine-sdk/tree/master/sdk/examples)
for backing up and restoring VMs, but to back up the VM's OVF I'm using
vm.initialization.configuration.data, and this OVF keeps the snapshot
information and has no disk ID references.
I would like to know if there is some method to get the OVF with the
disk IDs.
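For context, I already know how to collect the disk IDs separately; a
minimal sketch of that (assuming an ovirtsdk4 connection 'conn' and the
VM's id in 'vm_id', both placeholders):

vm_service = conn.system_service().vms_service().vm_service(vm_id)
# Each disk attachment carries the id of the disk it attaches.
disk_ids = [att.disk.id for att in vm_service.disk_attachments_service().list()]
print(disk_ids)

But I'd like the IDs inside the OVF itself, so that a restore can map
disks back without this extra lookup.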
Regards,
--
Carlos Rodrigues
Engenheiro de Software Sénior
Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110
6 years, 5 months
Failed to Activate Host
by carl langlois
Hi
Today I had planned to start upgrading my installation to the latest 4.2. As I
was already on a 4.2.x version I figured it would be easy... well, guess
what, I am having trouble re-activating one of the freshly upgraded hosts.
So here is the summary.
My setup has 3 hosts that are also part of a Gluster pool. Those are the
original hosts that were used to set up the HA engine with GlusterFS. One of
the 3 hosts is only used as arbiter and can also run the engine, but
nothing else can run on it because it is a fairly small machine.
I also have 6 other hosts for running VMs that are not part of the Gluster
pool.
So the first thing I did to upgrade was to set global maintenance, update
the engine, restart the engine and remove global maintenance... that went
without a glitch.
Next I chose the arbiter as the first machine to upgrade, then set the
machine to maintenance, ran the upgrade and rebooted the host. That also
went without a glitch.
Now here is the glitchy part. When I try to reactivate the host I get a
strange error from a gluster peer command that failed on a host that is not
in the gluster peer list.
Here are the two errors I had.
*"Gluster command [gluster peer probe ovhost3] failed on server
sfmo5002ov74."*
*"Add Host failed error: ovhost3 is either already part of another cluster
of having volumes configured return code: 3"*
One thing I do not understand is that sfmo5002ov74 is not part of the
Gluster pool. It does not provide any bricks to the system.
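For what it's worth, running gluster peer status on any of the three
Gluster hosts should confirm the pool membership, and as far as I can
tell sfmo5002ov74 does not show up there.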
Now I am not sure what to look for. I have stopped the upgrade process until I
figure out the issue.
Any hints would be appreciated.
Thanks
Carl
6 years, 5 months
More USB device redirection
by yunsur.shi@gmail.com
We need to redirect more USB devices (there are 8), but oVirt supports only 6.
How should I configure this?
In /etc/engine-config/engine-config.properties there is:
NumberOfUSBSlots.validValues=0..6
I modified this 0..6 to 0..8.
engine-config -s NumberOfUSBSlots=8
engine-config -g NumberOfUSBSlots
NumberOfUSBSlots: 8 version: general
But it does not take effect.
What should I do?
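One thing I am not sure about: does engine-config require restarting the
engine service (systemctl restart ovirt-engine) and restarting the VM
before the new value takes effect?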
Thanks
6 years, 5 months
oVirt 4.2: MountError on POSIX compliant FS
by jtorres@bsc.es
Hi all,
I've deployed oVirt version 4.2.4.5-1.el7 on a small cluster and I'm trying to use a configured Spectrum Scale (GPFS) distributed filesystem as a data storage domain, to test it.
I've completed the configuration of the storage, and the defined filesystems are correctly mounted on the client hosts, as we can see:
gpfs_kvm gpfs 233T 288M 233T 1% /gpfs/kvm
gpfs_fast gpfs 8.8T 5.2G 8.8T 1% /gpfs/fast
The output of the mount command is:
gpfs_kvm on /gpfs/kvm type gpfs (rw,relatime)
gpfs_fast on /gpfs/fast type gpfs (rw,relatime)
I can read and write to the mounted filesystem correctly, but when I try to add the locally mounted filesystem as a storage domain I encounter some errors related to the mounting process of the storage.
The parameters passed to add new storage domain are:
Data Center: Default (V4)
Name: gpfs_kvm
Description: VM data on GPFS
Domain Function: Data
Storage Type: Posix Compliant FS
Host to Use: kvm1c01
Path: /gpfs/kvm
VFS Type: gpfs
Mount Options: rw, relatime
The oVirt error that I obtain is:
Error while executing action Add Storage Connection: Problem while trying to mount target
In /var/log/vdsm/vdsm.log I can see:
2018-07-05 09:07:43,400+0200 INFO (jsonrpc/0) [vdsm.api] START connectStorageServer(domType=6, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'mnt_options': u'rw,relatime', u'id': u'00000000-0000-0000-0000-000000000000', u'connection': u'/gpfs/kvm', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'gpfs', u'password': '********', u'port': u''}], options=None) from=::ffff:10.2.1.254,43908, flow_id=498668a0-a240-469f-ac33-f8c7bdeb481f, task_id=857460ed-5f0e-4bc6-ba54-e8f2c72e9ac2 (api:46)
2018-07-05 09:07:43,403+0200 INFO (jsonrpc/0) [storage.StorageServer.MountConnection] Creating directory u'/rhev/data-center/mnt/_gpfs_kvm' (storageServer:167)
2018-07-05 09:07:43,404+0200 INFO (jsonrpc/0) [storage.fileUtils] Creating directory: /rhev/data-center/mnt/_gpfs_kvm mode: None (fileUtils:197)
2018-07-05 09:07:43,404+0200 INFO (jsonrpc/0) [storage.Mount] mounting /gpfs/kvm at /rhev/data-center/mnt/_gpfs_kvm (mount:204)
MountError: (32, ';mount: wrong fs type, bad option, bad superblock on /gpfs/kvm,\n missing codepage or helper program, or other error\n\n In some cases useful info is found in syslog - try\n dmesg | tail or so.\n')
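If I read the log correctly, what VDSM attempts there is roughly the
equivalent of running mount -t gpfs -o rw,relatime /gpfs/kvm
/rhev/data-center/mnt/_gpfs_kvm by hand, i.e. it passes the path
/gpfs/kvm as the device to mount, which may be what the gpfs mount
helper rejects.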
I know that in previous versions of the GPFS implementation they removed the device under /dev, due to incompatibilities with systemd. I don't know if this change affects the oVirt mounting process.
Can you help me add this filesystem to the oVirt environment?
Are the parameters that I used above OK? Or do I need to make some modifications?
Is it possible that the process fails because I don't have a device related to the GPFS filesystems under /dev?
Can we apply some kind of workaround to mount the filesystem manually for the oVirt environment? E.g. create the directory /rhev/data-center/mnt/_gpfs_kvm manually and then mount /gpfs/kvm over it?
Is it possible to modify the code to bypass some checks, or something like that?
Reading the documentation available on the Internet, I found that oVirt is compatible with this filesystem (GPFS) because it is POSIX compliant; this is a main reason for testing it in our cluster.
Does it remain compatible in the current versions? Or maybe there are changes that broke this integration?
Many thanks to all of you in advance!
Kind regards!
6 years, 5 months
Re: HE + Gluster : Engine corrupted?
by Krutika Dhananjay
Hi,
So it seems some of the files in the volume have mismatching gfids. I see
the following logs from 15th June, ~8pm EDT:
<snip>
...
...
[2018-06-16 04:00:10.264690] E [MSGID: 108008]
[afr-self-heal-common.c:335:afr_gfid_split_brain_source]
0-engine-replicate-0: Gfid mismatch detected for
<gfid:941edf0c-d363-488e-a333-d12320f96480>/hosted-engine.lockspace>,
6bbe6097-8520-4a61-971e-6e30c2ee0abe on engine-client-2 and
ef21a706-41cf-4519-8659-87ecde4bbfbf on engine-client-0.
[2018-06-16 04:00:10.265861] W [fuse-bridge.c:540:fuse_entry_cbk]
0-glusterfs-fuse: 4411: LOOKUP()
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace =>
-1 (Input/output error)
[2018-06-16 04:00:11.522600] E [MSGID: 108008]
[afr-self-heal-common.c:212:afr_gfid_split_brain_source]
0-engine-replicate-0: All the bricks should be up to resolve the gfid split
barin
[2018-06-16 04:00:11.522632] E [MSGID: 108008]
[afr-self-heal-common.c:335:afr_gfid_split_brain_source]
0-engine-replicate-0: Gfid mismatch detected for
<gfid:941edf0c-d363-488e-a333-d12320f96480>/hosted-engine.lockspace>,
6bbe6097-8520-4a61-971e-6e30c2ee0abe on engine-client-2 and
ef21a706-41cf-4519-8659-87ecde4bbfbf on engine-client-0.
[2018-06-16 04:00:11.523750] W [fuse-bridge.c:540:fuse_entry_cbk]
0-glusterfs-fuse: 4493: LOOKUP()
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace =>
-1 (Input/output error)
[2018-06-16 04:00:12.864393] E [MSGID: 108008]
[afr-self-heal-common.c:212:afr_gfid_split_brain_source]
0-engine-replicate-0: All the bricks should be up to resolve the gfid split
barin
[2018-06-16 04:00:12.864426] E [MSGID: 108008]
[afr-self-heal-common.c:335:afr_gfid_split_brain_source]
0-engine-replicate-0: Gfid mismatch detected for
<gfid:941edf0c-d363-488e-a333-d12320f96480>/hosted-engine.lockspace>,
6bbe6097-8520-4a61-971e-6e30c2ee0abe on engine-client-2 and
ef21a706-41cf-4519-8659-87ecde4bbfbf on engine-client-0.
[2018-06-16 04:00:12.865392] W [fuse-bridge.c:540:fuse_entry_cbk]
0-glusterfs-fuse: 4575: LOOKUP()
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace =>
-1 (Input/output error)
[2018-06-16 04:00:18.716007] W [fuse-bridge.c:540:fuse_entry_cbk]
0-glusterfs-fuse: 4657: LOOKUP()
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace =>
-1 (Input/output error)
[2018-06-16 04:00:20.553365] W [fuse-bridge.c:540:fuse_entry_cbk]
0-glusterfs-fuse: 4739: LOOKUP()
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace =>
-1 (Input/output error)
[2018-06-16 04:00:21.771698] W [fuse-bridge.c:540:fuse_entry_cbk]
0-glusterfs-fuse: 4821: LOOKUP()
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace =>
-1 (Input/output error)
[2018-06-16 04:00:23.871647] W [fuse-bridge.c:540:fuse_entry_cbk]
0-glusterfs-fuse: 4906: LOOKUP()
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace =>
-1 (Input/output error)
[2018-06-16 04:00:25.034780] W [fuse-bridge.c:540:fuse_entry_cbk]
0-glusterfs-fuse: 4987: LOOKUP()
/c65e03f0-d553-4d5d-ba4f-9d378c153b9b/ha_agent/hosted-engine.lockspace =>
-1 (Input/output error)
...
...
</snip>
Adding Ravi, who works on the replicate component, to help resolve the mismatches.
-Krutika
On Mon, Jul 2, 2018 at 12:27 PM, Krutika Dhananjay <kdhananj(a)redhat.com>
wrote:
> Hi,
>
> Sorry, I was out sick on Friday. I am looking into the logs. Will get back
> to you in some time.
>
> -Krutika
>
> On Fri, Jun 29, 2018 at 7:47 PM, Hanson Turner <hanson(a)andrewswireless.net
> > wrote:
>
>> Hi Krutika,
>>
>> Did you need any other logs?
>>
>>
>> Thanks,
>>
>> Hanson
>>
>> On 06/27/2018 02:04 PM, Hanson Turner wrote:
>>
>> Hi Krutika,
>>
>> Looking at the email spams, it looks like it started at 8:04PM EDT on Jun
>> 15 2018.
>>
>> From my memory, I think the cluster was working fine until sometime that
>> night. Somewhere between midnight and the next (Saturday) morning, the
>> engine crashed and all vm's stopped.
>>
>> I do have nightly backups that ran every night, using the engine-backup
>> command. Looks like my last valid backup was 2018-06-15.
>>
>> I've included all logs I think might be of use. Please forgive the use of
>> 7zip, as the raw logs took 50mb which is greater than my attachment limit.
>>
>> I think the just of what happened, is we had a downed node for a period
>> of time. Earlier that day, the node was brought back into service. Later
>> that night or early the next morning, the engine was gone and hopping from
>> node to node.
>>
>> I have tried to mount the engine's hdd file to see if I could fix it.
>> There are a few corrupted partitions, and those are xfs formatted. Trying
>> to mount gives me issues about needing repaired, trying to repair gives me
>> issues about needing something cleaned first. I cannot remember exactly
>> what it was, but it wanted me to run a command that ended -L to clear out
>> the logs. I said no way and have left the engine vm in a powered down
>> state, as well as the cluster in global maintenance.
>>
>> I can see no sign of the vm booting, (ie no networking) except for what
>> I've described earlier in the VNC session.
>>
>>
>> Thanks,
>>
>> Hanson
>>
>>
>>
>> On 06/27/2018 12:04 PM, Krutika Dhananjay wrote:
>>
>> Yeah, complete logs would help. Also let me know when you saw this issue
>> - data and approx time (do specify the timezone as well).
>>
>> -Krutika
>>
>> On Wed, Jun 27, 2018 at 7:00 PM, Hanson Turner <
>> hanson(a)andrewswireless.net> wrote:
>>
>>> #more rhev-data-center-mnt-glusterSD-ovirtnode1.abcxyzdomains.net\
>>> :_engine.log
>>> [2018-06-24 07:39:12.161323] I [glusterfsd-mgmt.c:1888:mgmt_getspec_cbk]
>>> 0-glusterfs: No change in volfile,continuing
>>>
>>> # more gluster_bricks-engine-engine.log
>>> [2018-06-24 07:39:14.194222] I [glusterfsd-mgmt.c:1888:mgmt_getspec_cbk]
>>> 0-glusterfs: No change in volfile,continuing
>>> [2018-06-24 19:58:28.608469] E [MSGID: 101063]
>>> [event-epoll.c:551:event_dispatch_epoll_handler] 0-epoll: stale fd
>>> found on idx=12, gen=1, events=1, slot->gen=3
>>> [2018-06-25 14:24:19.716822] I [addr.c:55:compare_addr_and_update]
>>> 0-/gluster_bricks/engine/engine: allowed = "*", received addr =
>>> "192.168.0.57"
>>> [2018-06-25 14:24:19.716868] I [MSGID: 115029]
>>> [server-handshake.c:793:server_setvolume] 0-engine-server: accepted
>>> client from CTX_ID:79b9d5b7-0bbb-4d67-87cf-11e27dfb6c1d-GRAPH_ID:0-PID:9
>>> 901-HOST:sp3Kali-PC_NAME:engine-client-0-RECON_NO:-0 (version: 4.0.2)
>>> [2018-06-25 14:45:35.061350] I [MSGID: 115036]
>>> [server.c:527:server_rpc_notify] 0-engine-server: disconnecting
>>> connection from CTX_ID:79b9d5b7-0bbb-4d67-87cf
>>> -11e27dfb6c1d-GRAPH_ID:0-PID:9901-HOST:sp3Kali-PC_NAME:engin
>>> e-client-0-RECON_NO:-0
>>> [2018-06-25 14:45:35.061415] I [MSGID: 115013]
>>> [server-helpers.c:289:do_fd_cleanup] 0-engine-server: fd cleanup on
>>> /c65e03f0-d553-4d5d-ba4f-9d378c153b9b/images/82cde976-0650-4
>>> db9-9487-e2b52ffe25ee/e53806d9-3de5-4b26-aadc-157d745a9e0a
>>> [2018-06-25 14:45:35.062290] I [MSGID: 101055]
>>> [client_t.c:443:gf_client_unref] 0-engine-server: Shutting down
>>> connection CTX_ID:79b9d5b7-0bbb-4d67-87cf-11e27dfb6c1d-GRAPH_ID:0-PID:9
>>> 901-HOST:sp3Kali-PC_NAME:engine-client-0-RECON_NO:-0
>>> [2018-06-25 14:46:34.284195] I [MSGID: 115036]
>>> [server.c:527:server_rpc_notify] 0-engine-server: disconnecting
>>> connection from CTX_ID:13e88614-31e8-4618-9f7f
>>> -067750f5971e-GRAPH_ID:0-PID:2615-HOST:workbench-PC_NAME:eng
>>> ine-client-0-RECON_NO:-0
>>> [2018-06-25 14:46:34.284546] I [MSGID: 101055]
>>> [client_t.c:443:gf_client_unref] 0-engine-server: Shutting down
>>> connection CTX_ID:13e88614-31e8-4618-9f7f-067750f5971e-GRAPH_ID:0-PID:2
>>> 615-HOST:workbench-PC_NAME:engine-client-0-RECON_NO:-0
>>>
>>>
>>> # gluster volume info engine
>>>
>>> Volume Name: engine
>>> Type: Replicate
>>> Volume ID: c8dc1b04-bc25-4e97-81bb-4d94929918b1
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirtnode1.core.abcxyzdomains.net:/gluster_bricks/engine/engine
>>> Brick2: ovirtnode3.core.abcxyzdomains.net:/gluster_bricks/engine/engine
>>> Brick3: ovirtnode4.core.abcxyzdomains.net:/gluster_bricks/engine/engine
>>> Options Reconfigured:
>>> performance.strict-write-ordering: off
>>> server.event-threads: 4
>>> client.event-threads: 4
>>> features.shard-block-size: 512MB
>>> cluster.granular-entry-heal: enable
>>> performance.strict-o-direct: off
>>> network.ping-timeout: 30
>>> storage.owner-gid: 36
>>> storage.owner-uid: 36
>>> user.cifs: off
>>> features.shard: on
>>> cluster.shd-wait-qlength: 10000
>>> cluster.shd-max-threads: 8
>>> cluster.locking-scheme: granular
>>> cluster.data-self-heal-algorithm: full
>>> cluster.server-quorum-type: server
>>> cluster.quorum-type: auto
>>> cluster.eager-lock: enable
>>> network.remote-dio: off
>>> performance.low-prio-threads: 32
>>> performance.io-cache: off
>>> performance.read-ahead: off
>>> performance.quick-read: off
>>> transport.address-family: inet
>>> nfs.disable: on
>>> performance.client-io-threads: off
>>>
>>> # gluster --version
>>> glusterfs 3.12.9
>>> Repository revision: git://git.gluster.org/glusterfs.git
>>> Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
>>> <https://www.gluster.org/>
>>> GlusterFS comes with ABSOLUTELY NO WARRANTY.
>>> It is licensed to you under your choice of the GNU Lesser
>>> General Public License, version 3 or any later version (LGPLv3
>>> or later), or the GNU General Public License, version 2 (GPLv2),
>>> in all cases as published by the Free Software Foundation.
>>>
>>> Let me know if you want log further back, I can attach and send directly
>>> to you.
>>>
>>> Thanks,
>>>
>>> Hanson
>>>
>>>
>>>
>>> On 06/26/2018 12:30 AM, Krutika Dhananjay wrote:
>>>
>>> Could you share the gluster mount and brick logs? You'll find them
>>> under /var/log/glusterfs.
>>> Also, what's the version of gluster you're using?
>>> Also, output of `gluster volume info <ENGINE_VOLNAME>`?
>>>
>>> -Krutika
>>>
>>> On Thu, Jun 21, 2018 at 9:50 AM, Sahina Bose <sabose(a)redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On Wed, Jun 20, 2018 at 11:33 PM, Hanson Turner <
>>>> hanson(a)andrewswireless.net> wrote:
>>>>
>>>>> Hi Benny,
>>>>>
>>>>> Who should I be reaching out to for help with a gluster based hosted
>>>>> engine corruption?
>>>>>
>>>>
>>>>
>>>> Krutika, could you help?
>>>>
>>>>
>>>>>
>>>>> --== Host 1 status ==--
>>>>>
>>>>> conf_on_shared_storage : True
>>>>> Status up-to-date : True
>>>>> Hostname : ovirtnode1.abcxyzdomains.net
>>>>> Host ID : 1
>>>>> Engine status : {"reason": "failed liveliness
>>>>> check", "health": "bad", "vm": "up", "detail": "Up"}
>>>>> Score : 3400
>>>>> stopped : False
>>>>> Local maintenance : False
>>>>> crc32 : 92254a68
>>>>> local_conf_timestamp : 115910
>>>>> Host timestamp : 115910
>>>>> Extra metadata (valid at timestamp):
>>>>> metadata_parse_version=1
>>>>> metadata_feature_version=1
>>>>> timestamp=115910 (Mon Jun 18 09:43:20 2018)
>>>>> host-id=1
>>>>> score=3400
>>>>> vm_conf_refresh_time=115910 (Mon Jun 18 09:43:20 2018)
>>>>> conf_on_shared_storage=True
>>>>> maintenance=False
>>>>> state=GlobalMaintenance
>>>>> stopped=False
>>>>>
>>>>>
>>>>> My when I VNC into my HE, All I get is:
>>>>> Probing EDD (edd=off to disable)... ok
>>>>>
>>>>>
>>>>> So, that's why it's failing the liveliness check... I cannot get the
>>>>> screen on HE to change short of ctl-alt-del which will reboot the HE.
>>>>> I do have backups for the HE that are/were run on a nightly basis.
>>>>>
>>>>> If the cluster was left alone, the HE vm would bounce from machine to
>>>>> machine trying to boot. This is why the cluster is in maintenance mode.
>>>>> One of the nodes was down for a period of time and brought back,
>>>>> sometime through the night, which is when the automated backup kicks, the
>>>>> HE started bouncing around. Got nearly 1000 emails.
>>>>>
>>>>> This seems to be the same error (but may not be the same cause) as
>>>>> listed here:
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1569827
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Hanson
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list -- users(a)ovirt.org
>>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>> oVirt Code of Conduct: https://www.ovirt.org/communit
>>>>> y/about/community-guidelines/
>>>>> List Archives: https://lists.ovirt.org/archiv
>>>>> es/list/users(a)ovirt.org/message/3NLA2URX3KN44FGFUVV4N5EJBPICABHH/
>>>>>
>>>>>
>>>>
>>>
>>>
>>
>>
>>
>
6 years, 5 months
oVirt system crash when starting VM with Quadro GPU
by Wesley Stewart
I have recently been using an old Radeon R9 270X that was lying around, without
issue, until this card seems to have kicked the bucket.
I had an Nvidia Quadro P2000 lying around that I thought I would try. After
installing it into my system, it showed up in the PCI devices as
anticipated. I passed both devices through to my Windows 10 guest.
It shows up as a GP106GL and a separate audio component, the same thing
that happened with the AMD Radeon card. The events just show the VM
being terminated without giving a reason, and I am hoping for some
guidance.
I'm currently using a single-node setup, running oVirt 4.2.3. Any ideas or
places to check would be much appreciated!
6 years, 5 months
oVirt upgrade 4.1 to 4.2 fails
by Альбов Леонид
Hello!
After upgrading the engine from 4.1 to 4.2, all hosts are stuck in status Activating or Non Operational.
Error Message:
VDSM cluster-nodeXX command GetCapabilitiesAsyncVDS failed: General SSLEngine problem
Need help!
(Centos 7)
engine.log
2018-07-18 17:37:46,903+03 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to cluster-node11.office.vliga/10.252.252.211
2018-07-18 17:37:46,905+03 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] Unable to process messages General SSLEngine problem
2018-07-18 17:37:46,906+03 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-5) [] Unable to RefreshCapabilities: VDSNetworkException: VDSGenericException: VDSNetworkException: General SSLEngine problem
2018-07-18 17:37:47,744+03 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to cluster-node10.office.vliga/10.252.252.210
2018-07-18 17:37:47,747+03 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] Unable to process messages General SSLEngine problem
2018-07-18 17:37:47,747+03 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-95) [] Unable to RefreshCapabilities: VDSNetworkException: VDSGenericException: VDSNetworkException: General SSLEngine problem
2018-07-18 17:37:52,814+03 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to cluster-node6.office.vliga/10.252.252.206
2018-07-18 17:37:52,816+03 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] Unable to process messages General SSLEngine problem
2018-07-18 17:37:52,816+03 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-55) [] Unable to RefreshCapabilities: VDSNetworkException: VDSGenericException: VDSNetworkException: General SSLEngine problem
2018-07-18 17:37:59,513+03 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to cluster-node6.office.vliga/10.252.252.206
2018-07-18 17:37:59,514+03 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] Unable to process messages General SSLEngine problem
2018-07-18 17:37:59,517+03 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to cluster-node10.office.vliga/10.252.252.210
2018-07-18 17:37:59,519+03 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] Unable to process messages General SSLEngine problem
--
Леонид Альбов
6 years, 5 months
Python-SDK4: Check snapshot deletion result?
by nicolas@devels.es
Hi,
We're using ovirt-engine-sdk-python 4.1.6 on oVirt 4.1.9. Currently
we're trying to delete some snapshots via a script like this:
sys_serv = conn.system_service()
vms_service = sys_serv.vms_service()
vm_service = vms_service.vm_service(vmid)
snaps_service = vm_service.snapshots_service()
snaps_service.service('SNAPSHOT-ID').remove()
This works, mostly... however, sometimes the deletion fails:
Failed to delete snapshot 'snapshot name' for VM 'vm'.
Is it currently possible to know via the Python SDK that the deletion
actually failed? I know I can check the state of a snapshot, but I'd
like to check the result of the task. Is that possible somehow?
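The best workaround I have come up with so far is polling the snapshot
after remove(); a rough sketch (reusing snaps_service from the script
above, and assuming ovirtsdk4 raises NotFoundError once the snapshot is
gone):

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

snap_service = snaps_service.snapshot_service('SNAPSHOT-ID')
snap_service.remove()
while True:
    time.sleep(5)
    try:
        snap = snap_service.get()
    except sdk.NotFoundError:
        # The snapshot is gone, so the deletion succeeded.
        print('deleted')
        break
    if snap.snapshot_status == types.SnapshotStatus.OK:
        # The snapshot left the LOCKED state but still exists,
        # so the deletion most likely failed.
        print('deletion failed')
        break

Is there a cleaner way, e.g. getting the result of the engine task
directly?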
Thanks.
6 years, 5 months