Add LDAP user : ERROR: null value in column "external_id" violates not-null constraint
by lucaslamy87@yahoo.fr
Hi,
I have previously configured LDAP through ovirt-engine-extension-aaa-ldap-setup.
The only working configuration was IBM Security Directory Server (the IBM Security Directory Server RFC-2307 Schema doesn't work), using ldaps and an anonymous search user.
With this one, the search and login work fine when I test them with ovirt-engine-extensions-tool aaa.
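For reference, this is roughly how I tested (the profile and user names below are placeholders for my real ones, and the flags are from memory, so they are worth double-checking against the tool's help):
  # search for the user through the authz extension
  ovirt-engine-extensions-tool aaa search --extension-name=myldap-authz --entity=principal --entity-name=testuser
  # test an actual login through the profile
  ovirt-engine-extensions-tool aaa login-user --profile=myldap --user-name=testuser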
But when I try to add an LDAP user in the User Administration Panel, I get this error message: "Error while executing action AddUser: Internal Engine Error"
None of the solutions I've found in previous threads seems to work.
Does someone have an idea, please?
Please find the logs attached.
Thank you in advance.
Caused by: org.postgresql.util.PSQLException: ERROR: null value in column "external_id" violates not-null constraint
Detail: Failing row contains (**user info**).
Where: SQL statement "INSERT INTO users (
department,
domain,
email,
name,
note,
surname,
user_id,
username,
external_id,
namespace
)
VALUES (
v_department,
v_domain,
v_email,
v_name,
v_note,
v_surname,
v_user_id,
v_username,
v_external_id,
v_namespace
)"
PL/pgSQL function insertuser(character varying,character varying,character varying,character varying,character varying,character varying,uuid,character varying,text,character varying) line 3 at SQL state$
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2178)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:155)
at org.postgresql.jdbc.PgCallableStatement.executeWithFlags(PgCallableStatement.java:78)
at org.postgresql.jdbc.PgPreparedStatement.execute(PgPreparedStatement.java:144)
at org.jboss.jca.adapters.jdbc.CachedPreparedStatement.execute(CachedPreparedStatement.java:303)
at org.jboss.jca.adapters.jdbc.WrappedPreparedStatement.execute(WrappedPreparedStatement.java:442)
at org.springframework.jdbc.core.JdbcTemplate.lambda$call$4(JdbcTemplate.java:1105) [spring-jdbc.jar:5.0.4.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:1050) [spring-jdbc.jar:5.0.4.RELEASE]
... 162 more
2020-02-15 10:16:53,337+01 ERROR [org.ovirt.engine.core.bll.aaa.AddUserCommand] (default task-4) [222f7ca7-b669-40e0-b152-2ca898ebde09] Transaction rolled-back for command 'org.ovirt.engine.core.bll.aaa.$
2020-02-15 10:16:53,341+01 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-4) [222f7ca7-b669-40e0-b152-2ca898ebde09] EVENT_ID: USER_FAILED_ADD_ADUSER(327), Fail, Failed to add User 'user' to the system.
hosted-engine --deploy fails after "Wait for the host to be up" task
by Fredy Sanchez
*Hi all,*
*[root@bric-ovirt-1 ~]# cat /etc/*release**
CentOS Linux release 7.7.1908 (Core)
*[root@bric-ovirt-1 ~]# yum info ovirt-engine-appliance*
Installed Packages
Name : ovirt-engine-appliance
Arch : x86_64
Version : 4.3
Release : 20191121.1.el7
Size : 1.0 G
Repo : installed
From repo : ovirt-4.3
*Same situation as https://bugzilla.redhat.com/show_bug.cgi?id=1787267. The error
message almost everywhere is a red-herring message about ansible.*
[ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts":
[]}, "attempts": 120, "changed": false, "deprecations": [{"msg": "The
'ovirt_host_facts' module has been renamed to 'ovirt_host_info', and the
renamed one no longer returns ansible_facts", "version": "2.13"}]}
[ INFO ] TASK [ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
system may not be provisioned according to the playbook results: please
check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing
ansible-playbook
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the
issue, fix accordingly or re-deploy from scratch.
Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20200126170315-req4qb.log
*But the "real" problem seems to be SSH related, as you can see below*
*[root@bric-ovirt-1 ovirt-engine]# pwd*
/var/log/ovirt-hosted-engine-setup/engine-logs-2020-01-26T17:19:28Z/ovirt-engine
*[root@bric-ovirt-1 ovirt-engine]# grep -i error engine.log*
2020-01-26 17:26:50,178Z ERROR
[org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-1)
[2341fd23-f0c7-4f1c-ad48-88af20c2d04b] Failed to establish session with
host 'bric-ovirt-1.corp.modmed.com': SSH session closed during connection '
root(a)bric-ovirt-1.corp.modmed.com'
2020-01-26 17:26:50,205Z ERROR
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default
task-1) [] Operation Failed: [Cannot add Host. Connecting to host via SSH
has failed, verify that the host is reachable (IP address, routable address
etc.) You may refer to the engine.log file for further details.]
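For reference, the host list as the bootstrap engine sees it can also be pulled straight from its REST API with something like the following (a rough sketch; the admin password is a placeholder):
  curl -sk -u admin@internal:PASSWORD \
    'https://bric-ovirt-engine.corp.modmed.com/ovirt-engine/api/hosts' | grep -E '<name>|<status>'
An empty result there would match the empty "ovirt_hosts" list in the ansible error above.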
*The funny thing is that the engine can indeed ssh to bric-ovirt-1
(physical host). See below*
*[root@bric-ovirt-1 ovirt-hosted-engine-setup]# cat /etc/hosts*
192.168.1.52 bric-ovirt-engine.corp.modmed.com # temporary entry added by
hosted-engine-setup for the bootstrap VM
127.0.0.1 localhost localhost.localdomain localhost4
localhost4.localdomain4
#::1 localhost localhost.localdomain localhost6
localhost6.localdomain6
10.130.0.50 bric-ovirt-engine bric-ovirt-engine.corp.modmed.com
10.130.0.51 bric-ovirt-1 bric-ovirt-1.corp.modmed.com
10.130.0.52 bric-ovirt-2 bric-ovirt-2.corp.modmed.com
10.130.0.53 bric-ovirt-3 bric-ovirt-3.corp.modmed.com
192.168.0.1 bric-ovirt-1gluster bric-ovirt-1gluster.corp.modmed.com
192.168.0.2 bric-ovirt-2gluster bric-ovirt-2gluster.corp.modmed.com
192.168.0.3 bric-ovirt-3gluster bric-ovirt-3gluster.corp.modmed.com
[root@bric-ovirt-1 ovirt-hosted-engine-setup]#
*[root@bric-ovirt-1 ~]# ssh 192.168.1.52*
Last login: Sun Jan 26 17:55:20 2020 from 192.168.1.1
[root@bric-ovirt-engine ~]#
[root@bric-ovirt-engine ~]#
*[root@bric-ovirt-engine ~]# ssh bric-ovirt-1*
Password:
Password:
Last failed login: Sun Jan 26 18:17:16 UTC 2020 from 192.168.1.52 on
ssh:notty
There was 1 failed login attempt since the last successful login.
Last login: Sun Jan 26 18:16:46 2020
###################################################################
# UNAUTHORIZED ACCESS TO THIS SYSTEM IS PROHIBITED #
# #
# This system is the property of Modernizing Medicine, Inc. #
# It is for authorized Company business purposes only. #
# All connections are monitored and recorded. #
# Disconnect IMMEDIATELY if you are not an authorized user! #
###################################################################
[root@bric-ovirt-1 ~]#
[root@bric-ovirt-1 ~]#
[root@bric-ovirt-1 ~]# exit
logout
Connection to bric-ovirt-1 closed.
[root@bric-ovirt-engine ~]#
[root@bric-ovirt-engine ~]#
*[root@bric-ovirt-engine ~]# ssh bric-ovirt-1.corp.modmed.com*
Password:
Last login: Sun Jan 26 18:17:22 2020 from 192.168.1.52
###################################################################
# UNAUTHORIZED ACCESS TO THIS SYSTEM IS PROHIBITED #
# #
# This system is the property of Modernizing Medicine, Inc. #
# It is for authorized Company business purposes only. #
# All connections are monitored and recorded. #
# Disconnect IMMEDIATELY if you are not an authorized user! #
###################################################################
[root@bric-ovirt-1 ~]# exit
logout
Connection to bric-ovirt-1.corp.modmed.com closed.
[root@bric-ovirt-engine ~]#
[root@bric-ovirt-engine ~]#
[root@bric-ovirt-engine ~]# exit
logout
Connection to 192.168.1.52 closed.
[root@bric-ovirt-1 ~]#
*So, what gives? I already disabled all ssh security on the physical host,
and whitelisted all potential IPs from the engine using firewalld.
Regardless, the engine can ssh to the host as root :-(. Is there maybe
another user that's used for the "Wait for the host to be up" SSH test?
Yes, I tried both passwords and certificates.*
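(For completeness, the firewalld whitelisting amounted to something like the following; the zone name below is an assumption on my part, yours may differ:)
  firewall-cmd --permanent --zone=public --add-source=192.168.1.0/24
  firewall-cmd --permanent --zone=public --add-source=10.130.0.50/32
  firewall-cmd --reload
  firewall-cmd --list-all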
*Maybe what's really happening is that the engine is not getting the right IP?
bric-ovirt-engine is supposed to get 10.130.0.50, but it never gets there;
instead it gets 192.168.1.52 from virbr0 on bric-ovirt-1. See below.*
--== HOST NETWORK CONFIGURATION ==--
Please indicate the gateway IP address [10.130.0.1]
Please indicate a nic to set ovirtmgmt bridge on: (p4p1, p5p1)
[p4p1]:
--== VM CONFIGURATION ==--
You may specify a unicast MAC address for the VM or accept a randomly
generated default [00:16:3e:17:1d:f8]:
How should the engine VM network be configured (DHCP,
Static)[DHCP]? static
Please enter the IP address to be used for the engine VM []:
10.130.0.50
[ INFO ] The engine VM will be configured to use 10.130.0.50/25
Please provide a comma-separated list (max 3) of IP addresses of
domain name servers for the engine VM
Engine VM DNS (leave it empty to skip) [10.130.0.2,10.130.0.3]:
Add lines for the appliance itself and for this host to
/etc/hosts on the engine VM?
Note: ensuring that this host could resolve the engine VM
hostname is still up to you
(Yes, No)[No] Yes
*[root@bric-ovirt-1 ~]# ip addr*
3: p4p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group
default qlen 1000
link/ether 00:0a:f7:f1:c6:80 brd ff:ff:ff:ff:ff:ff
inet 10.130.0.51/25 brd 10.130.0.127 scope global noprefixroute p4p1
valid_lft forever preferred_lft forever
28: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UP group default qlen 1000
link/ether 52:54:00:25:7b:6f brd ff:ff:ff:ff:ff:ff
inet 192.168.1.1/24 brd 192.168.1.255 scope global virbr0
valid_lft forever preferred_lft forever
29: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master
virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:25:7b:6f brd ff:ff:ff:ff:ff:ff
30: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
master virbr0 state UNKNOWN group default qlen 1000
link/ether fe:16:3e:17:1d:f8 brd ff:ff:ff:ff:ff:ff
*The newly created engine VM does remain up even after hosted-engine
--deploy errors out; just at the wrong IP. I haven't been able to make it
get its real IP. At any rate, thank you very much for taking a look at my
very long email. Any and all help would be really appreciated.*
Cheers,
--
Fredy
Re: glusterfs
by Darrell Budic
Hi Eric-
Glad you got through that part. I don't use iscsi-backed volumes for my gluster storage, so I don't have much advice for you there. I've cc'd the ovirt users list back in; someone there may be able to help you further. It's good practice to reply to the list and specific people when conversing here, so you might want to watch that you don't drop the cc: in the future.
Re: the storage master, it's not related to where the VM disks are stored. Once you manage to get a new storage domain set up, you'll be able to create disks on whichever domain you want, and that is how you determine which VM disk is hooked up to what. You can even have a VM with disks on multiple storage domains, which can be good for high-performance needs. The SDM may even move around if a domain becomes unavailable. You may want to check the list archives for discussion on this; I seem to recall some in the past. You should also confirm where the disks for your HA engine are located; they may be on your local raid disk instead of the iscsi disks if the SDM is on a local disk…
Good luck,
-Darrell
> On Feb 14, 2020, at 3:03 PM, <eevans(a)digitaldatatechs.com> <eevans(a)digitaldatatechs.com> wrote:
>
> I enabled gluster and reinstalled and all went well. I set it for distributed replication so I need 3 nodes. I migrated the rest of my vm's and I am installing the third node shortly.
> My biggest concern is getting the storage master on the lun it was previously set to. I get the snapshots on it so I can recover from disaster more easily.
> I need it to persistently be on the lun I designate.
> Also, I want the luns to be the gluster replication volumes but there is no mount point in fstab on the machines.
> I am new to gluster as well so please be patient with me.
>
> Eric Evans
> Digital Data Services LLC.
> 304.660.9080
>
>
> -----Original Message-----
> From: Darrell Budic <budic(a)onholyground.com>
> Sent: Friday, February 14, 2020 2:58 PM
> To: eevans(a)digitaldatatechs.com
> Subject: Re: [ovirt-users] Re: glusterfs
>
> You don’t even need to clean everything out, unless you need to destroy your old storage to create the new gluster backing bricks. Ovirt has a feature to migrate data between storage domains that you can use to move an existing VM disk to a different storage domain. Note that “reinstall” is an option on the Installation menu for hosts; you do not need to remove the host first. It will pretty much just add the vdsm-gluster components in this case, so it is safe to use. Just put it in maintenance first.
>
> You can certainly start fresh in the manner you describe if you want.
>
>> On Feb 14, 2020, at 11:56 AM, <eevans(a)digitaldatatechs.com> <eevans(a)digitaldatatechs.com> wrote:
>>
>> I have already imported a few vm's to see how the import process would go. So, I remove vm's and the current storage domains, and the hosts, then add gluster on the main ovirt node, then add the hosts back, storage back and reimport vm's?
>> I want to make sure before I get started. My first go around with Ovirt and want to make sure before I change anything.
>>
>> Eric Evans
>> Digital Data Services LLC.
>> 304.660.9080
>>
>>
>> -----Original Message-----
>> From: Darrell Budic <budic(a)onholyground.com>
>> Sent: Friday, February 14, 2020 11:54 AM
>> To: eevans(a)digitaldatatechs.com
>> Cc: users(a)ovirt.org
>> Subject: [ovirt-users] Re: glusterfs
>>
>> You can add it to a running ovirt cluster, it just isn’t as automatic. First you need to enable Gluster at the cluster settings level for a new or existing cluster. Then either install/reinstall your nodes, or install gluster manually and add the vdsm-gluster packages. You can create a stand-alone gluster server set this way; you don’t need any vdsm packages, but then you have to create volumes manually. Once you’ve got that done, you can create bricks and volumes in the GUI or by hand, and then add a new storage domain and start using it. There may be ansible for some of this, but I haven’t done it in a while and am not sure what’s available there.
>>
>> -Darrell
>>
>>> On Feb 14, 2020, at 8:22 AM, eevans(a)digitaldatatechs.com wrote:
>>>
>>> I currently have 3 nodes, one is the engine node and 2 Centos 7 hosts, and I plan to add another Centos 7 KVM host once I get all the vm's migrated. I have san storage plus the raid 5 internal disks. All OS are installed on mirrored SAS raid 1. I want to use the raid 5 vd's as exports, ISO and use the 4TB iscsi for the vm's to run on. The iscsi has snapshots hourly and over write weekly.
>>> So here is my question: I want to add glusterfs, but after further reading, that should have been done in the initial setup. I am not new to Linux, but new to Ovirt and need to know if I can implement glusterfs now or if it's a start from scratch situation. I really don't want to start over but would like the redundancy.
>>> Any advice is appreciated.
>>> Eric
glusterfs
by eevans@digitaldatatechs.com
I currently have 3 nodes (one is the engine node and 2 are CentOS 7 hosts), and I plan to add another CentOS 7 KVM host once I get all the VMs migrated. I have SAN storage plus the raid 5 internal disks. All OSes are installed on mirrored SAS raid 1. I want to use the raid 5 VDs for the export and ISO domains, and use the 4TB iscsi for the VMs to run on. The iscsi has hourly snapshots that are overwritten weekly.
So here is my question: I want to add glusterfs, but after further reading, that should have been done in the initial setup. I am not new to Linux, but I am new to Ovirt and need to know if I can implement glusterfs now or if it's a start-from-scratch situation. I really don't want to start over, but I would like the redundancy.
Any advice is appreciated.
Eric
4 years, 9 months
oVirt professional support
by Josep Manel Andrés Moscardó
Hi,
I have seen on the website that there are some companies offering
support, but judging by how out of date some of their websites are, it
doesn't look like they are still active.
Does anyone know of companies providing professional support right now?
And also, has anyone had experience with Bobcare? I cannot imagine
myself asking the boss, "There is a company named Bobcare that is
selling oVirt support."
Cheers.
--
Josep Manel Andrés Moscardó
Systems Engineer, IT Operations
EMBL Heidelberg
T +49 6221 387-8394
Re: Reimport disks
by Vinícius Ferrão
Import domain will work. The VM metadata is available in the OVF_STORE container inside the domain, so even the names and settings come back.
Then you gradually start moving the VMs to the Gluster storage.
Sent from my iPhone
> On 13 Feb 2020, at 11:42, Robert Webb <rwebb(a)ropeguru.com> wrote:
>
> Off the top of my head, would you use the "Import Domain" option?
>
> ________________________________________
> From: Christian Reiss <email(a)christian-reiss.de>
> Sent: Thursday, February 13, 2020 9:30 AM
> To: users
> Subject: [ovirt-users] Reimport disks
>
> Hey folks,
>
> I created a new cluster with a new engine, everything is green and
> running again (3 HCI, Gluster, this time Gluster 7.0 and CentOS7 hosts).
>
> I do have a backup of the /images/ directory from the old installation.
> I tried copying (and preserving user/ permissions) into the new images
> gluster dir and trying a domain -> scan to no avail.
>
> What is the correct way to introduce oVirt to "new" (or unknown) images?
>
> -Chris.
>
> --
> with kind regards,
> mit freundlichen Gruessen,
>
> Christian Reiss
Enabling Libgfapi in 4.3.8 - VMs won't start
by s.panicho@gmail.com
Hello, all. I have a 3-node Hyperconverged oVirt 4.3.8 cluster running on CentOS 7.7 hosts. I was investigating poor Gluster performance and heard about libgfapi, so I thought I'd give it a shot. Looking through the documentation, followed by lots of threads and BZ reports, I've done the following to enable it:
First, I shut down all VMs except the engine. Then...
On the hosts:
1. setsebool -P virt_use_glusterfs on
2. dynamic_ownership=0 in /etc/libvirt/qemu.conf
On the engine VM:
1. engine-config -s LibgfApiSupported=true --cver=4.3
2. systemctl restart ovirt-engine
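In concrete terms, that amounted to the following commands (I made the qemu.conf change by hand in an editor; the sed line below is only meant to show the exact setting I changed):
  # on each host
  setsebool -P virt_use_glusterfs on
  sed -i 's/^#\?dynamic_ownership=.*/dynamic_ownership=0/' /etc/libvirt/qemu.conf
  # on the engine VM
  engine-config -s LibgfApiSupported=true --cver=4.3
  systemctl restart ovirt-engine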
VMs now fail to launch. Am I doing this correctly? I should also note that the hosts still have the Gluster domain mounted via FUSE.
Here's a relevant bit from engine.log:
2020-02-06T16:38:32.573511Z qemu-kvm: -drive file=gluster://node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78,file.debug=4,format=qcow2,if=none,id=drive-ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,serial=a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native: Could not read qcow2 header: Invalid argument.
The full engine.log from one of the attempts:
2020-02-06 16:38:24,909Z INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-12) [] add VM 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) to rerun treatment
2020-02-06 16:38:25,010Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-12) [] Rerun VM 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'. Called from VDS 'node2.ovirt.trashnet.xyz'
2020-02-06 16:38:25,091Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-216) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM yumcache on Host node2.ovirt.trashnet.xyz.
2020-02-06 16:38:25,166Z INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-216) [] Lock Acquired to object 'EngineLock:{exclusiveLocks='[df9dbac4-35c0-40ee-acd4-a1cfc959aa8b=VM]', sharedLocks=''}'
2020-02-06 16:38:25,179Z INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-216) [] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'}), log id: 2107f52a
2020-02-06 16:38:25,181Z INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 2107f52a
2020-02-06 16:38:25,298Z INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-216) [] Running command: RunVmCommand internal: false. Entities affected : ID: df9dbac4-35c0-40ee-acd4-a1cfc959aa8b Type: VMAction group RUN_VM with role type USER
2020-02-06 16:38:25,313Z INFO [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils] (EE-ManagedThreadFactory-engine-Thread-216) [] Emulated machine 'pc-q35-rhel7.6.0' which is different than that of the cluster is set for 'yumcache'(df9dbac4-35c0-40ee-acd4-a1cfc959aa8b)
2020-02-06 16:38:25,382Z INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-216) [] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{hostId='null', vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@9774a64'}), log id: 4a83911f
2020-02-06 16:38:25,417Z INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH, UpdateVmDynamicDataVDSCommand, return: , log id: 4a83911f
2020-02-06 16:38:25,418Z INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-216) [] START, CreateVDSCommand( CreateVDSCommandParameters:{hostId='c3465ca2-395e-4c0c-b72e-b5b7153df452', vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', vm='VM [yumcache]'}), log id: 5e07ba66
2020-02-06 16:38:25,420Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-216) [] START, CreateBrokerVDSCommand(HostName = node1.ovirt.trashnet.xyz, CreateVDSCommandParameters:{hostId='c3465ca2-395e-4c0c-b72e-b5b7153df452', vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', vm='VM [yumcache]'}), log id: 1bfa03c4
2020-02-06 16:38:25,424Z INFO [org.ovirt.engine.core.vdsbroker.builder.vminfo.VmInfoBuildUtils] (EE-ManagedThreadFactory-engine-Thread-216) [] Kernel FIPS - Guid: c3465ca2-395e-4c0c-b72e-b5b7153df452 fips: false
2020-02-06 16:38:25,435Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-216) [] VM <?xml version="1.0" encoding="UTF-8"?><domain type="kvm" xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<name>yumcache</name>
<uuid>df9dbac4-35c0-40ee-acd4-a1cfc959aa8b</uuid>
<memory>1048576</memory>
<currentMemory>1048576</currentMemory>
<iothreads>1</iothreads>
<maxMemory slots="16">4194304</maxMemory>
<vcpu current="1">16</vcpu>
<sysinfo type="smbios">
<system>
<entry name="manufacturer">oVirt</entry>
<entry name="product">OS-NAME:</entry>
<entry name="version">OS-VERSION:</entry>
<entry name="serial">HOST-SERIAL:</entry>
<entry name="uuid">df9dbac4-35c0-40ee-acd4-a1cfc959aa8b</entry>
</system>
</sysinfo>
<clock offset="variable" adjustment="0">
<timer name="rtc" tickpolicy="catchup"/>
<timer name="pit" tickpolicy="delay"/>
<timer name="hpet" present="no"/>
</clock>
<features>
<acpi/>
</features>
<cpu match="exact">
<model>EPYC</model>
<feature name="ibpb" policy="require"/>
<feature name="virt-ssbd" policy="require"/>
<topology cores="1" threads="1" sockets="16"/>
<numa>
<cell id="0" cpus="0" memory="1048576"/>
</numa>
</cpu>
<cputune/>
<devices>
<input type="tablet" bus="usb"/>
<channel type="unix">
<target type="virtio" name="ovirt-guest-agent.0"/>
<source mode="bind" path="/var/lib/libvirt/qemu/channels/df9dbac4-35c0-40ee-acd4-a1cfc959aa8b.ovirt-guest-agent.0"/>
</channel>
<channel type="unix">
<target type="virtio" name="org.qemu.guest_agent.0"/>
<source mode="bind" path="/var/lib/libvirt/qemu/channels/df9dbac4-35c0-40ee-acd4-a1cfc959aa8b.org.qemu.guest_agent.0"/>
</channel>
<controller type="pci" model="pcie-root-port" index="1">
<address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci" multifunction="on"/>
</controller>
<memballoon model="virtio">
<stats period="5"/>
<alias name="ua-27c77007-3a3c-4431-958d-90fd1c7257dd"/>
<address bus="0x05" domain="0x0000" function="0x0" slot="0x00" type="pci"/>
</memballoon>
<controller type="pci" model="pcie-root-port" index="2">
<address bus="0x00" domain="0x0000" function="0x1" slot="0x02" type="pci"/>
</controller>
<controller type="pci" model="pcie-root-port" index="9">
<address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci" multifunction="on"/>
</controller>
<controller type="sata" index="0">
<address bus="0x00" domain="0x0000" function="0x2" slot="0x1f" type="pci"/>
</controller>
<rng model="virtio">
<backend model="random">/dev/urandom</backend>
<alias name="ua-51960005-6b95-47e9-82a7-67d5e0d6cf8a"/>
</rng>
<controller type="pci" model="pcie-root-port" index="6">
<address bus="0x00" domain="0x0000" function="0x5" slot="0x02" type="pci"/>
</controller>
<controller type="pci" model="pcie-root-port" index="15">
<address bus="0x00" domain="0x0000" function="0x6" slot="0x03" type="pci"/>
</controller>
<controller type="pci" model="pcie-root-port" index="13">
<address bus="0x00" domain="0x0000" function="0x4" slot="0x03" type="pci"/>
</controller>
<controller type="pci" model="pcie-root-port" index="7">
<address bus="0x00" domain="0x0000" function="0x6" slot="0x02" type="pci"/>
</controller>
<graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us">
<listen type="network" network="vdsm-ovirtmgmt"/>
</graphics>
<controller type="pci" model="pcie-root-port" index="16">
<address bus="0x00" domain="0x0000" function="0x7" slot="0x03" type="pci"/>
</controller>
<controller type="pci" model="pcie-root-port" index="12">
<address bus="0x00" domain="0x0000" function="0x3" slot="0x03" type="pci"/>
</controller>
<video>
<model type="qxl" vram="32768" heads="1" ram="65536" vgamem="16384"/>
<alias name="ua-8a295e96-40c3-44de-a3b0-1c4a685a5473"/>
<address bus="0x00" domain="0x0000" function="0x0" slot="0x01" type="pci"/>
</video>
<graphics type="spice" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" tlsPort="-1">
<channel name="main" mode="secure"/>
<channel name="inputs" mode="secure"/>
<channel name="cursor" mode="secure"/>
<channel name="playback" mode="secure"/>
<channel name="record" mode="secure"/>
<channel name="display" mode="secure"/>
<channel name="smartcard" mode="secure"/>
<channel name="usbredir" mode="secure"/>
<listen type="network" network="vdsm-ovirtmgmt"/>
</graphics>
<controller type="pci" model="pcie-root-port" index="5">
<address bus="0x00" domain="0x0000" function="0x4" slot="0x02" type="pci"/>
</controller>
<controller type="usb" model="qemu-xhci" index="0" ports="8">
<address bus="0x02" domain="0x0000" function="0x0" slot="0x00" type="pci"/>
</controller>
<controller type="pci" model="pcie-root-port" index="4">
<address bus="0x00" domain="0x0000" function="0x3" slot="0x02" type="pci"/>
</controller>
<controller type="pci" model="pcie-root-port" index="3">
<address bus="0x00" domain="0x0000" function="0x2" slot="0x02" type="pci"/>
</controller>
<controller type="pci" model="pcie-root-port" index="11">
<address bus="0x00" domain="0x0000" function="0x2" slot="0x03" type="pci"/>
</controller>
<controller type="scsi" model="virtio-scsi" index="0">
<driver iothread="1"/>
<alias name="ua-d0bf6fcd-7aa2-4658-b7cc-3dac259b7ad2"/>
<address bus="0x03" domain="0x0000" function="0x0" slot="0x00" type="pci"/>
</controller>
<controller type="pci" model="pcie-root-port" index="8">
<address bus="0x00" domain="0x0000" function="0x7" slot="0x02" type="pci"/>
</controller>
<controller type="pci" model="pcie-root-port" index="14">
<address bus="0x00" domain="0x0000" function="0x5" slot="0x03" type="pci"/>
</controller>
<controller type="pci" model="pcie-root-port" index="10">
<address bus="0x00" domain="0x0000" function="0x1" slot="0x03" type="pci"/>
</controller>
<controller type="virtio-serial" index="0" ports="16">
<address bus="0x04" domain="0x0000" function="0x0" slot="0x00" type="pci"/>
</controller>
<channel type="spicevmc">
<target type="virtio" name="com.redhat.spice.0"/>
</channel>
<controller type="pci" model="pcie-root"/>
<interface type="bridge">
<model type="virtio"/>
<link state="up"/>
<source bridge="vmnet"/>
<alias name="ua-ceda0ef6-9139-4e5c-8840-86fe344ecbd3"/>
<address bus="0x01" domain="0x0000" function="0x0" slot="0x00" type="pci"/>
<mac address="56:6f:91:b9:00:05"/>
<mtu size="1500"/>
<filterref filter="vdsm-no-mac-spoofing"/>
<bandwidth/>
</interface>
<disk type="file" device="cdrom" snapshot="no">
<driver name="qemu" type="raw" error_policy="report"/>
<source file="" startupPolicy="optional">
<seclabel model="dac" type="none" relabel="no"/>
</source>
<target dev="sdc" bus="sata"/>
<readonly/>
<alias name="ua-bdf99844-2d02-411b-90bb-671ee26764cb"/>
<address bus="0" controller="0" unit="2" type="drive" target="0"/>
</disk>
<disk snapshot="no" type="network" device="disk">
<target dev="sda" bus="scsi"/>
<source protocol="gluster" name="vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78">
<host name="node1.fs.trashnet.xyz" port="0"/>
<seclabel model="dac" type="none" relabel="no"/>
</source>
<driver name="qemu" discard="unmap" io="native" type="qcow2" error_policy="stop" cache="none"/>
<alias name="ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4"/>
<address bus="0" controller="0" unit="0" type="drive" target="0"/>
<boot order="1"/>
<serial>a1d56b14-6d72-4f46-a0aa-eb0870c36bc4</serial>
</disk>
<lease>
<key>df9dbac4-35c0-40ee-acd4-a1cfc959aa8b</key>
<lockspace>781717e5-1cff-43a1-b586-9941503544e8</lockspace>
<target offset="6291456" path="/rhev/data-center/mnt/glusterSD/node1.fs.trashnet.xyz:_vmstore/781717e5-1cff-43a1-b586-9941503544e8/dom_md/xleases"/>
</lease>
</devices>
<pm>
<suspend-to-disk enabled="no"/>
<suspend-to-mem enabled="no"/>
</pm>
<os>
<type arch="x86_64" machine="pc-q35-rhel7.6.0">hvm</type>
<smbios mode="sysinfo"/>
</os>
<metadata>
<ovirt-tune:qos/>
<ovirt-vm:vm>
<ovirt-vm:minGuaranteedMemoryMb type="int">512</ovirt-vm:minGuaranteedMemoryMb>
<ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion>
<ovirt-vm:custom/>
<ovirt-vm:device mac_address="56:6f:91:b9:00:05">
<ovirt-vm:custom/>
</ovirt-vm:device>
<ovirt-vm:device devtype="disk" name="sda">
<ovirt-vm:poolID>2ffaec76-462c-11ea-b155-00163e512202</ovirt-vm:poolID>
<ovirt-vm:volumeID>a2314816-7970-49ce-a80c-ab0d1cf17c78</ovirt-vm:volumeID>
<ovirt-vm:imageID>a1d56b14-6d72-4f46-a0aa-eb0870c36bc4</ovirt-vm:imageID>
<ovirt-vm:domainID>781717e5-1cff-43a1-b586-9941503544e8</ovirt-vm:domainID>
</ovirt-vm:device>
<ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
<ovirt-vm:resumeBehavior>kill</ovirt-vm:resumeBehavior>
</ovirt-vm:vm>
</metadata>
</domain>
2020-02-06 16:38:25,455Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand] (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH, CreateBrokerVDSCommand, return: , log id: 1bfa03c4
2020-02-06 16:38:25,494Z INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand] (EE-ManagedThreadFactory-engine-Thread-216) [] FINISH, CreateVDSCommand, return: WaitForLaunch, log id: 5e07ba66
2020-02-06 16:38:25,495Z INFO [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-216) [] Lock freed to object 'EngineLock:{exclusiveLocks='[df9dbac4-35c0-40ee-acd4-a1cfc959aa8b=VM]', sharedLocks=''}'
2020-02-06 16:38:25,533Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-216) [] EVENT_ID: USER_STARTED_VM(153), VM yumcache was started by admin@internal-authz (Host: node1.ovirt.trashnet.xyz).
2020-02-06 16:38:33,300Z INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-5) [] VM 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b' was reported as Down on VDS 'c3465ca2-395e-4c0c-b72e-b5b7153df452'(node1.ovirt.trashnet.xyz)
2020-02-06 16:38:33,301Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-5) [] START, DestroyVDSCommand(HostName = node1.ovirt.trashnet.xyz, DestroyVmVDSCommandParameters:{hostId='c3465ca2-395e-4c0c-b72e-b5b7153df452', vmId='df9dbac4-35c0-40ee-acd4-a1cfc959aa8b', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 1f951ea9
2020-02-06 16:38:33,478Z INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (EE-ManagedThreadFactory-engineScheduled-Thread-8) [] Fetched 2 VMs from VDS 'c3465ca2-395e-4c0c-b72e-b5b7153df452'
2020-02-06 16:38:33,545Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-5) [] FINISH, DestroyVDSCommand, return: , log id: 1f951ea9
2020-02-06 16:38:33,546Z INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-5) [] VM 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) moved from 'WaitForLaunch' --> 'Down'
2020-02-06 16:38:33,623Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-5) [] EVENT_ID: VM_DOWN_ERROR(119), VM yumcache is down with error. Exit message: internal error: qemu unexpectedly closed the monitor: [2020-02-06 16:38:31.723977] E [MSGID: 108006] [afr-common.c:5323:__afr_handle_child_down_event] 0-vmstore-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up.
[2020-02-06 16:38:31.724765] I [io-stats.c:4027:fini] 0-vmstore: io-stats translator unloaded
2020-02-06T16:38:32.573511Z qemu-kvm: -drive file=gluster://node1.fs.trashnet.xyz:24007/vmstore/781717e5-1cff-43a1-b586-9941503544e8/images/a1d56b14-6d72-4f46-a0aa-eb0870c36bc4/a2314816-7970-49ce-a80c-ab0d1cf17c78,file.debug=4,format=qcow2,if=none,id=drive-ua-a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,serial=a1d56b14-6d72-4f46-a0aa-eb0870c36bc4,werror=stop,rerror=stop,cache=none,discard=unmap,aio=native: Could not read qcow2 header: Invalid argument.
2020-02-06 16:38:33,624Z INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-5) [] add VM 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'(yumcache) to rerun treatment
2020-02-06 16:38:33,796Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-5) [] Rerun VM 'df9dbac4-35c0-40ee-acd4-a1cfc959aa8b'. Called from VDS 'node1.ovirt.trashnet.xyz'
2020-02-06 16:38:33,899Z WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-223) [] EVENT_ID: USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM yumcache on Host node1.ovirt.trashnet.xyz.
hosted engine storage does not heal
by g.vasilopoulos@uoc.gr
Hello
We have a problem with the hosted engine storage after updating one host which serves as a gluster server for the engine (the setup is gluster replica 3 with local disks from 3 hypervisors).
The volume heal command shows:
[root@o5-car0118 engine]# gluster volume heal engine info
Brick o5-car0118.gfs-int.uoc.gr:/gluster_bricks/engine/engine
/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/e3a3ef50-56b6-48b0-a9f8-2d6382e2286e/b6973d5b-cc7a-4abd-8d2d-94f551936a97.meta
/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/f5e576d4-eea7-431b-a0f0-f8a557006471/464518c5-f79c-4271-9d97-995981cde2cb.meta
Status: Connected
Number of entries: 2
Brick o2-car0121.gfs-int.uoc.gr:/gluster_bricks/engine/engine
/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/e3a3ef50-56b6-48b0-a9f8-2d6382e2286e/b6973d5b-cc7a-4abd-8d2d-94f551936a97.meta
/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/f5e576d4-eea7-431b-a0f0-f8a557006471/464518c5-f79c-4271-9d97-995981cde2cb.meta
Status: Connected
Number of entries: 2
Brick o9-car0114.gfs-int.uoc.gr:/gluster_bricks/engine/engine
Status: Connected
Number of entries: 0
On all of the gluster servers, I notice that the affected directories have a date in 1970:
[root@o5-car0118 images]# ls -al
total 24
drwxr-xr-x. 23 vdsm kvm 8192 Sep 24 12:07 .
drwxr-xr-x. 6 vdsm kvm 64 Sep 19 2018 ..
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 2bac658f-70ce-4adb-ab68-a0f0c205c70c
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 3034a69c-b5b5-46fa-a393-59ea46635142
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 5538ae6b-ccc6-4861-b71b-6b2c7af2e0ab
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 66dbce25-8863-42b5-904a-484f8e9c225a
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 6c049108-28f7-47d9-8d54-4ac2697dcba8
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 72702607-1896-420d-931a-42c9f01d37f1
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 7c617da4-ab6b-4791-80be-541f5be60dd8
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 902a16d3-6494-4840-a528-b49972f9c332
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 96fd6116-7983-4385-bca6-e6ca8edc94ca
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 abd875cd-96b6-47a6-b6a3-ae35300a21cc
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 add7bc92-1a40-474d-9255-53ac861b75ed
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 b7b06df7-465f-4fc7-a214-033b7dca6bc7
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 c0ecacac-26c6-40d9-87da-af17d9de8d21
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 c4d2d5da-2a15-4735-8919-324ae8372064
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 c7e0c784-bb8e-4024-95df-b6f4267b0208
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 d1f1ff5a-387d-442c-9240-1c58e4d6f8a7
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 d3e172cb-b6dd-4867-a9cd-f4fa006648bc
drwxr-xr-x. 2 vdsm kvm 8192 Jan 1 1970 e3a3ef50-56b6-48b0-a9f8-2d6382e2286e <-----
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 e477ec02-11ab-4d92-b5fd-44e91fbde7f9
drwxr-xr-x. 2 vdsm kvm 149 Aug 2 2019 e839485b-b0be-47f6-9847-b691e02ce9a4
drwxr-xr-x. 2 vdsm kvm 8192 Jan 1 1970 f5e576d4-eea7-431b-a0f0-f8a557006471 <-----
I think this has something to do with a gluster bug.
Is there a way to correct this and heal the volume?
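Would inspecting the pending-heal metadata on the bricks be a sensible next step? Something along these lines is what I have in mind (a sketch only, I have not run it yet, and the exact xattr names may differ between gluster versions):
  # on each gluster server, against the brick path, for one of the affected files
  getfattr -d -m . -e hex /gluster_bricks/engine/engine/a1ecca6f-1487-492d-b9bf-d3fb60f1bc99/images/e3a3ef50-56b6-48b0-a9f8-2d6382e2286e/b6973d5b-cc7a-4abd-8d2d-94f551936a97.meta
  # and, if the entries still do not clear afterwards, a full heal
  gluster volume heal engine full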
Thank you!
Re: Reimport disks
by Strahil Nikolov
On February 13, 2020 4:38:06 PM GMT+02:00, Robert Webb <rwebb(a)ropeguru.com> wrote:
>Off the top of my head, would you use the "Import Domain" option?
>
>________________________________________
>From: Christian Reiss <email(a)christian-reiss.de>
>Sent: Thursday, February 13, 2020 9:30 AM
>To: users
>Subject: [ovirt-users] Reimport disks
>
>Hey folks,
>
>I created a new cluster with a new engine, everything is green and
>running again (3 HCI, Gluster, this time Gluster 7.0 and CentOS7
>hosts).
>
>I do have a backup of the /images/ directory from the old installation.
>I tried copying (and preserving user/ permissions) into the new images
>gluster dir and trying a domain -> scan to no avail.
>
>What is the correct way to introduce oVirt to "new" (or unknown)
>images?
>
>-Chris.
>
>--
>with kind regards,
>mit freundlichen Gruessen,
>
>Christian Reiss
Create a temp storage domain, then detach and remove it.
Manually copy the backup into the same directory.
Attach the storage domain again, and there should be 'Import VMs' and 'Import Templates' tabs in that domain.
Then you just need to select the VM, pick a cluster, and import.
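The copy step is roughly this (a sketch only; both paths are placeholders for your backup location and the new domain's mount point, and ownership has to stay vdsm:kvm):
  cp -a /backup/images/. /rhev/data-center/mnt/glusterSD/SERVER:_VOLUME/DOMAIN_UUID/images/
  chown -R vdsm:kvm /rhev/data-center/mnt/glusterSD/SERVER:_VOLUME/DOMAIN_UUID/images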
Best Regards,
Strahil Nikolov