No valid network interface has been found
by naiajae@icloud.com
Hey, I'm kinda new to oVirt. I'm getting this error when I try to create a hosted engine; I hope this is where I can get help.
Full message:
No valid network interface has been found
If you are using Bonds or VLANs Use the following naming conventions:
- VLAN interfaces: physical_device.VLAN_ID (for example, eth0.23, eth1.128, enp3s0.50)
- Bond interfaces: bond*number* (for example, bond0, bond1)
- VLANs on bond interfaces: bond*number*.VLAN_ID (for example, bond0.50, bond1.128)
* Supported bond modes: active-backup, balance-xor, broadcast, 802.3ad
* Networking teaming is not supported and will cause errors
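For a quick sanity check before re-running the deployment, a candidate interface name can be matched against the conventions quoted above. The regex below is my own sketch of those rules, not the installer's actual validation code:

```shell
# Hypothetical name check mirroring the conventions above: a plain NIC
# (eth0, enp3s0), bond<number>, or either with a .VLAN_ID suffix.
# Not taken from the ovirt-hosted-engine-setup source.
is_valid_iface_name() {
  printf '%s\n' "$1" | grep -Eq '^(bond[0-9]+|[a-z][a-z0-9]+)(\.[0-9]{1,4})?$'
}

is_valid_iface_name "bond0.50" && echo "bond0.50 ok"
is_valid_iface_name "eth0.23"  && echo "eth0.23 ok"
is_valid_iface_name "bond 0"   || echo "bond 0 rejected"
```

Names with spaces, colons, or other separators (e.g. team devices like "team0:1") fail the check, which matches the note that teaming is unsupported.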
2 years, 2 months
Cannot deserialize - engine doesn't start
by Giulio Casella
Hi guys,
I'm in big trouble.
Since last night my oVirt engine has been unavailable (it won't start correctly).
The ovirt-engine process is running, and systemd doesn't complain (systemctl
status is OK).
Digging into engine.log, I found this error:
2022-07-13 09:57:48,314+02 ERROR
[org.ovirt.engine.core.utils.serialization.json.JsonObjectDeserializer]
(ServerService Thread Pool -- 45) [] Cannot deserialize {
"@class" :
"org.ovirt.engine.core.common.action.CreateSnapshotDiskParameters",
"commandId" : [ "org.ovirt.engine.core.compat.Guid", {
"uuid" : "6ae544f6-b608-4d8d-9f99-eabd5d5db0ad"
} ],
[...cut...]
"domain" : "my.dom.ain"[truncated 5971 chars]; line: 72, column: 89]
(through reference chain:
org.ovirt.engine.core.common.action.CreateSnapshotDiskParameters["diskImagesMap"])
2022-07-13 09:57:48,315+02 ERROR
[org.ovirt.engine.core.bll.InitBackendServicesOnStartupBean]
(ServerService Thread Pool -- 45) [] Failed to initialize backend:
org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke
public void
org.ovirt.engine.core.bll.tasks.CommandContextsCacheImpl.initContextsMap()
on org.ovirt.engine.core.bll.tasks.CommandContextsCacheImpl@3f52ccce
[...cut...]
During the night there was a job moving a disk (2.5 TB) from one
storage domain to another. I think it's related.
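For reference, a command that fails to deserialize at startup like this is usually a leftover row in the engine database's command_entities table, and oVirt ships a cleanup utility for stuck tasks/commands. An illustrative sketch only (requires a live engine host; stop the engine and back up the database first, and verify the utility's flags with --help on your version):

```
# Illustrative sketch -- needs a live engine database.
systemctl stop ovirt-engine

# Inspect pending commands (the table the deserializer reads from):
sudo -u postgres psql engine \
  -c "SELECT command_id, command_type FROM command_entities;"

# oVirt's bundled task/command cleanup helper:
/usr/share/ovirt-engine/setup/dbutils/taskcleaner.sh --help

systemctl start ovirt-engine
```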
Any ideas?
TIA,
gc
2 years, 2 months
oVirt 4.5.1 Hyperconverged Gluster install fails
by david.lennox@frontlinedigital.com.au
Hello all,
I am hoping someone can help me with an oVirt installation that has just gotten the better of me after weeks of trying.
After setting up ssh keys and making sure each host is known to the primary host (sr-svr04), I go through Cockpit and "Configure Gluster storage and oVirt hosted engine", and enter all of the details with <host>.san.lennoxconsulting.com.au for the storage network FQDN and <host>.core.lennoxconsulting.com.au for the public interfaces. Connectivity on each of the VLANs tests out as basically working (everything is pingable and ssh connections work) and the hosts are generally usable on the network.
But the install ultimately dies with the following ansible error:
----------- gluster-deployment.log ---------
:
:
TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:63
failed: [sr-svr04.san.lennoxconsulting.com.au] (item={'key': 'gluster_vg_sdb', 'value': [{'vgname': 'gluster_vg_sdb', 'pvname': '/dev/sdb'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", "--dataalignment", "1536K", "-s", "1536K", "gluster_vg_sdb", "/dev/sdb"], "delta": "0:00:00.058528", "end": "2022-07-16 16:18:37.018563", "item": {"key": "gluster_vg_sdb", "value": [{"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}]}, "msg": "non-zero return code", "rc": 5, "start": "2022-07-16 16:18:36.960035", "stderr": " A volume group called gluster_vg_sdb already exists.", "stderr_lines": [" A volume group called gluster_vg_sdb already exists."], "stdout": "", "stdout_lines": []}
failed: [sr-svr05.san.lennoxconsulting.com.au] (item={'key': 'gluster_vg_sdb', 'value': [{'vgname': 'gluster_vg_sdb', 'pvname': '/dev/sdb'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", "--dataalignment", "1536K", "-s", "1536K", "gluster_vg_sdb", "/dev/sdb"], "delta": "0:00:00.057186", "end": "2022-07-16 16:18:37.784063", "item": {"key": "gluster_vg_sdb", "value": [{"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}]}, "msg": "non-zero return code", "rc": 5, "start": "2022-07-16 16:18:37.726877", "stderr": " A volume group called gluster_vg_sdb already exists.", "stderr_lines": [" A volume group called gluster_vg_sdb already exists."], "stdout": "", "stdout_lines": []}
failed: [sr-svr06.san.lennoxconsulting.com.au] (item={'key': 'gluster_vg_sdb', 'value': [{'vgname': 'gluster_vg_sdb', 'pvname': '/dev/sdb'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", "--dataalignment", "1536K", "-s", "1536K", "gluster_vg_sdb", "/dev/sdb"], "delta": "0:00:00.062212", "end": "2022-07-16 16:18:37.250371", "item": {"key": "gluster_vg_sdb", "value": [{"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}]}, "msg": "non-zero return code", "rc": 5, "start": "2022-07-16 16:18:37.188159", "stderr": " A volume group called gluster_vg_sdb already exists.", "stderr_lines": [" A volume group called gluster_vg_sdb already exists."], "stdout": "", "stdout_lines": []}
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
sr-svr04.san.lennoxconsulting.com.au : ok=32 changed=13 unreachable=0 failed=1 skipped=27 rescued=0 ignored=1
sr-svr05.san.lennoxconsulting.com.au : ok=31 changed=12 unreachable=0 failed=1 skipped=27 rescued=0 ignored=1
sr-svr06.san.lennoxconsulting.com.au : ok=31 changed=12 unreachable=0 failed=1 skipped=27 rescued=0 ignored=1
----------- gluster-deployment.log ---------
"gluster v status" reports no volumes present, and that is where I am stuck! Any ideas on what to try next?
I have tried this with oVirt Node 4.5.1 el8 and el9 images as well as 4.5 el8 images, so it has got to be somewhere in my infrastructure configuration, but I am out of ideas.
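For reference, the stderr line in the failed task ("A volume group called gluster_vg_sdb already exists") suggests the disks still carry LVM metadata from an earlier attempt, which repeated deployment runs like those described above would leave behind. A cleanup sketch, illustrative only and destructive; run on each host only after double-checking that /dev/sdb really is the Gluster disk:

```
# WARNING: destroys all data on /dev/sdb. Illustrative sketch only.
vgremove -y gluster_vg_sdb   # drop the leftover volume group
pvremove -y /dev/sdb         # drop the physical-volume label
wipefs -a /dev/sdb           # clear any remaining on-disk signatures
```

After that, re-running the Gluster deployment wizard should get past the "Create volume groups" task.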
My hardware configuration is 3 x HP DL360s with oVirt Node 4.5.1 el8 installed on a 2x146 GB RAID1 array, plus a 6x900 GB RAID5 array for Gluster.
Network configuration is:
[root@sr-svr04 ~]# ip addr show up
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 1c:98:ec:29:41:68 brd ff:ff:ff:ff:ff:ff
3: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
link/ether 1c:98:ec:29:41:68 brd ff:ff:ff:ff:ff:ff permaddr 1c:98:ec:29:41:69
4: eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 1c:98:ec:29:41:6a brd ff:ff:ff:ff:ff:ff
5: eno4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 1c:98:ec:29:41:6b brd ff:ff:ff:ff:ff:ff
6: eno3.3@eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 1c:98:ec:29:41:6a brd ff:ff:ff:ff:ff:ff
inet 192.168.3.11/24 brd 192.168.3.255 scope global noprefixroute eno3.3
valid_lft forever preferred_lft forever
inet6 fe80::ec:d7be:760e:8eda/64 scope link noprefixroute
valid_lft forever preferred_lft forever
7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 1c:98:ec:29:41:68 brd ff:ff:ff:ff:ff:ff
8: bond0.4@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 1c:98:ec:29:41:68 brd ff:ff:ff:ff:ff:ff
inet 192.168.4.11/24 brd 192.168.4.255 scope global noprefixroute bond0.4
valid_lft forever preferred_lft forever
inet6 fe80::f503:a54:8421:ea8b/64 scope link noprefixroute
valid_lft forever preferred_lft forever
9: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:5b:fd:ac brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
The public network is core.lennoxconsulting.com.au, which is 192.168.4.0/24, and the storage network is san.lennoxconsulting.com.au, which is 192.168.3.0/24.
Any help to move forward please is appreciated.
- Dave.
2 years, 2 months
Engine storage Domain path change
by michael@wiswm.com
Hello
I installed oVirt on 3 servers (hv1, hv2, hv3) with a self-hosted engine a couple of years ago. Gluster is used as storage for VMs. The engine has its own storage volume.
Then I added 3 more servers (hv4, hv5, hv6).
Now I would like to replace the first 3 servers. I added 3 more servers (hv7, hv8, hv9), created new Gluster volumes on hv7-hv9, and moved the disks from the old volumes.
Now the question is how to migrate the engine storage. I decided to do it at the Gluster level by replacing each of the bricks from the old servers (hv1-hv3) with new ones (hv7-hv9). I successfully replaced the bricks from hv1 and hv2 with hv7 and hv8. In oVirt, the engine storage domain is created with the path hv3:/engine. I am afraid to replace the hv3 brick with hv9, so as not to break the HostedEngine VM. To change a storage domain path I need to move it to Maintenance, but that would unmount it from all the hosts and the HostedEngine would stop working. Is there any other way to change a storage domain path?
I already changed the storage value in /etc/ovirt-hosted-engine/hosted-engine.conf. I did something like this:
# on hv7
hosted-engine --vm-shutdown
# on all hosts
systemctl stop ovirt-ha-agent
systemctl stop ovirt-ha-broker
hosted-engine --disconnect-storage
sed -i 's/hv3/hv7/g' /etc/ovirt-hosted-engine/hosted-engine.conf
hosted-engine --connect-storage
systemctl restart ovirt-ha-broker
systemctl status ovirt-ha-broker
systemctl restart ovirt-ha-agent
# on hv7
hosted-engine --vm-start
and now the Hosted Engine is up and running, and the Gluster engine volume is mounted as hv7:/engine.
So the main question remains: how do I change the engine storage domain path?
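For reference, besides hosted-engine.conf on the hosts, the engine records the mount path in its own database (the storage_server_connections table); one commonly described approach is to update it there while the engine is stopped. Table and column names here are from memory, so verify them against your schema and back up the database first. Illustrative sketch only:

```
# Illustrative sketch -- run on the engine VM, engine stopped, DB backed up.
systemctl stop ovirt-engine
sudo -u postgres psql engine -c \
  "UPDATE storage_server_connections SET connection = 'hv7:/engine'
     WHERE connection = 'hv3:/engine';"
systemctl start ovirt-engine
```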
2 years, 2 months
Stuck in Manager upgrade. Can't set Cluster to maintenance mode.
by johannes.lutz@diamontech.de
Hi Folks,
Like many others, the oVirt hosted engine certificates expired on my installation. We tried to follow this knowledge base article: https://access.redhat.com/solutions/6865861
I set the host which runs the hosted engine into global maintenance mode via the "hosted-engine --set-maintenance --mode=global" command.
Then I executed the "engine-setup --offline" command. We answered all the questions and the script recognized the expired certificates, but when we tried to execute the last step it aborted with the following error message.
Output of engine-setup --offline:
[WARNING] Failed to read or parse '/etc/pki/ovirt-engine/keys/apache.p12'
Perhaps it was changed since last Setup.
Error was:
Mac verify error: invalid password?
One or more of the certificates should be renewed, because they expire soon, or include an invalid expiry date, or they were created with validity period longer than 398 days, or do not include the subjectAltName extension, which can cause them to be rejected by recent browsers and up to date hosts.
See https://www.ovirt.org/develop/release-management/features/infra/pki-renew/ for more details.
Renew certificates? (Yes, No) [No]: Yes
--== APACHE CONFIGURATION ==--
--== SYSTEM CONFIGURATION ==--
--== MISC CONFIGURATION ==--
--== END OF CONFIGURATION ==--
[ INFO ] Stage: Setup validation
During execution engine service will be stopped (OK, Cancel) [OK]: Ok
[ ERROR ] It seems that you are running your engine inside of the hosted-engine VM and are not in "Global Maintenance" mode.
In that case you should put the system into the "Global Maintenance" mode before running engine-setup, or the hosted-engine HA agent might kill the machine, which might corrupt your data.
[ ERROR ] Failed to execute stage 'Setup validation': Hosted Engine setup detected, but Global Maintenance is not set.
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20220701205812-yu1osl.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20220701205843-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
Any ideas how to get the hosted engine into global maintenance mode?
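One thing worth checking before re-running engine-setup is whether the HA agents actually see the maintenance flag. A sketch of the usual verification (hosted-engine is the standard CLI on HA hosts; the exact output wording varies by version), illustrative only:

```
# On any hosted-engine HA host:
hosted-engine --vm-status | grep -i "global maintenance"
# Expect a line like "!! Cluster is in GLOBAL MAINTENANCE mode !!".
# If it is absent, set the flag again and re-check before engine-setup:
hosted-engine --set-maintenance --mode=global
```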
Thanks for your help in advance!
Best regards,
J. Lutz
2 years, 2 months
Not able to log in as admin after successful deployment of hosted engine (oVirt 4.5.1)
by Ralf Schenk
Hello List,
I successfully deployed a fresh hosted engine, but I'm not able to log in
to the Administration Portal. I'm perfectly sure about the password, which
I had to type multiple times...
I'm running ovirt-node-ng-4.5.1-0.20220622.0 and deployed engine via
cli-based ovirt-hosted-engine-setup.
Neither "admin" nor "admin@internal" works (a profile cannot be
chosen as in earlier versions).
I can login to the monitoring part (grafana !) and also Cockpit but not
Administration-Portal nor VM-Portal.
I can ssh into the engine and look up the user database, which has the user:
[root@engine02 ~]# ovirt-aaa-jdbc-tool query --what=user
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
-- User admin(2be16cf0-5eb7-4b0e-923e-7bdc7bc2aa6f) --
Namespace: *
Name: admin
ID: 2be16cf0-5eb7-4b0e-923e-7bdc7bc2aa6f
Display Name:
Email: root@localhost
First Name: admin
Last Name:
Department:
Title:
Description:
Account Disabled: false
Account Locked: false
Account Unlocked At: 1970-01-01 00:00:00Z
Account Valid From: 2022-07-15 18:23:47Z
Account Valid To: 2222-07-15 18:23:47Z
Account Without Password: false
Last successful Login At: 1970-01-01 00:00:00Z
Last unsuccessful Login At: 1970-01-01 00:00:00Z
Password Valid To: 2222-05-28 18:23:49Z
However, there are no groups by default???
[root@engine02 ~]# ovirt-aaa-jdbc-tool query --what=group
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
Any solution? I don't want to repeat the hosted-engine deployment a
fourth time, after I mastered all the problems with NFS permissions and
the GUI deployment not accepting my bond (which is perfectly OK and
called "bond0"), etc....
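In case it helps anyone hitting the same wall: when the internal admin password is in doubt, it can be reset on the engine VM with the same ovirt-aaa-jdbc-tool shown above. The subcommand and flag below exist in current versions, but check --help on yours; the validity date is just an example. Illustrative sketch only:

```
# On the engine VM -- resets the internal "admin" user's password:
ovirt-aaa-jdbc-tool user password-reset admin \
  --password-valid-to="2026-01-01 00:00:00Z"
```

Then log in to the Administration Portal as "admin" on the "internal" profile.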
Bye
--
Ralf Schenk
fon: 02405 / 40 83 70
mail: rs(a)databay.de
web: www.databay.de <https://www.databay.de>
Databay AG
Jens-Otto-Krag-Str. 11
52146 Würselen
Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Dr. Jan Scholzen
2 years, 2 months
Q: oVirt 4.4.10 and vdsm-jsonrpc-java (Internal server error 500 after engine-setup)
by Andrei Verovski
Hi,
Finally I managed to migrate from 4.4.7 to a fresh installation of 4.4.10.
However, after a successful engine-setup I got a 500 - Internal Server Error.
I found this:
https://bugzilla.redhat.com/show_bug.cgi?id=1918022
Bug 1918022 - oVirt Manager is not loading after engine-setup
The article suggested downgrading vdsm-jsonrpc-java to 1.5.x.
However, this is not possible:
dnf --showduplicates list vdsm-jsonrpc-java
dnf install vdsm-jsonrpc-java-1.5.7-1.el8
Last metadata expiration check: 0:39:52 ago on Fri 15 Jul 2022 02:32:36 PM EEST.
Error:
Problem: problem with installed package ovirt-engine-backend-4.4.10.7-1.el8.noarch
- package ovirt-engine-backend-4.4.10.7-1.el8.noarch requires vdsm-jsonrpc-java >= 1.6.0, but none of the providers can be installed
- cannot install both vdsm-jsonrpc-java-1.5.7-1.el8.noarch and vdsm-jsonrpc-java-1.6.0-1.el8.noarch
- cannot install both vdsm-jsonrpc-java-1.6.0-1.el8.noarch and vdsm-jsonrpc-java-1.5.7-1.el8.noarch
How to fix this?
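For reference, the conflict message means the installed ovirt-engine-backend hard-requires vdsm-jsonrpc-java >= 1.6.0, so the 1.5.x downgrade from that (older) bug report cannot apply cleanly to 4.4.10. The dependency chain can be confirmed with standard dnf subcommands (output depends on the enabled repos), sketched here for illustration:

```
# Show which installed packages require vdsm-jsonrpc-java:
dnf repoquery --installed --whatrequires vdsm-jsonrpc-java
# List every version the enabled repositories offer:
dnf --showduplicates list vdsm-jsonrpc-java
```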
Thanks in advance.
***************** SERVER LOG *********************
2022-07-15 14:45:44,969+03 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" => "engine.ear")]) - failure description: {"WFLYCTL0080: Failed services" => {"jboss.deployment.subunit.\"engine.ear\".\"bll.jar\".component.Backend.START" => "java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
Caused by: java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
Caused by: javax.ejb.EJBException: org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on org.ovirt.engine.core.bll.TagsDirector@648487d3
Caused by: org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on org.ovirt.engine.core.bll.TagsDirector@648487d3
Caused by: java.lang.reflect.InvocationTargetException
Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: Unable to determine the correct call signature - no procedure/function/signature for 'gettagsbyparent_id'"}}
2022-07-15 14:45:44,981+03 INFO [org.jboss.as.server] (ServerService Thread Pool -- 25) WFLYSRV0010: Deployed "restapi.war" (runtime-name : "restapi.war")
2022-07-15 14:45:44,982+03 INFO [org.jboss.as.server] (ServerService Thread Pool -- 25) WFLYSRV0010: Deployed "engine.ear" (runtime-name : "engine.ear")
2022-07-15 14:45:44,982+03 INFO [org.jboss.as.server] (ServerService Thread Pool -- 25) WFLYSRV0010: Deployed "apidoc.war" (runtime-name : "apidoc.war")
2022-07-15 14:45:44,982+03 INFO [org.jboss.as.server] (ServerService Thread Pool -- 25) WFLYSRV0010: Deployed "ovirt-web-ui.war" (runtime-name : "ovirt-web-ui.war")
2022-07-15 14:45:45,015+03 INFO [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0183: Service status report
WFLYCTL0186: Services which failed to start: service jboss.deployment.subunit."engine.ear"."bll.jar".component.Backend.START: java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
WFLYCTL0448: 2 additional services are down due to their dependencies being missing or failed
2 years, 2 months
Seeking best performance on oVirt cluster
by David Johnson
Good morning all,
I am trying to get the best possible performance out of my cluster.
Here are the details of what I have now:
Ovirt version: 4.4.10.7-1.el8
Bare metal for the ovirt engine
two hosts
TrueNAS cluster storage
1 NFS share
3 vdevs, 6 drives in raidz2 in each vdev
2 nvme drives for silog
Storage network is 10 GBit all static IP addresses
Tonight, I built a new VM from a template. It had 5 attached disks
totalling 100 GB. It took 30 minutes to deploy the new VM from the
template.
Global utilization was 9%.
The SPM has 50% of its memory free and never showed more than 12% network
utilization.
62 out of 65 TB are available on the newly created NFS backing store (no
fragmentation). The TrueNAS system is probably overprovisioned for our use.
There were peak throughputs of up to 4 GBytes/second (on a 10 GBit
network), but overall throughput on the NAS and the network were low.
ARC hits were 95 to 100%
L2 hits were 0 to 70%
Here are the NFS usage stats:
[image: image.png]
I believe the first peak is where the silog buffered the initial burst of
instructions, followed by sustained IO as the VM volumes were built in
parallel, and then finally tapering off to the one 50 GB volume that took
40 minutes to copy.
The indications of the NFS stats graph are that the network performance is
just fine.
Here are the disk IO stats covering the same time frame, plus a bit before
to show a spike IO:
[image: image.png]
The spike at 2250 (10 minutes before I started building my VM) shows that
the spinners actually hit write speed of almost 20 MBytes per second
briefly, then settled in at a sustained 3 to 4 MBytes per second. The
silog absorbs several spikes, but remains mostly idle, with activity
measured in kilobytes per second.
The HGST HUS726060AL5210 drives have a 12 Gb/s SAS interface and a
sustained throughput of 227 MB/s.
------
Now to the questions:
1. Am I asking on the right list? Does this look like something where
tuning oVirt might make a difference, or is this more likely a
configuration issue with my storage appliances?
2. Am I expecting too much? Is this well within the bounds of acceptable
(expected) performance?
3. How would I go about identifying the bottleneck, should I need to dig
deeper?
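On question 3, one low-tech way to separate raw storage latency from oVirt overhead is to time synchronous writes directly from a host against the mounted storage domain. A minimal sketch of my own (not an oVirt tool); it uses a temp dir as a stand-in, so point target_dir at the NFS mount for a real measurement:

```shell
# Times a burst of fsync'd 1 MiB writes -- roughly the pattern a disk
# copy pushes at sync-heavy NFS/ZFS storage. Substitute the storage
# domain mount point for the temp dir to measure the real path.
target_dir=$(mktemp -d)
dd if=/dev/zero of="$target_dir/probe.bin" bs=1M count=32 conv=fsync 2>&1 | tail -n 1
rm -rf "$target_dir"
```

If the MB/s figure against the NFS mount is far below what the same command reports locally on the NAS, the bottleneck is in the sync-write path (silog/ZIL or network), not in oVirt.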
Thanks,
David Johnson
2 years, 2 months
unable to create iso domain
by Moritz Baumann
Hi
I have removed the ISO domain of an existing data center, and now I am
unable to create a new ISO domain.
/var/log/ovirt-engine/engine.log shows:
2022-07-14 08:04:40,684+02 INFO
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand]
(default task-34) [8db814e3-43ab-4921-ad35-2b3acd51c385] Lock Acquired
to object
'EngineLock:{exclusiveLocks='[ovirt.storage.inf.ethz.ch:/export/ovirt/iso=STORAGE_CONNECTION]',
sharedLocks=''}'
2022-07-14 08:04:40,689+02 WARN
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand]
(default task-34) [8db814e3-43ab-4921-ad35-2b3acd51c385] Validation of
action 'AddStorageServerConnection' failed for user
xxx@ethz.ch(a)ethz.ch-authz. Reasons:
VAR__ACTION__ADD,VAR__TYPE__STORAGE__CONNECTION,$connectionId
c39c64ef-fb8b-4e87-9803-420c7fb2dd4a,$storageDomainName
,ACTION_TYPE_FAILED_STORAGE_CONNECTION_ALREADY_EXISTS
2022-07-14 08:04:40,690+02 INFO
[org.ovirt.engine.core.bll.storage.connection.AddStorageServerConnectionCommand]
(default task-34) [8db814e3-43ab-4921-ad35-2b3acd51c385] Lock freed to
object
'EngineLock:{exclusiveLocks='[ovirt.scratch.inf.ethz.ch:/export/ovirt/iso=STORAGE_CONNECTION]',
sharedLocks=''}'
2022-07-14 08:04:40,756+02 INFO
[org.ovirt.engine.core.bll.storage.connection.DisconnectStorageServerConnectionCommand]
(default task-34) [4148e0fd-58ae-4375-8dc8-a08f47402ed6] Running
command: DisconnectStorageServerConnectionCommand internal: false.
Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2022-07-14 08:04:40,756+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(default task-34) [4148e0fd-58ae-4375-8dc8-a08f47402ed6] START,
DisconnectStorageServerVDSCommand(HostName = ovirt-node01,
StorageServerConnectionManagementVDSParameters:{hostId='d942c8fe-9a0c-4761-9be2-2f88b622070b',
storagePoolId='00000000-0000-0000-0000-000000000000', storageType='NFS',
connectionList='[StorageServerConnections:{id='null',
connection='ovirt.storage.inf.ethz.ch:/export/ovirt/iso', iqn='null',
vfsType='null', mountOptions='null', nfsVersion='null',
nfsRetrans='null', nfsTimeo='null', iface='null',
netIfaceName='null'}]', sendNetworkEventOnFailure='true'}), log id: 3043bbfd
2022-07-14 08:04:43,017+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(default task-34) [4148e0fd-58ae-4375-8dc8-a08f47402ed6] FINISH,
DisconnectStorageServerVDSCommand, return:
{00000000-0000-0000-0000-000000000000=100}, log id: 3043bbfd
[root@ovirt-engine ovirt-engine]# showmount -e ovirt.storage.inf.ethz.ch
| grep ovirt
Export list for ovirt.scratch.inf.ethz.ch:
/export/ovirt/export @ovirt-storage
/export/ovirt/data @ovirt-storage
/export/ovirt/iso @ovirt-storage
the other two domains still work just fine and the netgroup contains all
ovirt-nodes.
storage-node1[0]:/export/ovirt/iso# ls -la
total 2
drwx------. 2 vdsm kvm 2 Jul 14 07:58 .
drwxr-xr-x. 5 root root 5 Aug 19 2020 ..
storage-node1[0]:/export/ovirt/iso# df .
Filesystem 1K-blocks Used Available Use% Mounted on
fs1/ovirt/iso 524288000 256 524287744 1% /export/ovirt/iso
storage-node1[0]:/export/ovirt/iso#
storage-node1[0]:/export/ovirt/iso# exportfs -v | grep ovirt/ -A1
/export/ovirt/iso
@ovirt-storage(sync,wdelay,hide,no_subtree_check,fsid=215812,sec=sys:krb5:krb5i:krb5p,rw,secure,root_squash,no_all_squash)
/export/ovirt/data
@ovirt-storage(sync,wdelay,hide,no_subtree_check,fsid=215811,sec=sys:krb5:krb5i:krb5p,rw,secure,root_squash,no_all_squash)
--
/export/ovirt/export
@ovirt-storage(sync,wdelay,hide,no_subtree_check,fsid=215813,sec=sys:krb5:krb5i:krb5p,rw,secure,root_squash,no_all_squash)
It appears that there is still some reference to an ISO domain
(c39c64ef-fb8b-4e87-9803-420c7fb2dd4a??) in the database. How can I get
rid of it?
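For reference, the connection ID in the validation error points at a leftover entry in the engine's storage connection list, and such orphans can usually be listed and deleted through the REST API without touching the database directly. A sketch using the standard storageconnections endpoints; the engine FQDN and password are placeholders:

```
# List storage connections and locate the stale one:
curl -s -k -u admin@internal:PASSWORD \
  https://engine.example.org/ovirt-engine/api/storageconnections

# Delete it by the ID from the validation error:
curl -s -k -u admin@internal:PASSWORD -X DELETE \
  https://engine.example.org/ovirt-engine/api/storageconnections/c39c64ef-fb8b-4e87-9803-420c7fb2dd4a
```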
Best
Moritz
2 years, 2 months