Created a new user, but why can't it log in?
by tommy
I just created a new user:
[root@oeng ~]# ovirt-aaa-jdbc-tool user add cuitao
adding user cuitao...
user added successfully
[root@oeng ~]# ovirt-aaa-jdbc-tool user password-reset cuitao
Password:
Reenter password:
updating user cuitao...
user updated successfully
[root@oeng ~]# ovirt-aaa-jdbc-tool user edit cuitao --password-valid-to="2221-01-15 05:23:41Z"
updating user cuitao...
user updated successfully
[root@oeng ~]# ovirt-aaa-jdbc-tool user show cuitao
-- User cuitao(300163db-8352-4fbd-86ac-d25014364f08) --
Namespace: *
Name: cuitao
ID: 300163db-8352-4fbd-86ac-d25014364f08
Display Name:
Email: sz_cuitao(a)163.com
First Name: tommy
Last Name: cui
Department:
Title:
Description:
Account Disabled: false
Account Locked: false
Account Unlocked At: 1970-01-01 00:00:00Z
Account Valid From: 2021-01-15 05:23:41Z
Account Valid To: 2221-01-15 05:23:41Z
Account Without Password: false
Last successful Login At: 2021-01-15 05:54:49Z
Last unsuccessful Login At: 2021-01-15 05:32:12Z
Password Valid To: 2221-01-15 05:23:41Z
And I gave the VmCreator role to the new account.
But why can't it log in?
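A quick way to rule out the credentials themselves is the engine's AAA test tool. This is a sketch, run on the engine host and assuming the default internal profile name internal-authz:

```shell
# prompts for the password and attempts a full login through the AAA extension
ovirt-engine-extensions-tool aaa login-user --profile=internal-authz --user-name=cuitao
```

If that succeeds, the usual remaining suspects are the profile selected on the portal login page (it must be the internal profile) and whether the assigned role actually carries login permission.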
3 years, 10 months
New oVirt Cluster and Managed block storage
by Shantur Rathore
Hi all,
I am planning my new oVirt cluster on Apple hosts. These hosts can only have one disk, which I plan to partition and use for a hyperconverged setup.
As this is my first oVirt cluster I need help in understanding a few bits.
1. Is a hyperconverged setup possible with Ceph using cinderlib?
2. Can this hyperconverged setup be on oVirt Node Next hosts, or only CentOS?
3. Can I install cinderlib on oVirt Node Next hosts?
4. Are there any pitfalls in such a setup?
Thanks for your help
Regards,
Shantur
potential split-brain after upgrading Gluster version and rebooting one of three storage nodes
by user-5138@yandex.com
Hello everyone,
I'm running an oVirt cluster (4.3.10.4-1.el7) on a bunch of physical nodes with CentOS 7.9.2009, and the Hosted Engine is running as a virtual machine on one of these nodes. As for the storage, I'm running GlusterFS 6.7 on three separate physical storage nodes (also CentOS 7). Gluster itself has three different volumes of the type "Replicate" or "Distributed-Replicate".
I recently updated both the system packages and the GlusterFS version to 6.10 on the first storage node (storage1) and now I'm seeing a potential split-brain situation for one of the three volumes when running "gluster volume heal info":
Brick storage1:/data/glusterfs/nvme/brick1/brick
Status: Connected
Number of entries: 0
Brick storage2:/data/glusterfs/nvme/brick1/brick
/c32d664d-69ba-4c3f-8ea1-240133963815/dom_md/ids
/
/.shard/.remove_me
Status: Connected
Number of entries: 3
Brick storage3:/data/glusterfs/nvme/brick1/brick
/c32d664d-69ba-4c3f-8ea1-240133963815/dom_md/ids
/
/.shard/.remove_me
Status: Connected
Number of entries: 3
I checked the hashes, and the "dom_md/ids" file has a different md5 on every node. Running heal on the volume doesn't do anything and the entries remain. The heal info for the other two volumes shows no entries.
The affected gluster volume (type: replicate) is mounted as a Storage Domain using the path "storage1:/nvme" inside of oVirt and is used to store the root partitions of all virtual machines, which were running at the time of the upgrade and reboot of storage1. The volume has three bricks, with one brick being stored on each storage node. For the upgrade process I followed the steps shown at https://docs.gluster.org/en/latest/Upgrade-Guide/generic-upgrade-procedure/. I stopped and killed all gluster related services, upgraded both system and gluster packages and rebooted storage1.
Is this a split-brain situation, and how can I solve it? I would be very grateful for any help.
Please let me know if you require any additional information.
Best regards
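For the record, when heal info lists the same entries on two bricks and a file differs on every node, Gluster has dedicated split-brain resolution commands. A sketch, assuming the volume is named nvme as in the mount path above; check the split-brain report first and pick the source policy that fits:

```shell
# show only entries that are actually in split-brain
gluster volume heal nvme info split-brain
# resolve a single file, taking the copy with the newest mtime as the healing source
gluster volume heal nvme split-brain latest-mtime /c32d664d-69ba-4c3f-8ea1-240133963815/dom_md/ids
# or explicitly choose one brick as the source for that file
gluster volume heal nvme split-brain source-brick storage3:/data/glusterfs/nvme/brick1/brick /c32d664d-69ba-4c3f-8ea1-240133963815/dom_md/ids
```

Note that entries such as "/" and "/.shard/.remove_me" are often pending directory or metadata heals rather than true file split-brain, which is why "info split-brain" is worth checking before forcing a source.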
cockpit-ovirt-dashboard
by Gary Pedretty
So with the latest updates to oVirt and CentOS 8 Stream, it seems that you either cannot install cockpit-ovirt-dashboard, or, if you do, it downgrades cockpit-bridge and cockpit-system. These hosts then no longer show as up to date in the Open Virtualization Manager, and you can only get back to a fully updated datacenter by doing a yum update --allowerasing, which then removes cockpit-ovirt-dashboard.
My main concern is that, now that the command-line hosted-engine commands have been removed, if the hosted engine is not running for some reason you have no visibility into what is going on, since the regular Cockpit host interface will not show the virtualization features. You can't put a host into maintenance or start the engine manually.
Example:
[root@ravn-kvm-1 admin]# yum install cockpit-ovirt-dashboard
//////snip///////
Installed products updated.
Downgraded:
cockpit-bridge-217-1.el8.x86_64 cockpit-system-217-1.el8.noarch
Installed:
cockpit-dashboard-217-1.el8.noarch cockpit-ovirt-dashboard-0.14.17-1.el8.noarch ovirt-host-4.4.1-4.el8.x86_64 ovirt-hosted-engine-setup-2.4.9-1.el8.noarch
Complete!
[root@ravn-kvm-1 admin]# yum update
Last metadata expiration check: 3:08:22 ago on Tue 12 Jan 2021 01:00:15 PM AKST.
Error:
Problem 1: package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package cockpit-bridge-234-1.el8.x86_64 conflicts with cockpit-dashboard < 233 provided by cockpit-dashboard-217-1.el8.noarch
- cannot install the best update candidate for package ovirt-host-4.4.1-4.el8.x86_64
- cannot install the best update candidate for package cockpit-bridge-217-1.el8.x86_64
Problem 2: problem with installed package ovirt-host-4.4.1-4.el8.x86_64
- package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package cockpit-system-234-1.el8.noarch obsoletes cockpit-dashboard provided by cockpit-dashboard-217-1.el8.noarch
- cannot install the best update candidate for package cockpit-dashboard-217-1.el8.noarch
Problem 3: package ovirt-hosted-engine-setup-2.4.9-1.el8.noarch requires ovirt-host >= 4.4.0, but none of the providers can be installed
- package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package ovirt-host-4.4.1-1.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package ovirt-host-4.4.1-2.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package ovirt-host-4.4.1-3.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
- package cockpit-system-234-1.el8.noarch obsoletes cockpit-dashboard provided by cockpit-dashboard-217-1.el8.noarch
- cannot install the best update candidate for package ovirt-hosted-engine-setup-2.4.9-1.el8.noarch
- cannot install the best update candidate for package cockpit-system-217-1.el8.noarch
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
_______________________________
Gary Pedretty
IT Manager
Ravn Alaska
Office: 907-266-8451
Mobile: 907-388-2247
Email: gary.pedretty(a)ravnalaska.com
"We call Alaska......Home!"
Ravn Alaska
How does a host's status change? Especially Unassigned, NonOperational, or NonResponsive
by lifuqiong@sunyainfo.com
Hi all,
I'm confused about host statuses in the oVirt Engine.
When will a host's status become Unassigned, NonOperational, or NonResponsive? And if a host changes to one of these statuses, how will oVirt respond?
After reading the oVirt Engine source code, I found that only HostMonitoring.java and AutoRecoveryManager.java change the status of a host.
For example, if a host's status is changed to NonOperational, the AutoRecoveryManager will traverse the NonOperational hosts and call ActivateVdsCommand.java, which will only set the host's status to Unassigned? But I don't know what the next step is.
So where can I find an article, manual, or other helpful information about this question?
Thank you.
Yours sincerely
Mark
Re: Constantly XFS in memory corruption inside VMs
by Strahil Nikolov
Damn...
You are using EFI boot. Does this happen only to EFI machines?
Did you notice if only EL 8 is affected?
Best Regards,
Strahil Nikolov
On Sunday, 29 November 2020 at 19:36:09 GMT+2, Vinícius Ferrão <ferrao(a)versatushpc.com.br> wrote:
Yes!
I have a live VM right now that will be dead on a reboot:
[root@kontainerscomk ~]# cat /etc/*release
NAME="Red Hat Enterprise Linux"
VERSION="8.3 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.3"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.3 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8.3:GA"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.3"
Red Hat Enterprise Linux release 8.3 (Ootpa)
Red Hat Enterprise Linux release 8.3 (Ootpa)
[root@kontainerscomk ~]# sysctl -a | grep dirty
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 30
vm.dirty_writeback_centisecs = 500
vm.dirtytime_expire_seconds = 43200
[root@kontainerscomk ~]# xfs_db -r /dev/dm-0
xfs_db: /dev/dm-0 is not a valid XFS filesystem (unexpected SB magic number 0xa82a0000)
Use -F to force a read attempt.
[root@kontainerscomk ~]# xfs_db -r /dev/dm-0 -F
xfs_db: /dev/dm-0 is not a valid XFS filesystem (unexpected SB magic number 0xa82a0000)
xfs_db: size check failed
xfs_db: V1 inodes unsupported. Please try an older xfsprogs.
[root@kontainerscomk ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Nov 19 22:40:39 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=ad84d1ea-c9cc-4b22-8338-d1a6b2c7d27e /boot xfs defaults 0 0
UUID=4642-2FF6 /boot/efi vfat umask=0077,shortname=winnt 0 2
/dev/mapper/rhel-swap none swap defaults 0 0
Thanks,
-----Original Message-----
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: Sunday, November 29, 2020 2:33 PM
To: Vinícius Ferrão <ferrao(a)versatushpc.com.br>
Cc: users <users(a)ovirt.org>
Subject: Re: [ovirt-users] Re: Constantly XFS in memory corruption inside VMs
Can you check the output on the VM that was affected:
# cat /etc/*release
# sysctl -a | grep dirty
Best Regards,
Strahil Nikolov
On Sunday, 29 November 2020 at 19:07:48 GMT+2, Vinícius Ferrão via Users <users(a)ovirt.org> wrote:
Hi Strahil.
I’m not using barrier options on mount. It’s the default settings from CentOS install.
I have some additional findings: there's a large number of discarded packets on the switch on the hypervisor interfaces.
Discards are OK as far as I know, and I hope TCP handles this and does the proper retransmissions, but I wonder whether this may be related. Our storage is over NFS. My general expertise is with iSCSI and I've never seen this kind of issue with iSCSI, not that I'm aware of.
In other clusters, I’ve seen a high number of discards with iSCSI on XenServer 7.2 but there’s no corruption on the VMs there...
Thanks,
Sent from my iPhone
> On 29 Nov 2020, at 04:00, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>
> Are you using "nobarrier" mount options in the VM ?
>
> If yes, can you try to remove the "nobarrrier" option.
>
>
> Best Regards,
> Strahil Nikolov
>
> On Saturday, 28 November 2020 at 19:25:48 GMT+2, Vinícius Ferrão <ferrao(a)versatushpc.com.br> wrote:
>
>
>
>
>
> Hi Strahil,
>
> I moved a running VM to another host, rebooted, and no corruption was found. If there's any corruption it may be silent corruption... I've seen cases where the VM was new, just installed, I ran dnf -y update to get the updated packages, rebooted, and boom, XFS corruption. So perhaps the motion process isn't the one to blame.
>
> But, in fact, I remember when moving a VM that it went down during the process, and when I rebooted it was corrupted. But this may not seem related. It perhaps was already in an inconsistent state.
>
> Anyway, here's the mount options:
>
> Host1:
> 192.168.10.14:/mnt/pool0/ovirt/vm on
> /rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm type nfs4
> (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,noshar
> ecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=192.168.10.1,l
> ocal_lock=none,addr=192.168.10.14)
>
> Host2:
> 192.168.10.14:/mnt/pool0/ovirt/vm on
> /rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm type nfs4
> (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,noshar
> ecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=192.168.10.1,l
> ocal_lock=none,addr=192.168.10.14)
>
> The options are the default ones. I haven't changed anything when configuring this cluster.
>
> Thanks.
>
>
>
> -----Original Message-----
> From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
> Sent: Saturday, November 28, 2020 1:54 PM
> To: users <users(a)ovirt.org>; Vinícius Ferrão
> <ferrao(a)versatushpc.com.br>
> Subject: Re: [ovirt-users] Constantly XFS in memory corruption inside
> VMs
>
> Can you try with a test vm, if this happens after a Virtual Machine migration ?
>
> What are your mount options for the storage domain ?
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Saturday, 28 November 2020 at 18:25:15 GMT+2, Vinícius Ferrão via Users <users(a)ovirt.org> wrote:
>
> Hello,
>
>
>
> I’m trying to discover why an oVirt 4.4.3 Cluster with two hosts and NFS shared storage on TrueNAS 12.0 is constantly getting XFS corruption inside the VMs.
>
>
>
> For random reasons VMs get corrupted, sometimes halting or just being silently corrupted, and after a reboot the system is unable to boot due to “corruption of in-memory data detected”. Sometimes the corrupted data are “all zeroes”, sometimes there’s data there. In extreme cases XFS superblock 0 gets corrupted and the system cannot even detect an XFS partition anymore, since the magic XFS key is corrupted on the first blocks of the virtual disk.
>
>
>
> This is happening for a month now. We had to rollback some backups, and I don’t trust anymore on the state of the VMs.
>
>
>
> Using xfs_db I can see that some VMs have corrupted superblocks but the VM is up. One in particular was with sb0 corrupted, so I knew when a reboot kicked in the machine would be gone, and that’s exactly what happened.
>
>
>
> Another day I was just installing a new CentOS 8 VM for random reasons, and after running dnf -y update and a reboot the VM was corrupted needing XFS repair. That was an extreme case.
>
>
>
> So, I’ve looked at the TrueNAS logs, and there’s apparently nothing wrong with the system. No errors logged in dmesg, nothing in /var/log/messages and no errors on the “zpools”, not even after scrub operations. On the switch, a Catalyst 2960X, we’ve been monitoring it and all its interfaces. There are no “up and down” events and zero errors on all interfaces (we have a 4x port LACP on the TrueNAS side and 2x port LACP on each host), everything seems to be fine. The only metric that I was unable to get is “dropped packets”, but I don’t know if this can be an issue or not.
>
>
>
> Finally, on oVirt, I can’t find anything either. I looked on /var/log/messages and /var/log/sanlock.log but there’s nothing that I found suspicious.
>
>
>
> Is there anyone out there experiencing this? Our VMs are mainly CentOS 7/8 with XFS; there are 3 Windows VMs that do not seem to be affected, everything else is affected.
>
>
>
> Thanks all.
>
>
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org Privacy
> Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VLYSE7HC
> FNWTWFZZTL2EJHV36OENHUGB/
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CZ5E55LJMA7...
Using Ansible ovirt_vm
by Matthew.Stier@fujitsu.com
Over this weekend, I need to shut down and restart 1500 VMs while I do work. Modifying an Ansible playbook I have to accomplish that seems easy enough.
Last weekend I used the web GUI and edited each virtual machine, changing the 'cluster' to the one I wanted to assign it to. Now I would like to automate this with Ansible.
In between shutting them down and starting them up, I need to change the cluster registration of quite a few of the hosts, and I have concerns on how to do that.
The Ansible ovirt_vm documentation on this is a bit fuzzy. I'm assuming the 'cluster' parameter is for assigning the cluster, but in some examples it appears to be used as a filter, which changes the meaning. Is it "start the VM in this cluster" or "start this VM if it is in this cluster"?
The playbook I'm working from is straight forward and has three tasks.
First, an ovirt_auth task to log into the SHE.
Second, an ovirt_vm task which loops though the group of virtualmachines, defined in an 'ini' file.
Third, an ovirt_auth task to logout from the SHE.
For stopping, I assume I specify a name and set the state to 'stopped', and that should initiate the shutdown.
For starting, I assume I should set the state to 'running'.
For changing the cluster a VM is assigned to, do I simply need to use the 'cluster' parameter with the value of the new cluster? Are there other actions that need to be taken? Does the VM need to be unregistered and re-registered for the change to take effect? I'd like to make the cluster change an individual playbook, but could it be added to the start or stop playbook? (i.e.: stop and change cluster, start on new cluster)
Waiting for the wisdom of the community.
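The three tasks described above can be sketched roughly like this. Untested; names such as engine.example.com, NewCluster, and the vm_list variable are placeholders. In ovirt_vm, cluster is the cluster the VM should run in (acting as an assignment on an existing VM), and state: stopped / running drives the shutdown and start:

```yaml
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Log in to the hosted engine
      ovirt_auth:
        url: https://engine.example.com/ovirt-engine/api
        username: admin@internal
        password: "{{ engine_password }}"

    - name: Shut down each VM in the group
      ovirt_vm:
        auth: "{{ ovirt_auth }}"
        name: "{{ item }}"
        state: stopped
      loop: "{{ vm_list }}"

    - name: Move each VM to the new cluster and start it
      ovirt_vm:
        auth: "{{ ovirt_auth }}"
        name: "{{ item }}"
        cluster: NewCluster
        state: running
      loop: "{{ vm_list }}"

    - name: Log out of the engine
      ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"
```

Whether a stopped VM picks up the cluster change without any re-registration is worth testing on a single VM first.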
Re: VM console does not work with new cluster.
by tommy
I cannot connect even after stopping the firewalld service on the hosts.
-----Original Message-----
From: users-bounces(a)ovirt.org <users-bounces(a)ovirt.org> On Behalf Of Matthew.Stier(a)fujitsu.com
Sent: Thursday, January 14, 2021 11:20 PM
To: Strahil Nikolov <hunter86_bg(a)yahoo.com>; tommy <sz_cuitao(a)163.com>; eevans(a)digitaldatatechs.com
Cc: users(a)ovirt.org
Subject: [ovirt-users] Re: VM console does not work with new cluster.
Listed in firewalld service 'vdsm'
-----Original Message-----
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: Wednesday, January 13, 2021 10:52 PM
To: tommy <sz_cuitao(a)163.com>; Stier, Matthew <Matthew.Stier(a)fujitsu.com>; eevans(a)digitaldatatechs.com
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] Re: VM console does not work with new cluster.
I don't see the VNC ports at all (5900 and above).
Here is my firewall
on oVirt 4.3.10:
public (active)
target: default
icmp-block-inversion: no
interfaces: enp4s0 enp5s0f0 enp5s0f1 enp7s5f0 enp7s5f1 enp7s6f0
enp7s6f1 ovirtmgmt team0
sources:
services: cockpit ctdb dhcpv6-client glusterfs libvirt-tls nfs nfs3 nrpe ovirt-imageio ovirt-storageconsole ovirt-vmconsole rpc-bind samba snmp ssh vdsm
ports: 111/tcp 2049/tcp 54321/tcp 5900/tcp 5900-6923/tcp 5666/tcp 16514/tcp 54322/tcp 22/tcp 6081/udp 8080/tcp 963/udp 965/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
Best Regards,
Strahil Nikolov
At 05:25 +0800 on 13.01.2021 (Wed), tommy wrote:
> I encountered this problem too.
>
> The following is the console file for a VM that I can connect to using
> Remote Viewer:
>
> [virt-viewer]
> type=vnc
> host=192.168.10.41
> port=5900
> password=rdXQA4zr/UAY
> # Password is valid for 120 seconds.
> delete-this-file=1
> fullscreen=0
> title=HostedEngine:%d
> toggle-fullscreen=shift+f11
> release-cursor=shift+f12
> secure-attention=ctrl+alt+end
> versions=rhev-win64:2.0-160;rhev-win32:2.0-160;rhel8:7.0-3;rhel7:2.0-
> 6;rhel6:99.0-1
> newer-version-url=
> http://www.ovirt.org/documentation/admin-guide/virt/console-client-res
> ources
>
> [ovirt]
> host=ooeng.tltd.com:443
> vm-guid=76f99df2-ef79-45d9-8eea-a32b168f9ef3
> sso-token=4Up7TfLLBjSuQgPkQvRz3D-
> fUGZWZg4ynApe2Y7ylkARCFwQWsfEr3dU8FjlK8esctm3Im4tz80mE1DjrNT3XQ
> admin=1
> ca=-----BEGIN CERTIFICATE-----
> \nMIIDqDCCApCgAwIBAgICEAAwDQYJKoZIhvcNAQELBQAwPzELMAkGA1UEBhMCVVMxETA
> PBgNVBAoM\nCHRsdGQuY29tMR0wGwYDVQQDDBRvb2VuZy50bHRkLmNvbS4xNzczMDAeFw
> 0yMTAxMTAxNjE1NDda\nFw0zMTAxMDkxNjE1NDdaMD8xCzAJBgNVBAYTAlVTMREwDwYDV
> QQKDAh0bHRkLmNvbTEdMBsGA1UE\nAwwUb29lbmcudGx0ZC5jb20uMTc3MzAwggEiMA0G
> CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCg\nYT9S7hWiXQUzAqFQKbg2nMjwyHDmb/J
> mKeJAUVZqNKRg1q80IpWyoM12Zw0nX1eTwMnVY/JtJON4\n13PoEC3So8nniGt+wtHr44
> ysvCWfU0SBk/ZPnKmQ58o5MlSkidHwySChXfVPYLPWeUJ1JUrujna/\nCbi5bmmjx2pqw
> LrZXX8Q5NO2MRKOTs0Dtg16Q6z+a3cXLIffVJfhPGS3AkIh6nznNaDeH5gFZZbd\nr3DK
> E4xrpdw/7y6CgjmHe4vwGxOIyE+gElZ/lVtqznLMwohz7wgtgsDA36277mujNyMjMbrSF
> heu\n5WfbIa9VVSZWEkISVq6eswLOQ1IRaFyJsFN9AgMBAAGjga0wgaowHQYDVR0OBBYE
> FDYEqJOMqN8+\nQhCP7DAkqF3RZMFdMGgGA1UdIwRhMF+AFDYEqJOMqN8+QhCP7DAkqF3
> RZMFdoUOkQTA/MQswCQYD\nVQQGEwJVUzERMA8GA1UECgwIdGx0ZC5jb20xHTAbBgNVBA
> MMFG9vZW5nLnRsdGQuY29tLjE3NzMw\nggIQADAPBgNVHRMBAf8EBTADAQH/MA4GA1UdD
> wEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEA\nAKs0/yQWkoOkGcL0PjF9ijekdMmj
> rLZGyh5uLot7h9s/Y2+5l9n9IzEjjx9chi8xwt6MBsR6/nBT\n/skcciv2veM22HwNGjd
> rHvhfbZFnZsGe2TU60kGzKjlv1En/8Pgd2aWBcwTlr+SErBXkehNEJRj9\n1saycPgwS4
> pHS04c2+4JMhpe+hxgsO2+N/SYkP95Lf7ZQynVsN/SKx7X3cWybErCqoB7G7McqaHN\nV
> Ww+QNXo5islWUXqeDc3RcnW3kq0XUEzEtp6hoeRcLKO99QrAW31zqU/QY+EeZ6Fax1O/j
> rDafZn\npTs0KJFNgeVnUhKanB29ONy+tmnUmTAgPMaKKw==\n-----END
> CERTIFICATE-----\n
>
> the firewall list of the host 192.168.10.41 is:
>
> [root@ooengh1 ~]# firewall-cmd --list-all public (active)
> target: default
> icmp-block-inversion: no
> interfaces: bond0 ovirtmgmt
> sources:
> services: cockpit dhcpv6-client libvirt-tls ovirt-imageio ovirt-
> vmconsole snmp ssh vdsm
> ports: 6900/tcp 22/tcp 6081/udp
> protocols:
> masquerade: no
> forward-ports:
> source-ports:
> icmp-blocks:
> rich rules:
>
>
>
>
>
>
>
> The following is the console file for a VM that I cannot connect to using
> Remote Viewer:
>
> [virt-viewer]
> type=vnc
> host=ohost1.tltd.com
> port=5900
> password=4/jWA+RLaSZe
> # Password is valid for 120 seconds.
> delete-this-file=1
> fullscreen=0
> title=testol:%d
> toggle-fullscreen=shift+f11
> release-cursor=shift+f12
> secure-attention=ctrl+alt+end
> versions=rhev-win64:2.0-160;rhev-win32:2.0-160;rhel8:7.0-3;rhel7:2.0-
> 6;rhel6:99.0-1
> newer-version-url=
> http://www.ovirt.org/documentation/admin-guide/virt/console-client-res
> ources
>
> [ovirt]
> host=ooeng.tltd.com:443
> vm-guid=2b0eeecf-e561-4f60-b16d-dccddfcc852a
> sso-token=4Up7TfLLBjSuQgPkQvRz3D-
> fUGZWZg4ynApe2Y7ylkARCFwQWsfEr3dU8FjlK8esctm3Im4tz80mE1DjrNT3XQ
> admin=1
> ca=-----BEGIN CERTIFICATE-----
> \nMIIDqDCCApCgAwIBAgICEAAwDQYJKoZIhvcNAQELBQAwPzELMAkGA1UEBhMCVVMxETA
> PBgNVBAoM\nCHRsdGQuY29tMR0wGwYDVQQDDBRvb2VuZy50bHRkLmNvbS4xNzczMDAeFw
> 0yMTAxMTAxNjE1NDda\nFw0zMTAxMDkxNjE1NDdaMD8xCzAJBgNVBAYTAlVTMREwDwYDV
> QQKDAh0bHRkLmNvbTEdMBsGA1UE\nAwwUb29lbmcudGx0ZC5jb20uMTc3MzAwggEiMA0G
> CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCg\nYT9S7hWiXQUzAqFQKbg2nMjwyHDmb/J
> mKeJAUVZqNKRg1q80IpWyoM12Zw0nX1eTwMnVY/JtJON4\n13PoEC3So8nniGt+wtHr44
> ysvCWfU0SBk/ZPnKmQ58o5MlSkidHwySChXfVPYLPWeUJ1JUrujna/\nCbi5bmmjx2pqw
> LrZXX8Q5NO2MRKOTs0Dtg16Q6z+a3cXLIffVJfhPGS3AkIh6nznNaDeH5gFZZbd\nr3DK
> E4xrpdw/7y6CgjmHe4vwGxOIyE+gElZ/lVtqznLMwohz7wgtgsDA36277mujNyMjMbrSF
> heu\n5WfbIa9VVSZWEkISVq6eswLOQ1IRaFyJsFN9AgMBAAGjga0wgaowHQYDVR0OBBYE
> FDYEqJOMqN8+\nQhCP7DAkqF3RZMFdMGgGA1UdIwRhMF+AFDYEqJOMqN8+QhCP7DAkqF3
> RZMFdoUOkQTA/MQswCQYD\nVQQGEwJVUzERMA8GA1UECgwIdGx0ZC5jb20xHTAbBgNVBA
> MMFG9vZW5nLnRsdGQuY29tLjE3NzMw\nggIQADAPBgNVHRMBAf8EBTADAQH/MA4GA1UdD
> wEB/wQEAwIBBjANBgkqhkiG9w0BAQsFAAOCAQEA\nAKs0/yQWkoOkGcL0PjF9ijekdMmj
> rLZGyh5uLot7h9s/Y2+5l9n9IzEjjx9chi8xwt6MBsR6/nBT\n/skcciv2veM22HwNGjd
> rHvhfbZFnZsGe2TU60kGzKjlv1En/8Pgd2aWBcwTlr+SErBXkehNEJRj9\n1saycPgwS4
> pHS04c2+4JMhpe+hxgsO2+N/SYkP95Lf7ZQynVsN/SKx7X3cWybErCqoB7G7McqaHN\nV
> Ww+QNXo5islWUXqeDc3RcnW3kq0XUEzEtp6hoeRcLKO99QrAW31zqU/QY+EeZ6Fax1O/j
> rDafZn\npTs0KJFNgeVnUhKanB29ONy+tmnUmTAgPMaKKw==\n-----END
> CERTIFICATE-----\n
>
>
> the firewall list of the host ohost1.tltd.com(192.168.10.160) is:
>
> [root@ohost1 ~]# firewall-cmd --list-all public (active)
> target: default
> icmp-block-inversion: no
> interfaces: bond0 ovirtmgmt
> sources:
> services: cockpit dhcpv6-client libvirt-tls ovirt-imageio ovirt-
> vmconsole snmp ssh vdsm
> ports: 22/tcp 6081/udp
> protocols:
> masquerade: no
> forward-ports:
> source-ports:
> icmp-blocks:
> rich rules:
>
>
> Please give me some advice, thanks.
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> -----Original Message-----
> From: users-bounces(a)ovirt.org <users-bounces(a)ovirt.org> On Behalf Of
> Strahil Nikolov via Users
> Sent: Wednesday, January 13, 2021 3:15 AM
> To: Matthew.Stier(a)fujitsu.com; eevans(a)digitaldatatechs.com;
> users(a)ovirt.org
> Subject: [ovirt-users] Re: VM console does not work with new cluster.
>
>
> > It’s just that once the VM has been moved to the new cluster,
> > selecting console results in the same behavior, but that virt-
> > viewer starts and stops within a second.
>
> In order to debug, you will need to compare the files provided when
> you press the "console" button from both clusters and identify the
> problem.
>
> Have you compared the firewalld ports on 2 nodes (old and new
> cluster) ?
>
> Best Regards,
> Strahil Nikolov
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org Privacy
> Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3U5ZIELT
> USPKT6KZ7UZWWFCDRNCF5YLN/
>
>
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/N7JRNNAJDZH...
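Comparing the firewall listings quoted above: the failing host ohost1 only lists 22/tcp and 6081/udp — no VNC range at all — while the working 4.3 reference host has 5900-6923/tcp open. So the missing console port range looks like the difference. A sketch of opening it by hand on the new cluster's host (normally vdsm manages these rules, so it is also worth finding out why they were not applied there):

```shell
firewall-cmd --permanent --add-port=5900-6923/tcp
firewall-cmd --reload
firewall-cmd --list-ports
```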
Networking question on setting up a self-hosted engine
by Kim Yuan Lee
Dear Ovirt list,
I saw some failures in /var/log/ovirt-hosted-engine-setup such as:
1) Install oVirt Engine packages for restoring backup
2) Copy engine logs etc
But I'm not sure what the root cause of the setup failure is.
Attached is the log file for your reference. Could you please advise?
Thanks.
> On Thursday, 14 January 1:45 a.m., <Yedidyah Bar David> wrote:
>> Hi,
>>
>> On Wed, Jan 13, 2021 at 7:16 PM <netgoogling(a)gmail.com> wrote:
...
<https://lists.ovirt.org/archives/list/users@ovirt.org/thread/X5WRCCSULCNW...>
Dear list, I have tried setting up a self-hosted engine on a host with
ONE NIC (oVirt 4.4, CentOS 8 Stream). I followed the Quick Start Guide,
and tried the command line self-host setup, but ended up with the
following error: {u'msg': u'There was a failure deploying the engine on
the local engine VM. The system may not be provisioned according to the
playbook results.... I tried on another host with TWO NICs (Ovirt 4.3
Oracle Linux 7 Update 9). This time I set up a bridge BR0 and disabled EM1
(the first Ethernet interface on the host), and then created Bond0
on-top of BR0. Both Bond0 and EM2 (the second Ethernet interface on the
host) were up. And then I tried again using Ovirt-Cockpit wizard, with
the Engine VM set on BR0, and the deployment of Engine VM simply failed.
The Engine and Host are using the same network numbers (192.168.2.0/24)
and they resolved correctly. I read the logs in
var/log/ovirt-engine/engine.log but there wasn't any error reported. I
have already tried many times for the past few days and I'm at my
wits-end. May I know: 1) Is it possible to install Self-hosted engine
with just ONE NIC?
>> Generally speaking, yes.
...
<https://lists.ovirt.org/archives/list/users@ovirt.org/thread/X5WRCCSULCNW...>
2) Any suggestions on how to troubleshoot these problems? And any tested
network configurations?
>>
>> Please check/share all relevant logs. If unsure, all of /var/log.
>> Specifically, /var/log/ovirt-hosted-engine-setup/* (also engine-logs
>> subdirs, if relevant) and /var/log/vdsm/* on the hosts.
>>
>> Good luck and best regards,
Regards, Lee.