Cannot manually migrate VMs
by eevans@digitaldatatechs.com
I upgraded from 4.3.8 to 4.3.9. Before the upgrade, I could manually
migrate VMs, and it would also automatically load-balance across the
hosts. Now I cannot manually migrate, and I have one host with no VMs
and two with several.
If I put a host into maintenance mode, it will migrate the VMs off to
other hosts, but before, I noticed it would move VMs around to
load-balance, and now it does not.
Not sure if this is a bug or not.
How it happens:
When I right-click on a VM and click Migrate, the migrate dialog flashes
on screen and then disappears. The behavior is the same if I highlight
the VM and click the Migrate button at the top of the VM screen.
It's not critical, but it is something that needs to be corrected.
Any help or advice is very much appreciated.
Thanks.
4 years, 11 months
Cinderlib db not contained in engine-backup
by Thomas Klute
Dear oVirt users,
I just noticed that the ovirt_cinderlib database does not seem to be
contained in the archive file created by engine-backup.
Is it missing there?
Or is there any other way to restore the data of the ovirt_cinderlib
database in case a restore from the engine-backup were required?
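If it is indeed missing, would a plain pg_dump next to the engine-backup
run be a reasonable stopgap? A sketch of what I have in mind (untested;
assumes the database really is named ovirt_cinderlib and PostgreSQL runs
locally on the engine host):

  su - postgres -c "pg_dump -F c ovirt_cinderlib -f /var/tmp/ovirt_cinderlib.dump"

and, for a restore:

  su - postgres -c "pg_restore -d ovirt_cinderlib /var/tmp/ovirt_cinderlib.dump"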
Best regards,
Thomas
4 years, 11 months
Re: vm console problem
by David David
I did as you said:
I copied /etc/ovirt-engine/ca.pem from the engine onto my desktop into
/etc/pki/ca-trust/source/anchors and then ran update-ca-trust.
It didn't help; I still get the same errors.
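For the record, what I ran was along these lines (engine hostname
substituted):

  scp root@engine:/etc/ovirt-engine/ca.pem /etc/pki/ca-trust/source/anchors/ovirt-engine-ca.pem
  update-ca-trust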
> Fri, 27 Mar 2020 at 21:56, Strahil Nikolov <hunter86_bg(a)yahoo.com>:
>
>> On March 27, 2020 12:23:10 PM GMT+02:00, David David <dd432690(a)gmail.com>
>> wrote:
>> >here is the debug output from opening console.vv with remote-viewer
>> >
>> >2020-03-27 14:09 GMT+04:00, Milan Zamazal <mzamazal(a)redhat.com>:
>> >> David David <dd432690(a)gmail.com> writes:
>> >>
>> >>>>> yes I have
>> >>> console.vv attached
>> >>
>> >> It looks the same as mine.
>> >>
>> >> There is a difference in our logs, you have
>> >>
>> >> Possible auth 19
>> >>
>> >> while I have
>> >>
>> >> Possible auth 2
>> >>
>> >> So I still suspect a wrong authentication method is used, but I don't
>> >> have any idea why.
>> >>
>> >> Regards,
>> >> Milan
>> >>
>> >>> 2020-03-26 21:38 GMT+04:00, Milan Zamazal <mzamazal(a)redhat.com>:
>> >>>> David David <dd432690(a)gmail.com> writes:
>> >>>>
>> >>>>> I copied all certs except "cacrl" from the qemu server to my
>> >>>>> desktop into /etc/pki/
>> >>>>
>> >>>> This is not needed; the CA certificate is included in console.vv
>> >>>> and no other certificate should be needed.
>> >>>>
>> >>>>> but remote-viewer still didn't work
>> >>>>
>> >>>> The log looks like remote-viewer is attempting certificate
>> >>>> authentication rather than password authentication. Do you have a
>> >>>> password in console.vv? It should look like:
>> >>>>
>> >>>> [virt-viewer]
>> >>>> type=vnc
>> >>>> host=192.168.122.2
>> >>>> port=5900
>> >>>> password=fxLazJu6BUmL
>> >>>> # Password is valid for 120 seconds.
>> >>>> ...
>> >>>>
>> >>>> Regards,
>> >>>> Milan
>> >>>>
>> >>>>> 2020-03-26 2:22 GMT+04:00, Nir Soffer <nsoffer(a)redhat.com>:
>> >>>>>> On Wed, Mar 25, 2020 at 12:45 PM David David <dd432690(a)gmail.com>
>> >>>>>> wrote:
>> >>>>>>>
>> >>>>>>> ovirt 4.3.8.2-1.el7
>> >>>>>>> gtk-vnc2-1.0.0-1.fc31.x86_64
>> >>>>>>> remote-viewer version 8.0-3.fc31
>> >>>>>>>
>> >>>>>>> I can't open a VM console with remote-viewer.
>> >>>>>>> The VM uses the VNC console protocol.
>> >>>>>>> When I click the console button to connect to a VM, the
>> >>>>>>> remote-viewer console disappears immediately.
>> >>>>>>>
>> >>>>>>> The remote-viewer debug log is in the attachment.
>> >>>>>>
>> >>>>>> You have an issue with the certificates:
>> >>>>>>
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.238:
>> >>>>>> ../src/vncconnection.c Set credential 2 libvirt
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c Searching for certs in /etc/pki
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c Searching for certs in /root/.pki
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c Failed to find certificate CA/cacert.pem
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c No CA certificate provided, using GNUTLS
>> >>>>>> global trust
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c Failed to find certificate CA/cacrl.pem
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c Failed to find certificate
>> >>>>>> libvirt/private/clientkey.pem
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c Failed to find certificate
>> >>>>>> libvirt/clientcert.pem
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c Waiting for missing credentials
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c Got all credentials
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.239:
>> >>>>>> ../src/vncconnection.c No CA certificate provided; trying the
>> >>>>>> system trust store instead
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
>> >>>>>> ../src/vncconnection.c Using the system trust store and CRL
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
>> >>>>>> ../src/vncconnection.c No client cert or key provided
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.240:
>> >>>>>> ../src/vncconnection.c No CA revocation list provided
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.241:
>> >>>>>> ../src/vncconnection.c Handshake was blocking
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.243:
>> >>>>>> ../src/vncconnection.c Handshake was blocking
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.251:
>> >>>>>> ../src/vncconnection.c Handshake was blocking
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.298:
>> >>>>>> ../src/vncconnection.c Handshake done
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.298:
>> >>>>>> ../src/vncconnection.c Validating
>> >>>>>> (remote-viewer:2721): gtk-vnc-DEBUG: 11:56:25.301:
>> >>>>>> ../src/vncconnection.c Error: The certificate is not trusted
>> >>>>>>
>> >>>>>> Adding people that may know more about this.
>> >>>>>>
>> >>>>>> Nir
>> >>>>>>
>> >>>>>>
>> >>>>
>> >>>>
>> >>
>> >>
>>
>> Hello,
>>
>> You can try to take the engine's CA (maybe it's useless) and put it on
>> your system in:
>> /etc/pki/ca-trust/source/anchors (if it's EL7 or a Fedora) and then run
>> update-ca-trust
>>
>> Best Regards,
>> Strahil Nikolov
>>
>
4 years, 11 months
Import storage domain with different storage type?
by Rik Theys
Hi,
We have an oVirt environment with an FC storage domain. Multiple LUNs on
a SAN are exported to the oVirt nodes and combined in a single FC
storage domain.
The SAN replicates the disks to another storage box that has iSCSI
connectivity.
Is it possible - in case of disaster - to import the existing,
replicated storage domain as an iSCSI domain and import/run the VMs
from that domain? Or is import of a storage domain only possible if the
storage type is the same? Does it also work if multiple LUNs are needed
to form the storage domain?
Are there any special actions that should be performed beyond the
regular import action?
Regards,
Rik
--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
+32(0)16/32.11.07
----------------------------------------------------------------
<<Any errors in spelling, tact or fact are transmission errors>>
4 years, 11 months
Doubts related to single HCI and storage network
by Gianluca Cecchi
Hello,
I deployed a single-node HCI 4.3.9 setup with Gluster from the
Cockpit-based interface.
During the install I specified the networks as follows:
1) for the mgmt network and the hostname of the hypervisor:
172.16.0.30 ovirt.mydomain
2) for the storage network (even if not used with a single host... but in
case of future additions...):
10.50.50.11 ovirtst.mydomain.storage
All went well, and the system runs quite OK: I was able to deploy an OCP
4.3.8 cluster with 3 masters and 3 workers... apart from erratic "vm
paused" messages, for which I'm going to send a dedicated mail...
I see in engine.log warning messages of this kind:
2020-03-27 00:32:08,655+01 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [15cbd52e] Could not associate brick
'ovirtst.mydomain.storage:/gluster_bricks/engine/engine' of volume
'40ad3b5b-4cc1-495a-815b-3c7e3436b15b' with correct network as no gluster
network found in cluster '9cecfa02-6c6c-11ea-8a94-00163e0acd5c'
I would have expected the setup to create a gluster network, as it was
part of the initial configuration... could this be a subject for an RFE?
What can I do to fix this warning?
Thanks,
Gianluca
4 years, 11 months
vm console problem
by David David
ovirt 4.3.8.2-1.el7
gtk-vnc2-1.0.0-1.fc31.x86_64
remote-viewer version 8.0-3.fc31
I can't open a VM console with remote-viewer.
The VM uses the VNC console protocol.
When I click the console button to connect to the VM, the remote-viewer
console disappears immediately.
The remote-viewer debug log is in the attachment.
4 years, 11 months
Speed Issues
by Christian Reiss
Hey folks,
A Gluster-related question: I have SSDs in a RAID that can do 2 GB/sec
writes and reads (actually above that, but meh) in a 3-way HCI cluster
connected over 10 GBit, yet things are pretty slow inside Gluster.
I have these settings:
Options Reconfigured:
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.shd-max-threads: 8
features.shard: on
features.shard-block-size: 64MB
server.event-threads: 8
user.cifs: off
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.eager-lock: enable
performance.low-prio-threads: 32
network.ping-timeout: 30
cluster.granular-entry-heal: enable
storage.owner-gid: 36
storage.owner-uid: 36
cluster.choose-local: true
client.event-threads: 16
performance.strict-o-direct: on
network.remote-dio: enable
performance.client-io-threads: on
nfs.disable: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
cluster.readdir-optimize: on
cluster.metadata-self-heal: on
cluster.data-self-heal: on
cluster.entry-self-heal: on
cluster.data-self-heal-algorithm: full
features.uss: enable
features.show-snapshot-directory: on
features.barrier: disable
auto-delete: enable
snap-activate-on-create: enable
Writing inside /gluster_bricks yields those 2 GB/sec writes, and reads
are the same.
Inside the /rhev/data-center/mnt/glusterSD/ dir, reads go down to
366 MB/sec while writes plummet to 200 MB/sec.
Summed up: writing into the SSD RAID in the LVM/XFS Gluster brick
directory is fast; writing into the mounted Gluster dir is horribly slow.
The above can be seen and repeated on all 3 servers. The network can do
full 10gbit (tested with, among others: rsync, iperf3).
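For anyone wanting to reproduce the numbers: a typical sequential test
with direct I/O (bypassing the page cache) would be something like the
following; the mount path is a placeholder, adjust it to your volume:

  # sequential write, 4 GiB, direct I/O
  dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/HOST:_VOLUME/testfile bs=1M count=4096 oflag=direct
  # sequential read of the same file
  dd if=/rhev/data-center/mnt/glusterSD/HOST:_VOLUME/testfile of=/dev/null bs=1M iflag=direct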
Anyone with some idea on what's missing / going on here?
Thanks folks,
as always stay safe and healthy!
--
with kind regards,
mit freundlichen Gruessen,
Christian Reiss
4 years, 11 months
bare-metal to self-hosted engine
by kim.kargaard@noroff.no
Hi,
We currently have an oVirt engine running on a server. The server has CentOS and the ovirt-engine installed, but it is not a node that hosts VMs. I would like to move the ovirt-engine to a self-hosted engine, and it seems like this article is the one to follow: https://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_...
Am I correct that I can migrate from a bare-metal CentOS engine server to a self-hosted engine VM? And is the documentation above the only documentation I will need to complete this process?
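From my reading so far, the rough sequence would be something like the
following (untested on my side, so please correct me if this is wrong):

  # on the current bare-metal engine
  engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log
  # copy the backup to the designated host, then run the deployment there,
  # restoring the engine from the backup file
  hosted-engine --deploy --restore-from-file=engine-backup.tar.gz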
Kind regards
Kim
4 years, 11 months
can't run VM
by garcialiang.anne@gmail.com
Hi,
I created a VM on ovirt-engine, but I can't run this VM. The message is:
2020-03-26 21:28:02,745+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-147) [] EVENT_ID: USER_FAILED_RUN_VM(54), Failed to run VM VirtMachine due to a failed validation: [Cannot run VM. There is no host that satisfies current scheduling constraints. See below for details:, The host xxxxxxxxx did not satisfy internal filter Network because display network ovirtmgmt was missing.] (User: admin@internal-authz).
Could you help me?
Thanks
Anne
4 years, 11 months
Orphaned ISO Storage Domain
by bob.franzke@mdaemon.com
Greetings all,
Full disclosure: complete oVirt novice here. I inherited an oVirt system and had a complete ovirt-engine failure back in December-January. Because of time constraints and my inexperience with oVirt, I had to resort to hiring consultants to rebuild my oVirt engine from backups. That’s a situation I never want to repeat.
Anyway, we were able to piece it together and at least get most functionality back. The previous setup had an ISO storage domain called ‘ISO-COLO’ that seems to have been hosted on the engine server itself. The engine hostname is ‘mydesktop’. We restored the engine from backups I had taken of the SQL DB and various support files, using the built-in oVirt backup tool.
So now, looking in the oVirt console, I see the storage domain listed. It has a status of ‘inactive’ in the list of the various storage domains we have set up. We tried to ‘activate’ it and it fails activation. The path listed for the domain is mydesktop:/gluster/colo-iso. On the host, however, there is no mountpoint that equates to that path:
[root@mydesktop ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 47G 0 47G 0% /dev
tmpfs 47G 12K 47G 1% /dev/shm
tmpfs 47G 131M 47G 1% /run
tmpfs 47G 0 47G 0% /sys/fs/cgroup
/dev/mapper/centos-root 50G 5.4G 45G 11% /
/dev/sda2 1014M 185M 830M 19% /boot
/dev/sda1 200M 12M 189M 6% /boot/efi
/dev/mapper/centos-home 224G 15G 210G 7% /home
tmpfs 9.3G 0 9.3G 0% /run/user/0
The original layout looked like this on the broken engine:
[root@mydesktop ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos_mydesktop-root 50G 27G 20G 58% /
devtmpfs 24G 0 24G 0% /dev
tmpfs 24G 28K 24G 1% /dev/shm
tmpfs 24G 42M 24G 1% /run
tmpfs 24G 0 24G 0% /sys/fs/cgroup
/dev/mapper/centos_mydesktop-home 25G 45M 24G 1% /home
/dev/sdc1 1014M 307M 708M 31% /boot
/dev/mapper/centos_mydesktop-gluster 177G 127G 42G 76% /gluster
tmpfs 4.7G 0 4.7G 0% /run/user/0
So it seems the orphaned storage domain is just pointing to a path that does not exist on the new engine host.
I also noticed some of the hosts are trying to access this storage domain and getting errors:
The error message for connection mydesktop:/gluster/colo-iso returned by VDSM was: Problem while trying to mount target
3/17/20 10:47:05 AM
Failed to connect Host vm-host-colo-2 to the Storage Domains ISO-Colo.
3/17/20 10:47:05 AM
So it seems hosts are trying to be connected to this storage domain but cannot because its not there. Any of the files from the original path are not available so I am not even sure what we are missing if anything.
So what are my options here? Destroy the current ISO domain and recreate it, or somehow provide the correct path on the engine server? Currently the storage space I can use is mounted under /home, which is a different path than the original one. I'm not sure if anything can be done with the disk layout at this point to correct this on the engine server itself and get the gluster path back. Right now we cannot attach CDs to VMs for booting. No choices show up when doing a ‘run once’ on an existing VM, so I would like to get this working so I can fix a broken VM that I need to boot off ISO media.
Thanks in advance for any help you can provide.
4 years, 11 months