Ubuntu 18.04 and 16.04 cloud images hang at boot up
by suaro@live.com
I'm using oVirt 4.3 (latest) and am able to provision CentOS VMs successfully without any problems.
When I attempt to provision Ubuntu VMs, they hang at startup.
The console shows:
...
...
[ 4.010016] Btrfs loaded
[ 101.268594] random: nonblocking pool is initialized
It stays like this indefinitely.
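A quick way to check whether the guest even has an entropy source (just a guess, prompted by the long gap before "random: nonblocking pool is initialized"; the VM name below is a placeholder) is, from the oVirt host:

virsh -r list --all                             # read-only virsh needs no credentials on an oVirt host
virsh -r dumpxml ubuntu-test | grep -A3 '<rng'  # is any virtio-rng device defined at all?

If nothing shows up, enabling the VM's Random Generator option (the cluster also needs a random-generator source enabled) might be worth a try, but that is only a hunch.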
Again, I have no problems with CentOS images, but I need Ubuntu.
Any tips greatly appreciated.
4 years, 6 months
ovirt-guest-agent for CentOS 8
by Eduardo Mayoral
Hi,
Just like many of you, I am testing my first CentOS 8 VMs on top of oVirt.
I am not finding the package ovirt-guest-agent.noarch. The closest I can
find is qemu-guest-agent.x86_64.
After installing and starting it, I do see information reported on the
"Guest info" tab.
Can anybody confirm whether this is indeed the agent we should be using? Is
there - or will there be - a more specific package for oVirt guests?
Thanks!
--
Eduardo Mayoral Jimeno
Systems engineer, platform department. Arsys Internet.
emayoral(a)arsys.es - +34 941 620 105 - ext 2153
4 years, 7 months
Re: Hyperconverged setup questions
by Strahil
Hi Marko,
I guess you can use distributed-replicated volumes and an oVirt cluster with host triplets.
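Roughly like this (hostnames and brick paths below are only placeholders); a distributed-replicated volume is just several replica-3 triplets behind one volume name:

# the first three bricks form one replica set, the next three another;
# gluster then distributes files across the two sets
gluster volume create data replica 3 \
  hostA:/gluster_bricks/data/data hostB:/gluster_bricks/data/data hostC:/gluster_bricks/data/data \
  hostD:/gluster_bricks/data/data hostE:/gluster_bricks/data/data hostF:/gluster_bricks/data/data
gluster volume start data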
Best Regards,
Strahil Nikolov
On Oct 10, 2019 15:30, "Vrgotic, Marko" <M.Vrgotic(a)activevideo.com> wrote:
>
> Dear oVirt,
>
>
>
> Is it possible to add an oVirt 3-host/Gluster hyperconverged cluster to an existing oVirt setup? I need this to achieve local-storage performance, but still have a pool of hypervisors available.
>
> Is it possible to have more than 3 hosts in a hyperconverged setup?
>
>
>
> I currently have 1 shared cluster (NFS-based storage, where the SHE is also hosted) and 2 local-storage clusters.
>
>
>
> The oVirt version currently running is 4.3.4.
>
>
>
> Kindly awaiting your reply.
>
>
>
>
>
> — — —
> Met vriendelijke groet / Kind regards,
>
> Marko Vrgotic
>
> ActiveVideo
>
>
>
>
4 years, 8 months
Re: Damaged hard disk in volume replica gluster
by Strahil
As you are replacing an old brick, you have to recreate the old LV and mount it at the same location.
Then you can use gluster's "reset-brick" (I think the oVirt UI has that option too) and all data will be replicated there.
You also have the "replace-brick" option if you decide to change the mount location.
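A rough sketch of the reset-brick path (volume name, host and brick path are placeholders):

# take the dead brick out of service
gluster volume reset-brick myvol myhost:/gluster_bricks/data/brick start
# recreate the LV and mount it at the same path, then bring the brick back
gluster volume reset-brick myvol myhost:/gluster_bricks/data/brick myhost:/gluster_bricks/data/brick commit force
# trigger a full heal so the new brick gets populated
gluster volume heal myvol full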
P.S.: With replica volumes, your volume should still be working; if it has stopped, you have to investigate before proceeding.
Best Regards,
Strahil Nikolov
On Oct 12, 2019 13:12, matteo fedeli <matmilan97(a)gmail.com> wrote:
>
> Hi, I have a volume in my oVirt HCI that does not work properly because one of the three HDDs failed. I bought a new HDD and recreated the old LVM partitioning and mount point. Now, how can I attach this empty new brick?
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/U5XQ4M2K5RB...
4 years, 9 months
Unsynced entries do not self-heal during upgrade from oVirt 4.2 -> 4.3
by Goorkate, B.J.
Hi all,
I'm in the process of upgrading oVirt-nodes from 4.2 to 4.3.
After upgrading the first of 3 oVirt/gluster nodes, there have been between 600 and 1200 unsynced entries for a week now on the upgraded node and on one not-yet-upgraded node. The third node (also not yet upgraded) says it's OK (no unsynced entries).
The cluster doesn't seem to be very busy, but somehow self-heal doesn't complete.
Is this because of different gluster versions across the nodes, and will it resolve as soon as I have upgraded all nodes? Since it's our production cluster, I don't want to take any risks...
Does anybody recognise this problem? Of course I can provide more information if necessary.
Any hints on troubleshooting the unsynced entries are more than welcome!
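In case it helps, this is roughly how I am watching the heal state (the volume name is a placeholder):

gluster volume heal myvol info              # lists the unsynced entries per brick
gluster volume status myvol                 # are all bricks and self-heal daemons online?
gluster volume get all cluster.op-version   # is the op-version consistent while nodes run mixed versions?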
Thanks in advance!
Regards,
Bertjan
4 years, 9 months
Vm suddenly paused with error "vm has paused due to unknown storage error"
by Jasper Siero
Hi all,
Since we upgraded our oVirt nodes to CentOS 7, a VM (not a specific one, but never more than one at a time) will sometimes pause suddenly with the error "VM ... has paused due to unknown storage error". It has now happened twice in a month.
The oVirt node uses SAN storage for the VMs running on it. When a specific VM pauses with this error, the other VMs keep running without problems.
The VM runs without problems after unpausing it.
Versions:
CentOS Linux release 7.1.1503
vdsm-4.14.17-0
libvirt-daemon-1.2.8-16
vdsm.log:
VM Channels Listener::DEBUG::2015-10-25 07:43:54,382::vmChannels::95::vds::(_handle_timeouts) Timeout on fileno 78.
libvirtEventLoop::INFO::2015-10-25 07:43:56,177::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
libvirtEventLoop::DEBUG::2015-10-25 07:43:56,178::vm::5204::vm.Vm::(_onLibvirtLifecycleEvent) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::event Suspended detail 2 opaque None
libvirtEventLoop::INFO::2015-10-25 07:43:56,178::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
...........
libvirtEventLoop::INFO::2015-10-25 07:43:56,180::vm::4602::vm.Vm::(_onIOError) vmId=`77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb`::abnormal vm stop device virtio-disk0 error eother
specific error part in libvirt vm log:
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
...........
block I/O error in device 'drive-virtio-disk0': Unknown error 32758 (32758)
engine.log:
2015-10-25 07:44:48,945 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-40) [a43dcc8] VM diataal-prod-cas1 77f07ae0-cc3e-4ae2-90ec-7fba7b11deeb moved from
Up --> Paused
2015-10-25 07:44:49,003 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-40) [a43dcc8] Correlation ID: null, Call Stack: null, Custom Event
ID: -1, Message: VM diataal-prod-cas1 has paused due to unknown storage error.
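A few host-side checks that might narrow it down (only my guesses at where to look; the VM name is the one from the log above):

virsh -r domstate diataal-prod-cas1 --reason   # should report something like "paused (ioerror)"
multipath -ll                                  # any failed or faulty SAN paths at the time of the pause?
grep -i 'i/o error' /var/log/messages | tail   # kernel-level errors around the same timestamp?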
Has anyone experienced the same problem or knows a way to solve this?
Kind regards,
Jasper
4 years, 9 months
hosted-engine --deploy fails after "Wait for the host to be up" task
by Fredy Sanchez
*Hi all,*
*[root@bric-ovirt-1 ~]# cat /etc/*release**
CentOS Linux release 7.7.1908 (Core)
*[root@bric-ovirt-1 ~]# yum info ovirt-engine-appliance*
Installed Packages
Name : ovirt-engine-appliance
Arch : x86_64
Version : 4.3
Release : 20191121.1.el7
Size : 1.0 G
Repo : installed
From repo : ovirt-4.3
*Same situation as https://bugzilla.redhat.com/show_bug.cgi?id=1787267. The error message
almost everywhere is a red-herring message about Ansible:*
[ INFO ] TASK [ovirt.hosted_engine_setup : Wait for the host to be up]
[ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts":
[]}, "attempts": 120, "changed": false, "deprecations": [{"msg": "The
'ovirt_host_facts' module has been renamed to 'ovirt_host_info', and the
renamed one no longer returns ansible_facts", "version": "2.13"}]}
[ INFO ] TASK [ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The
system may not be provisioned according to the playbook results: please
check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing
ansible-playbook
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the
issue, fix accordingly or re-deploy from scratch.
Log file is located at
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20200126170315-req4qb.log
*But the "real" problem seems to be SSH related, as you can see below*
*[root@bric-ovirt-1 ovirt-engine]# pwd*
/var/log/ovirt-hosted-engine-setup/engine-logs-2020-01-26T17:19:28Z/ovirt-engine
*[root@bric-ovirt-1 ovirt-engine]# grep -i error engine.log*
2020-01-26 17:26:50,178Z ERROR
[org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-1)
[2341fd23-f0c7-4f1c-ad48-88af20c2d04b] Failed to establish session with
host 'bric-ovirt-1.corp.modmed.com': SSH session closed during connection '
root(a)bric-ovirt-1.corp.modmed.com'
2020-01-26 17:26:50,205Z ERROR
[org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default
task-1) [] Operation Failed: [Cannot add Host. Connecting to host via SSH
has failed, verify that the host is reachable (IP address, routable address
etc.) You may refer to the engine.log file for further details.]
*The funny thing is that the engine can indeed ssh to bric-ovirt-1
(physical host). See below*
*[root@bric-ovirt-1 ovirt-hosted-engine-setup]# cat /etc/hosts*
192.168.1.52 bric-ovirt-engine.corp.modmed.com # temporary entry added by
hosted-engine-setup for the bootstrap VM
127.0.0.1 localhost localhost.localdomain localhost4
localhost4.localdomain4
#::1 localhost localhost.localdomain localhost6
localhost6.localdomain6
10.130.0.50 bric-ovirt-engine bric-ovirt-engine.corp.modmed.com
10.130.0.51 bric-ovirt-1 bric-ovirt-1.corp.modmed.com
10.130.0.52 bric-ovirt-2 bric-ovirt-2.corp.modmed.com
10.130.0.53 bric-ovirt-3 bric-ovirt-3.corp.modmed.com
192.168.0.1 bric-ovirt-1gluster bric-ovirt-1gluster.corp.modmed.com
192.168.0.2 bric-ovirt-2gluster bric-ovirt-2gluster.corp.modmed.com
192.168.0.3 bric-ovirt-3gluster bric-ovirt-3gluster.corp.modmed.com
[root@bric-ovirt-1 ovirt-hosted-engine-setup]#
*[root@bric-ovirt-1 ~]# ssh 192.168.1.52*
Last login: Sun Jan 26 17:55:20 2020 from 192.168.1.1
[root@bric-ovirt-engine ~]#
[root@bric-ovirt-engine ~]#
*[root@bric-ovirt-engine ~]# ssh bric-ovirt-1*
Password:
Password:
Last failed login: Sun Jan 26 18:17:16 UTC 2020 from 192.168.1.52 on
ssh:notty
There was 1 failed login attempt since the last successful login.
Last login: Sun Jan 26 18:16:46 2020
###################################################################
# UNAUTHORIZED ACCESS TO THIS SYSTEM IS PROHIBITED #
# #
# This system is the property of Modernizing Medicine, Inc. #
# It is for authorized Company business purposes only. #
# All connections are monitored and recorded. #
# Disconnect IMMEDIATELY if you are not an authorized user! #
###################################################################
[root@bric-ovirt-1 ~]#
[root@bric-ovirt-1 ~]#
[root@bric-ovirt-1 ~]# exit
logout
Connection to bric-ovirt-1 closed.
[root@bric-ovirt-engine ~]#
[root@bric-ovirt-engine ~]#
*[root@bric-ovirt-engine ~]# ssh bric-ovirt-1.corp.modmed.com*
Password:
Last login: Sun Jan 26 18:17:22 2020 from 192.168.1.52
###################################################################
# UNAUTHORIZED ACCESS TO THIS SYSTEM IS PROHIBITED #
# #
# This system is the property of Modernizing Medicine, Inc. #
# It is for authorized Company business purposes only. #
# All connections are monitored and recorded. #
# Disconnect IMMEDIATELY if you are not an authorized user! #
###################################################################
[root@bric-ovirt-1 ~]# exit
logout
Connection to bric-ovirt-1.corp.modmed.com closed.
[root@bric-ovirt-engine ~]#
[root@bric-ovirt-engine ~]#
[root@bric-ovirt-engine ~]# exit
logout
Connection to 192.168.1.52 closed.
[root@bric-ovirt-1 ~]#
*So, what gives? I already disabled all SSH security on the physical host,
and whitelisted all potential IPs from the engine using firewalld.
Regardless, the engine can ssh to the host as root :-(. Is there maybe
another user that's used for the "Wait for the host to be up" SSH test?
Yes, I tried both passwords and certificates.*
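A quick way to see what the engine itself thinks of the host while the deploy runs is to ask its REST API directly (a sketch; the admin password is a placeholder, and the FQDN resolves to the bootstrap VM through the temporary /etc/hosts entry shown below):

curl -sk -u 'admin@internal:PASSWORD' \
  https://bric-ovirt-engine.corp.modmed.com/ovirt-engine/api/hosts \
  | grep -e '<name>' -e '<status>'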
*Maybe what's really happening is that the engine is not getting the right
IP? bric-ovirt-engine is supposed to get 10.130.0.50, but it never gets
there; instead it gets 192.168.1.52 from virbr0 on bric-ovirt-1. See below.*
--== HOST NETWORK CONFIGURATION ==--
Please indicate the gateway IP address [10.130.0.1]
Please indicate a nic to set ovirtmgmt bridge on: (p4p1, p5p1)
[p4p1]:
--== VM CONFIGURATION ==--
You may specify a unicast MAC address for the VM or accept a randomly
generated default [00:16:3e:17:1d:f8]:
How should the engine VM network be configured (DHCP,
Static)[DHCP]? static
Please enter the IP address to be used for the engine VM []:
10.130.0.50
[ INFO ] The engine VM will be configured to use 10.130.0.50/25
Please provide a comma-separated list (max 3) of IP addresses of
domain name servers for the engine VM
Engine VM DNS (leave it empty to skip) [10.130.0.2,10.130.0.3]:
Add lines for the appliance itself and for this host to
/etc/hosts on the engine VM?
Note: ensuring that this host could resolve the engine VM
hostname is still up to you
(Yes, No)[No] Yes
*[root@bric-ovirt-1 ~]# ip addr*
3: p4p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group
default qlen 1000
link/ether 00:0a:f7:f1:c6:80 brd ff:ff:ff:ff:ff:ff
inet 10.130.0.51/25 brd 10.130.0.127 scope global noprefixroute p4p1
valid_lft forever preferred_lft forever
28: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UP group default qlen 1000
link/ether 52:54:00:25:7b:6f brd ff:ff:ff:ff:ff:ff
inet 192.168.1.1/24 brd 192.168.1.255 scope global virbr0
valid_lft forever preferred_lft forever
29: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master
virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:25:7b:6f brd ff:ff:ff:ff:ff:ff
30: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
master virbr0 state UNKNOWN group default qlen 1000
link/ether fe:16:3e:17:1d:f8 brd ff:ff:ff:ff:ff:ff
*The newly created engine VM does remain up even after hosted-engine
--deploy errors out; just at the wrong IP. I haven't been able to make it
get its real IP. At any rate, thank you very much for taking a look at my
very long email. Any and all help would be really appreciated.*
Cheers,
--
Fredy
4 years, 9 months
Re: Cannot Increase Hosted Engine VM Memory
by Serhiy Morhun
Hello, did anyone find a resolution for this issue? I'm having exactly the
same problem:
The Hosted Engine VM is running with 5344 MB of RAM; when I try to increase it
to 8192, the change is not accepted because the difference is not divisible
by 256.
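Doing the arithmetic on that constraint (my own check, not from any documentation): 8192 - 5344 = 2848 = 11*256 + 32, so 8192 is rejected, while 8160 - 5344 = 2816 = 11*256, which is presumably why 8160 is accepted.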
When I try to increase it to 8160, the change is accepted, but the log shows
"Hotset memory: changed the amount of memory on VM HostedEngine from 5344
to 5344". At the same time, the amount of guaranteed memory does increase to
8160, which, in turn, starts generating error messages that the VM does not
have all the guaranteed RAM.
Serhiy Morhun
4 years, 9 months
[ANN] oVirt 4.3.9 First Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.9 First Release Candidate for testing, as of January 30th, 2020.
This update is a release candidate of the ninth in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
* oVirt Node 4.3 (available for x86_64 only) has been built consuming
CentOS 7.7 Release
See the release notes [1] for known issues, new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node will be available soon
Additional Resources:
* Read more about the oVirt 4.3.9 release highlights:
http://www.ovirt.org/release/4.3.9/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.9/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance.
Therefore there is no need to answer this email out of your office hours.*
4 years, 9 months