glusterevents daemon fails after upgrade from 4.2.8 to 4.3
by Edward Berger
I upgraded some nodes from 4.2.8 to 4.3, and now when I look at the cockpit
"Services" tab I see a red failure for Gluster Events Notifier. Clicking
through, I get the messages below.
File "/usr/sbin/glustereventsd", line 24, in <module>
    import handlers
File "/usr/libexec/glusterfs/events/handlers.py", line 12, in <module>
    import utils
File "/usr/libexec/glusterfs/events/utils.py", line 29, in <module>
    from .eventsapiconf import (LOG_FILE,
ValueError: Attempted relative import in non-package
glustereventsd.service: main process exited, code=exited, status=1/FAILURE
Unit glustereventsd.service entered failed state.
glustereventsd.service failed.
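For reference, the same messages can be read in chronological order from a
shell on the node instead of cockpit (a minimal sketch; assumes a systemd
host with journald, which oVirt Node is):
# Check the unit state and read its recent log for the current boot.
systemctl status glustereventsd
journalctl -u glustereventsd -b --no-pager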
Re: [ovirt-users] Large DWH Database, how to empty
by Matt .
Hi,
OK, thanks! I noticed that after upgrading from 4.0.4 to 4.0.5 the DB
immediately shrank by around 500MB and is now about 2GB smaller.
Does this sound familiar to you? Did any default settings change in 4.0.5?
Thanks,
Matt
2017-01-08 10:45 GMT+01:00 Shirly Radco <sradco(a)redhat.com>:
> No. That will corrupt your database.
>
> Are you using the full dwh or the smaller version for the dashboards?
>
> Please set the delete thresholds to keep less data; data older than the
> time you set will be deleted.
> Add a file named update_time_to_keep_records.conf to
> /ovirt-engine-dwhd.conf.d/
>
> Add these lines with the new configurations. The numbers represent the hours
> to keep the data.
>
> DWH_TABLES_KEEP_SAMPLES=24
> DWH_TABLES_KEEP_HOURLY=1440
> DWH_TABLES_KEEP_DAILY=43800
>
>
> These are the configurations for a full dwh.
>
> The smaller version configurations are:
> DWH_TABLES_KEEP_SAMPLES=24
> DWH_TABLES_KEEP_HOURLY=720
> DWH_TABLES_KEEP_DAILY=0
>
> The delete process runs by default at 3am every day (DWH_DELETE_JOB_HOUR=3).
>
> Best regards,
>
> Shirly Radco
>
> BI Software Engineer
> Red Hat Israel Ltd.
> 34 Jerusalem Road
> Building A, 4th floor
> Ra'anana, Israel 4350109
>
>
> On Fri, Jan 6, 2017 at 6:35 PM, Matt . <yamakasi.014(a)gmail.com> wrote:
>>
>> Hi,
>>
>> I seem to have some large database for the DWH logging and I wonder
>> how I can empty it safely.
>>
>> Can I just simply empty the database ?
>>
>> Have a good weekend!
>>
>> Cheers,
>>
>> Matt
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
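A minimal sketch of creating the drop-in file described above, as a shell
snippet. The full conf.d path (/etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/)
and the service restart are assumptions about a stock engine host, not
something confirmed in this thread:
# Assumed path for the dwhd drop-in directory; verify it on your install.
cat > /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/update_time_to_keep_records.conf <<'EOF'
DWH_TABLES_KEEP_SAMPLES=24
DWH_TABLES_KEEP_HOURLY=1440
DWH_TABLES_KEEP_DAILY=43800
EOF
# Restart dwhd so the new thresholds take effect (assumed service name).
systemctl restart ovirt-engine-dwhd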
engine-iso-uploader very slow
by t.marchetti@caen.it
Hi,
I'm a newbie with oVirt. I've installed oVirt on my test machine with a self-hosted engine and NFS storage. I'm trying to execute <engine-iso-uploader upload -i ISO /tmp/ubuntu-18.04-desktop-amd64.iso> and the execution is very slow, approximately 2 hours, but I don't know why. Any suggestions?
Thanks in advance
Bye
Re: [ovirt-users] Packet loss
by Doron Fediuck
Hi Kyle,
We may have seen something similar in the past but I think there were vlans involved.
Is it the same for you?
Tony / Dan, does it ring a bell?
Re: oVirt Node install failed
by Strahil
I guess you can log on to ovirt2 and run:
hosted-engine --set-maintenance --mode=local && sleep 30 && hosted-engine --vm-status
Then reinstall ovirt2 from the web UI and mark the engine for deployment.
Once the reinstall is over, remove the maintenance via:
hosted-engine --set-maintenance --mode=none
Best Regards,
Strahil Nikolov
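The sequence above, gathered into one annotated sketch (host name ovirt2 is
taken from the thread; the reinstall itself happens in the web UI and is only
noted as a comment):
# On ovirt2: set local maintenance for the hosted engine and check status.
hosted-engine --set-maintenance --mode=local
sleep 30
hosted-engine --vm-status
# In the web UI: reinstall ovirt2 and mark the hosted engine for deployment.
# After the reinstall finishes, on ovirt2: leave maintenance.
hosted-engine --set-maintenance --mode=none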
Move infrastructure ,how to change FQDN
by Fabrice SOLER
Hello,
I need to move the entire physical oVirt infrastructure to another site
(for student education).
The nodes' FQDNs and the hosted engine's FQDN must change.
The version is oVirt 4.2.8 for the hosted engine and the nodes.
Does someone know how to do this?
Fabrice SOLER
[ANN] oVirt 4.3.1 Second Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.1 Second Release Candidate, as of February 26th, 2019.
This update is a release candidate of the first in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is available for EL7 only
- oVirt Node is available for EL7 only [2]
- Fedora 28 based appliance and node couldn't be built due to a bug in
Lorax (the tool used to build the images) affecting Fedora 28.
Additional Resources:
* Read more about the oVirt 4.3.1 release highlights:
http://www.ovirt.org/release/4.3.1/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.1/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
running engine-setup against postgresql running on non-default port
by ivan.kabaivanov@gmail.com
Hi,
I'm running Linux From Scratch and I'm compiling oVirt from source. It all builds and installs fine. However, I may sometimes run engine-setup from within a chroot on a host that already runs ovirt-engine and postgresql on port 5432. In that case I would like to start a new instance of postgresql on port 55555 inside the chroot, run engine-setup within the same chroot, and tell it to connect to postgresql on port 55555. Once it is set up, I would like to manually change all config files to switch the postgresql port from 55555 back to 5432, so that when I transfer this installation to a different computer, postgresql will run on 5432 and ovirt-engine (and all the other components) will know to use port 5432.
Trouble is, this is easier said than done. Even when I tell engine-setup to connect to 55555, something insists on connecting to 5432:
[ INFO ] Installing PostgreSQL uuid-ossp extension into database
[ ERROR ] Failed to execute stage 'Misc configuration': could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I've manually changed all the ovirt-engine and ovirt-engine-dwh files that contained 5432, to no avail.
Ideas?
Thanks,
IvanK.
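A minimal sketch of the "manually change all config files" step described
above, i.e. locating every file that still references port 5432. The paths
and the ENGINE_DB_PORT variable are assumptions based on a default
ovirt-engine install, not something confirmed in this thread:
# List config files under the engine and DWH config trees that mention 5432.
grep -rl 5432 /etc/ovirt-engine/ /etc/ovirt-engine-dwh/ 2>/dev/null
# On a stock install the engine's DB port is usually set in a drop-in such as
# /etc/ovirt-engine/engine.conf.d/10-setup-database.conf (ENGINE_DB_PORT=...).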
Gluster setup Problem
by Matthew Roth
I have 3 servers: node 1 has a 3 TB /dev/sda, node 2 a 3 TB /dev/sdb, and node 3 a 3 TB /dev/sdb.
I start the Gluster deployment process and change node 1 to sda and all the other ones to sdb. I get no errors; however,
when I get to "Creating physical Volume" it just spins forever and doesn't get any further. I can leave it there for 5 hours and it doesn't go anywhere.
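Before the full gdeploy config below, a minimal sketch of checks that might
show what the "Creating physical Volume" step is waiting on. Device names are
taken from this report; wipefs -n only lists existing signatures and does not
erase anything:
# Run on cmdnode1 (which uses sda); use sdb on the other two nodes.
lsblk /dev/sda                # is the disk unmounted and free of partitions?
pvs; vgs                      # leftover LVM metadata from a previous attempt?
wipefs -n /dev/sda            # dry run: list existing filesystem/RAID signatures
ps aux | grep pvcreate        # is pvcreate still running or stuck waiting?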
#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
cmdnode1.cmd911.com
cmdnode2.cmd911.com
cmdnode3.cmd911.com
[script1:cmdnode1.cmd911.com]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sda -h cmdnode1.cmd911.com, cmdnode2.cmd911.com, cmdnode3.cmd911.com
[script1:cmdnode2.cmd911.com]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h cmdnode1.cmd911.com, cmdnode2.cmd911.com, cmdnode3.cmd911.com
[script1:cmdnode3.cmd911.com]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h cmdnode1.cmd911.com, cmdnode2.cmd911.com, cmdnode3.cmd911.com
[disktype]
raid6
[diskcount]
12
[stripesize]
256
[service1]
action=enable
service=chronyd
[service2]
action=restart
service=chronyd
[shell2]
action=execute
command=vdsm-tool configure --force
[script3]
action=execute
file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
ignore_script_errors=no
[pv1:cmdnode1.cmd911.com]
action=create
devices=sda
ignore_pv_errors=no
[pv1:cmdnode2.cmd911.com]
action=create
devices=sdb
ignore_pv_errors=no
[pv1:cmdnode3.cmd911.com]
action=create
devices=sdb
ignore_pv_errors=no
[vg1:cmdnode1.cmd911.com]
action=create
vgname=gluster_vg_sda
pvname=sda
ignore_vg_errors=no
[vg1:cmdnode2.cmd911.com]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no
[vg1:cmdnode3.cmd911.com]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no
[lv1:cmdnode1.cmd911.com]
action=create
poolname=gluster_thinpool_sda
ignore_lv_errors=no
vgname=gluster_vg_sda
lvtype=thinpool
size=1005GB
poolmetadatasize=5GB
[lv2:cmdnode2.cmd911.com]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=1005GB
poolmetadatasize=5GB
[lv3:cmdnode3.cmd911.com]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=41GB
poolmetadatasize=1GB
[lv4:cmdnode1.cmd911.com]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sda
mount=/gluster_bricks/engine
size=100GB
lvtype=thick
[lv5:cmdnode1.cmd911.com]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sda
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sda
virtualsize=500GB
[lv6:cmdnode1.cmd911.com]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sda
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sda
virtualsize=500GB
[lv7:cmdnode2.cmd911.com]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=100GB
lvtype=thick
[lv8:cmdnode2.cmd911.com]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB
[lv9:cmdnode2.cmd911.com]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=500GB
[lv10:cmdnode3.cmd911.com]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=20GB
lvtype=thick
[lv11:cmdnode3.cmd911.com]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB
[lv12:cmdnode3.cmd911.com]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=20GB
[selinux]
yes
[service3]
action=restart
service=glusterd
slice_setup=yes
[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
services=glusterfs
[script2]
action=execute
file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh
[shell3]
action=execute
command=usermod -a -G gluster qemu
[volume1]
action=create
volname=engine
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=cmdnode1.cmd911.com:/gluster_bricks/engine/engine,cmdnode2.cmd911.com:/gluster_bricks/engine/engine,cmdnode3.cmd911.com:/gluster_bricks/engine/engine
ignore_volume_errors=no
arbiter_count=1
[volume2]
action=create
volname=data
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=cmdnode1.cmd911.com:/gluster_bricks/data/data,cmdnode2.cmd911.com:/gluster_bricks/data/data,cmdnode3.cmd911.com:/gluster_bricks/data/data
ignore_volume_errors=no
arbiter_count=1
[volume3]
action=create
volname=vmstore
transport=tcp
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
brick_dirs=cmdnode1.cmd911.com:/gluster_bricks/vmstore/vmstore,cmdnode2.cmd911.com:/gluster_bricks/vmstore/vmstore,cmdnode3.cmd911.com:/gluster_bricks/vmstore/vmstore
ignore_volume_errors=no
arbiter_count=1