Re: [Users] Glusterd and Glusterfsd services are not set to autostart on install
by Joop
Sahina Bose wrote:
>
> On 03/28/2014 05:26 PM, Joop wrote:
>> On 28-3-2014 12:38, Sahina Bose wrote:
>>>
>>> On 03/28/2014 04:02 PM, Joop wrote:
>>>> Setting up oVirt with a separate cluster for storage using gluster
>>>> we discovered that the gluster services are not set to autostart
>>>> for the default runlevels.
>>>> Host(s) were clean Centos-6.5 hosts with the correct repositories
>>>> and then added from the webui. Hosts got all the packages and
>>>> gluster was correctly started for that session but after a reboot
>>>> gluster isn't running and the host was set to non-operational.
>>>>
>>>> Is this a known problem and if not should I file a BZ but under
>>>> which category?
>>>
>>> Which version of gluster are you using?
>>>
>>> [Adding gluster-users]
>>>
>>>
>> Sorry, it's the version that comes with the ovirt.repo and EPEL repos, so
>> 3.4.2. The version shouldn't matter, because I expect host-deploy
>> from ovirt-engine to take care of making sure that glusterd and
>> glusterfsd are started on system startup.
>
> I think the chkconfig values are set by the gluster rpm - which is why I
> added gluster-users.
>
> AFAIU, host-deploy installs the rpm and starts the services after
> installation, but does not change the chkconfig value.
>
That's correct: host-deploy starts the services but doesn't change the
chkconfig value, and my colleagues got bitten by it. They are used to
Debian/Ubuntu, where packages that install services (almost) always make
sure they are started at boot. I've noticed that on CentOS this is normally
not the case, oVirt being the exception :-)
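A minimal sketch of the manual workaround on an EL6 host, assuming the stock
init scripts shipped by the gluster rpms:

    # enable the gluster services for the default runlevels so they survive a reboot
    chkconfig glusterd on
    chkconfig glusterfsd on
    # verify the runlevel configuration
    chkconfig --list glusterd
    # start the service for the current session if it is not already running
    service glusterd start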
Regards,
Joop
[Users] ovirt-engine-dwh missing dom4j dependency
by Jorick Astrego
Hi,
I cannot install ovirt-engine-dwh (3.4) on CentOS 6.5 because of a missing
dom4j dependency:
---> Package ovirt-engine-dwh.noarch 0:3.4.0-2.el6 will be installed
--> Processing Dependency: dom4j for package:
ovirt-engine-dwh-3.4.0-2.el6.noarch
--> Finished Dependency Resolution
Error: Package: ovirt-engine-dwh-3.4.0-2.el6.noarch (ovirt-3.4-stable)
Requires: dom4j
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
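A possible workaround until the dependency is packaged correctly, assuming
dom4j is available from one of the enabled repositories (this is an
assumption, not a verified fix):

    # check which repository, if any, provides dom4j
    yum provides dom4j
    # install it explicitly, then retry the dwh package
    yum install dom4j
    yum install ovirt-engine-dwh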
Kind regards,
Jorick Astrego
Re: [Users] engine-setup fails on nfs
by Alon Bar-Lev
----- Original Message -----
> From: "Jorick Astrego" <j.astrego(a)netbulae.eu>
> To: "Alon Bar-Lev" <alonbl(a)redhat.com>
> Sent: Friday, March 28, 2014 4:38:07 PM
> Subject: Re: [Users] engine-setup fails on nfs
>
> On Fri, 2014-03-28 at 09:23 -0400, Alon Bar-Lev wrote:
>
> >
> > ----- Original Message -----
> > > From: "Jorick Astrego" <j.astrego(a)netbulae.eu>
> > > To: "Alon Bar-Lev" <alonbl(a)redhat.com>
> > > Sent: Friday, March 28, 2014 4:12:47 PM
> > > Subject: Re: [Users] engine-setup fails on nfs
> > >
> > > On Fri, 2014-03-28 at 09:08 -0400, Alon Bar-Lev wrote:
> > >
> > > >
> > > > ----- Original Message -----
> > > > > From: "Jorick Astrego" <j.astrego(a)netbulae.eu>
> > > > > To: "users" <users(a)ovirt.org>
> > > > > Sent: Friday, March 28, 2014 4:00:16 PM
> > > > > Subject: Re: [Users] engine-setup fails on nfs
> > > > >
> > > > > Here you go:
> > > > >
> > > > > netstat -tulpn
> > > >
> > > > can you please try to restart nfs service manually? and see if that
> > > > error
> > > > resumes?
> > >
> > >
> > > service nfs restart
> > > Shutting down NFS daemon: [FAILED]
> > > Shutting down NFS mountd: [ OK ]
> > > Shutting down NFS quotas: [ OK ]
> > > Shutting down NFS services: [ OK ]
> > > Shutting down RPC idmapd: [FAILED]
> > > Starting NFS services: [ OK ]
> > > Starting NFS quotas: [ OK ]
> > > Starting NFS mountd: [ OK ]
> > > Starting NFS daemon: [ OK ]
> > > Starting RPC idmapd: [ OK ]
> > >
> > >
> >
> > And another restart? Do you see these errors again?
> > Can you please try to perform basic problem determination? This is not
> > directly related to ovirt.
> > The only relevant bit is what you found at /etc/sysconfig/nfs, which can
> > be an empty file to restore the defaults, but these defaults have been ok
> > for quite a while.
> >
> > thanks!
>
>
> I would have searched for a problem not related to oVirt if it hadn't
> been a clean minimal install of CentOS 6.5. I've used the same process
> for beta, rc and rc2, and it all worked without these NFS problems.
>
> Having installed only the minimal OS and not having done anything except
> "yum -y update" and "reboot" before installing ovirt-engine, I didn't
> install nfs-utils myself. So it would be logical that something in the
> ovirt setup created this problem.
Unlikely.
Didi, can you please help? This machine is experiencing nfs service start errors.
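For reference, a minimal problem-determination pass on the host might look
like this (a sketch assuming EL6 defaults; nothing here is oVirt-specific):

    # see whether the init scripts report anything useful on a clean restart
    service nfs restart; service nfs status
    # check which daemons are registered with the portmapper
    rpcinfo -p
    # look at the overrides in /etc/sysconfig/nfs mentioned earlier in the thread
    cat /etc/sysconfig/nfs
    # check the system log for rpc.nfsd / lockd / idmapd complaints
    tail -n 50 /var/log/messages | grep -i -E 'nfs|rpc'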
>
> Mar 28 11:27:50 Installed: ovirt-engine-sdk-python-3.4.0.6-1.el6.noarch
> Mar 28 11:27:52 Installed: ovirt-image-uploader-3.4.0-1.el6.noarch
> Mar 28 11:27:53 Installed: ovirt-log-collector-3.4.1-1.el6.noarch
> Mar 28 11:27:54 Installed: ovirt-iso-uploader-3.4.0-1.el6.noarch
> Mar 28 11:28:01 Installed: ovirt-engine-cli-3.4.0.5-1.el6.noarch
> Mar 28 11:28:03 Installed: keyutils-1.4-4.el6.x86_64
> Mar 28 11:28:05 Installed: nfs-utils-lib-1.1.5-6.el6.x86_64
> Mar 28 11:28:08 Installed: 1:nfs-utils-1.2.3-39.el6.x86_64
>
>
> I can reinstall the VM and do the exact same steps to reproduce if you
> like.....
>
> Kind regards,
>
> Jorick Astrego
> Netbulae B.V.
>
[Users] [Ann] oVirt 3.4 GA Releases
by Brian Proffitt
The oVirt Project is pleased to announce the general availability of its fifth formal release, oVirt 3.4, as of March 27, 2014.
oVirt is an open source alternative to VMware vSphere, and provides an excellent KVM management interface for multi-node virtualization. oVirt is available now for Fedora 19, Red Hat Enterprise Linux 6.5, and CentOS 6.5 (or similar).
New features include (full details in the oVirt 3.4 Release Notes [1]):
* Hosted Engine: oVirt 3.4 features the hosted engine, which enables the oVirt engine to run as a virtual machine (VM) on a host that it manages. Hosted engine solves the chicken-and-egg problem for users: the basic challenge of deploying and running an oVirt engine inside a VM. This clustered solution lets users configure multiple hosts to run the hosted engine, ensuring the engine keeps running if any one host fails (a deployment sketch follows the feature list below).
* Enhanced Gluster Support: Gluster Volume Asynchronous Tasks Management enables users to rebalance volumes and remove bricks from Gluster volumes, and to track these operations as asynchronous tasks.
* Preview: PPC64: Engine Support for PPC64 will add PPC64 architecture awareness to the ovirt-engine code, which currently makes various assumptions based on the x86 architecture. When specifying virtual machine devices, for example, what is suitable for the x86 architecture may not be suitable for POWER (or may not be available yet). VDSM Support for PPC64 introduces the capability of managing KVM on IBM POWER processors via oVirt. Administrators will be able to perform management functions such as adding or activating KVM hosts, creating clusters of KVM hosts, and performing VM lifecycle management on any IBM POWER host. Migration is still a work in progress for KVM on IBM POWER processors.
* Preview: Hot-plug CPUs: oVirt 3.4 adds a preview of a Hot-plug CPU feature that enables administrators to ensure customers' service-level agreements are met, to make full use of spare hardware, and to dynamically scale a system's hardware up or down according to application needs, without restarting the virtual machine.
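For those who want to try the hosted engine, deployment on an EL6 host is
driven by the hosted-engine tool from the ovirt-hosted-engine-setup package
(a minimal sketch; consult the release notes [1] for the full procedure):

    # install the deployment tool on the first host
    yum install ovirt-hosted-engine-setup
    # run the interactive deployment; it creates the engine VM and the HA services
    hosted-engine --deploy
    # afterwards, check the HA state of the engine VM across the configured hosts
    hosted-engine --vm-status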
This release of oVirt also includes numerous bug fixes. See the release notes [1] for a complete list of the new features and bugs fixed.
The existing ovirt-stable repository has been updated to deliver this release without the need to enable any other repository.
A new oVirt Node build is also available [2].
[1] http://www.ovirt.org/OVirt_3.4_Release_Notes
[2] http://resources.ovirt.org/releases/3.4/iso/ovirt-node-iso-3.0.4-1.0.2014...
--
Brian Proffitt - oVirt Community Manager
IRC: bkp @ #ovirt OFTC
Re: [Users] engine-setup fails on nfs
by Alon Bar-Lev
----- Original Message -----
> From: "Alon Bar-Lev" <alonbl(a)redhat.com>
> To: "Jorick Astrego" <j.astrego(a)netbulae.eu>
> Sent: Friday, March 28, 2014 4:23:48 PM
> Subject: Re: [Users] engine-setup fails on nfs
>
>
>
> ----- Original Message -----
> > From: "Jorick Astrego" <j.astrego(a)netbulae.eu>
> > To: "Alon Bar-Lev" <alonbl(a)redhat.com>
> > Sent: Friday, March 28, 2014 4:12:47 PM
> > Subject: Re: [Users] engine-setup fails on nfs
> >
> > On Fri, 2014-03-28 at 09:08 -0400, Alon Bar-Lev wrote:
> >
> > >
> > > ----- Original Message -----
> > > > From: "Jorick Astrego" <j.astrego(a)netbulae.eu>
> > > > To: "users" <users(a)ovirt.org>
> > > > Sent: Friday, March 28, 2014 4:00:16 PM
> > > > Subject: Re: [Users] engine-setup fails on nfs
> > > >
> > > > Here you go:
> > > >
> > > > netstat -tulpn
> > >
> > > can you please try to restart nfs service manually? and see if that error
> > > resumes?
> >
> >
> > service nfs restart
> > Shutting down NFS daemon: [FAILED]
> > Shutting down NFS mountd: [ OK ]
> > Shutting down NFS quotas: [ OK ]
> > Shutting down NFS services: [ OK ]
> > Shutting down RPC idmapd: [FAILED]
> > Starting NFS services: [ OK ]
> > Starting NFS quotas: [ OK ]
> > Starting NFS mountd: [ OK ]
> > Starting NFS daemon: [ OK ]
> > Starting RPC idmapd: [ OK ]
> >
> >
>
> And another restart? Do you see these errors again?
> Can you please try to perform basic problem determination? This is not
> directly related to ovirt.
> The only relevant bit is what you found at /etc/sysconfig/nfs, which can
> be an empty file to restore the defaults, but these defaults have been ok
> for quite a while.
>
> thanks!
[Users] engine-setup fails on nfs
by Jorick Astrego
Hi again,
While running engine-setup, it fails to start the nfs service.
Errors starting nfs service:
[ INFO ] Generating post install configuration file
'/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
--== SUMMARY ==--
[WARNING] Less than 16384MB of memory is available
SSH fingerprint:
8F:4E:D0:38:E3:4E:F3:E4:2D:7F:49:5D:40:C1:8A:39
Internal CA
27:B4:17:FD:BF:EA:35:3A:E6:A3:65:32:2D:CC:78:BA:9C:E0:AB:23
Web access is enabled at:
http://xxxx.xxxx.xxxx:80/ovirt-engine
https://xxxx.xxxx.xxxx:443/ovirt-engine
Please use the user "admin" and password specified in
order to login into oVirt Engine
--== END OF SUMMARY ==--
[ INFO ] Starting engine service
[ INFO ] Restarting httpd
[ INFO ] Restarting nfs services
[ ERROR ] Failed to execute stage 'Closing up': Command
'/sbin/service' failed to execute
[ INFO ] Stage: Clean up
Log file is located
at /var/log/ovirt-engine/setup/ovirt-engine-setup-20140328114601.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
service nfs status
rpc.svcgssd is stopped
rpc.mountd (pid 12725) is running...
nfsd dead but subsys locked
rpc.rquotad (pid 12721) is running...
from ovirt-engine-setup.log:
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: [ OK ]
Starting NFS daemon: [FAILED]
2014-03-28 11:51:14 DEBUG otopi.plugins.otopi.services.rhel
plugin.execute:446 execute-output: ('/sbin/service', 'nfs',
'start') stderr:
2014-03-28 11:51:14 DEBUG otopi.context
context._executeMethod:152 method exception
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/otopi/context.py", line
142, in _executeMethod
method['method']()
File
"/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/system/nfs.py", line 232, in _closeup
state=state,
File "/usr/share/otopi/plugins/otopi/services/rhel.py", line
188, in state
'start' if state else 'stop'
File "/usr/share/otopi/plugins/otopi/services/rhel.py", line
96, in _executeServiceCommand
raiseOnError=raiseOnError
File "/usr/lib/python2.6/site-packages/otopi/plugin.py", line
451, in execute
command=args[0],
RuntimeError: Command '/sbin/service' failed to execute
2014-03-28 11:51:14 ERROR otopi.context
context._executeMethod:161 Failed to execute stage 'Closing up':
Command '/sbin/service' failed to execute
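A possible next step to narrow down why only the NFS daemon fails to start
(a sketch assuming stock EL6 init scripts; the "nfsd dead but subsys locked"
status above usually means the daemon exited while its lock file was left
behind):

    # look for a stale subsys lock left by a failed start/stop
    ls /var/lock/subsys/ | grep -i nfs
    # check whether the nfsd kernel module and threads came up at all
    lsmod | grep nfsd
    dmesg | grep -i -E 'nfsd|rpc' | tail -n 20
    # retry the service and re-check its status
    service nfs start; service nfs status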
Kind regards,
Jorick Astrego
Netbulae B.V.
[Users] Glusterd and Glusterfsd services are not set to autostart on install
by Joop
Setting up oVirt with a separate cluster for storage using gluster we
discovered that the gluster services are not set to autostart for the
default runlevels.
Host(s) were clean Centos-6.5 hosts with the correct repositories and
then added from the webui. Hosts got all the packages and gluster was
correctly started for that session but after a reboot gluster isn't
running and the host was set to non-operational.
Is this a known problem and if not should I file a BZ but under which
category?
Regards,
Joop
[Users] [QE] oVirt 3.3.5 RC status
by Sandro Bonazzola
Hi,
we're going to start composing the 3.3.5 RC yum repository on 2014-04-02 09:00 UTC,
following the published timeline [1].
A bug tracker is available at [2], and it shows no bugs blocking the release.
The following is a list of the non-blocking bugs still open with target 3.3.5:
Whiteboard  Bug ID   Status  Summary
gluster     1078200  NEW     Unable to create a volume with bricks in the root partiti...
infra       1072819  NEW     Accept old payload file syntax for backwards compatibility
Maintainers / Assignee:
Please build packages to be included in this RC *BEFORE* 2014-04-02 09:00 UTC. If they are not ready, the latest nightly build will be used.
Please add bugs to the tracker if you think that 3.3.5 should not be released without them fixed.
Please re-target any bugs you don't think should block 3.3.5.
Bugs still targeted to 3.3.5 after the RC announcement will be re-targeted to 3.4.1.
For those who want to help test the release, I suggest adding yourself to the testing page [3].
Nightly builds are available as described in [1].
Maintainers are welcome to start filling in the release notes; the page has been created here [4].
[1] http://www.ovirt.org/OVirt_3.3.z_release-management
[2] http://bugzilla.redhat.com/1071867
[3] http://www.ovirt.org/Testing/Ovirt_3.3.5_testing
[4] http://www.ovirt.org/OVirt_3.3.5_release_notes
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com