Re: [ovirt-users] iLO2
by Eli Mesika
----- Original Message -----
> From: "Eriks Goodwin" <eriks(a)connectcombpo.com>
> To: "Eli Mesika" <emesika(a)redhat.com>
> Sent: Tuesday, December 29, 2015 4:13:11 AM
> Subject: Re: [ovirt-users] iLO2
>
> [root@n001 ~]# /usr/sbin/fence_ipmilan --ip=10.0.1.104 --username=ovirtmgmt
> --password=REDACTED -v --action=status --lanplus
Please try again with
# /usr/sbin/fence_ipmilan --ip=10.0.1.104 --username=ovirtmgmt --password=REDACTED -v --action=status --lanplus -T 4
>
> Executing: /usr/bin/ipmitool -I lanplus -H 10.0.1.104 -U ovirtmgmt -P [set]
> -p 623 -L ADMINISTRATOR chassis power status
>
> Connection timed out
>
> [root@n001 ~]#
>
>
> ----- On Dec 22, 2015, at 8:48 AM, Eli Mesika <emesika(a)redhat.com> wrote:
>
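For scripting recurring fence checks like this, here is a minimal sketch (the helper name is hypothetical) that assembles the fence_ipmilan invocation with the -T 4 timeout suggested above; building an argument list rather than a shell string keeps the password out of quoting trouble:

```python
# Assemble the fence_ipmilan call with the explicit timeout (-T 4)
# suggested above for iLO2. Names and defaults are illustrative.
def build_fence_cmd(ip, user, password, action="status", timeout=4):
    return [
        "/usr/sbin/fence_ipmilan",
        "--ip", ip,
        "--username", user,
        "--password", password,
        "-v",
        "--action", action,
        "--lanplus",
        "-T", str(timeout),
    ]

cmd = build_fence_cmd("10.0.1.104", "ovirtmgmt", "REDACTED")
print(" ".join(cmd))
```

The resulting list can be handed to subprocess.run(cmd) to execute the check.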
6 years, 5 months
Re: [ovirt-users] Not able to attach my storage
by Nir Soffer
On Jan 3, 2016 at 10:26 AM, "Michael Cooper" <mcooper(a)coopfire.com>
wrote:
(Adding users list)
>
> This from my engine.log
(Snipped log)
Next time please attach log files; dumping partial logs in the message body
is not very useful.
>
> I have attached 2 screenshots as well for your review. Let me know. I have
> read through the log, but I am new to oVirt so I am not sure what I am
> looking for yet.
>
In the screenshot we can see that you already have an ISO domain attached
(CF_ISO). It seems the system does not support more than one attached ISO
domain.
Why do you need multiple ISO domains attached to the same DC?
>
>
> On Sat, Jan 2, 2016 at 11:30 AM, Nir Soffer <nsoffer(a)redhat.com> wrote:
>>
>> On Sat, Jan 2, 2016 at 12:46 PM, Michael Cooper <mcooper(a)coopfire.com>
wrote:
>> > Hello Everyone,
>> >
>> > I am wondering why I cannot attach my iso_domain to my Data Center.
>> > When I try to attach it, it says no valid Data Centers. Why is this happening?
>> > Where should I start looking for the resolution?
>>
>> Can you describe your data center? Do you have active hosts? An active
>> storage domain?
>>
>> Can you attach engine.log showing this error?
>>
>> Nir
>>
>> >
>> > Thanks,
>> >
>> > --
>> > Michael A Cooper
>> > Linux Certified
>> > Zerto Certified
>> > http://www.coopfire.com
>> >
>> > _______________________________________________
>> > Users mailing list
>> > Users(a)ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>> >
>
>
>
>
> --
> Michael A Cooper
> Linux Certified
> Zerto Certified
> http://www.coopfire.com
Re: [ovirt-users] Users Digest, Vol 52, Issue 1
by Michael Cooper
Hello Everyone,
I am wondering why I cannot attach my iso_domain to my Data Center.
When I try to attach it, it says no valid Data Centers. Why is this happening?
Where should I start looking for the resolution?
Thanks,
--
Michael A Cooper
Linux Certified
Zerto Certified
http://www.coopfire.com
how shutdown a host
by alireza sadeh seighalan
Hi everyone,
How can I shut down a host according to a standard procedure? I want to shut
down or reboot hosts for maintenance purposes (for example, hosts 2-6).
Thanks in advance.
host status "Non Operational" - how to diagnose & fix?
by Will Dennis
I have had one of my hosts go into the “Non Operational” state after I rebooted it. I also noticed that in the oVirt webadmin UI, the NIC used by the ‘ovirtmgmt’ network shows “down”, but in Linux the NIC is operational and up, as is the ‘ovirtmgmt’ bridge:
[root@ovirt-node-02 ~]# ip link sh up
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT
link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
3: enp4s0f0: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond0 state DOWN mode DEFAULT qlen 1000
link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
4: enp4s0f1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond0 state DOWN mode DEFAULT qlen 1000
link/ether 00:15:17:7b:e9:b0 brd ff:ff:ff:ff:ff:ff
5: enp12s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP mode DEFAULT qlen 1000
link/ether 00:21:85:35:08:4c brd ff:ff:ff:ff:ff:ff
7: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
link/ether 00:21:85:35:08:4c brd ff:ff:ff:ff:ff:ff
What should I take a look at first?
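One quick way to triage output like the above is to flag links that are administratively UP but report NO-CARRIER (no link-layer carrier), which is the usual reason a bond, and the UI, shows “down” while Linux says the NIC is “up”. A small sketch, parsing captured text rather than live state (the sample lines are taken from the listing above):

```python
# Flag interfaces that are administratively UP but have no carrier.
sample = """\
2: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue state DOWN
5: enp12s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP
"""

def no_carrier_links(ip_link_output):
    bad = []
    for line in ip_link_output.splitlines():
        if ": <" not in line:
            continue  # skip link/ether continuation lines
        name = line.split(": ", 2)[1]
        flags = line.split("<", 1)[1].split(">", 1)[0].split(",")
        if "NO-CARRIER" in flags and "UP" in flags:
            bad.append(name)
    return bad

print(no_carrier_links(sample))  # → ['bond0']
```

Against the full listing above this would also flag the two bond slaves, pointing at a cabling or switch-port problem rather than a host configuration problem.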
Configuring another interface for trunked (tagged) VM traffic
by Will Dennis
Hi all,
I'm taking the next step in configuring my newly established oVirt cluster: setting up a trunked (VLAN-tagged) connection to each cluster host (there are 3) for VM traffic. What I’m looking at is akin to setting up vSwitches on VMware, except I have never done this on a VMware cluster, just on individual hosts…
Anyhow, I have the following NICs available on my three hosts (conveniently, they are the exact same hardware platform):
ovirt-node-01 | success | rc=0 >>
3: enp4s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
4: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
5: enp12s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP mode DEFAULT qlen 1000
6: enp12s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
ovirt-node-02 | success | rc=0 >>
3: enp4s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
4: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
5: enp12s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP mode DEFAULT qlen 1000
6: enp12s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
ovirt-node-03 | success | rc=0 >>
3: enp4s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
4: enp4s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
5: enp12s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP mode DEFAULT qlen 1000
6: enp12s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
As you may see, I am using the ‘enp12s0f0’ interface on each host for the ‘ovirtmgmt’ bridge. This network carries the admin traffic as well as Gluster distributed filesystem traffic, but I now want to establish a separate link to each host for VM traffic. The ‘ovirtmgmt’ bridge is NOT trunked/tagged, only a single VLAN is used. For the VM traffic, I’d like to use the ‘enp4s0f0’ interface on each host, and tie them into a logical network named “vm-traffic” (or the like) and make that a trunked/tagged interface.
Are there any existing succinct instructions on how to do this? I have been reading thru the oVirt Admin Manual’s “Logical Networks” section (http://www.ovirt.org/OVirt_Administration_Guide#Logical_Network_Tasks) but it hasn’t “clicked” in my mind yet...
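For orientation, when a tagged VM network is attached to a host NIC through the engine (Hosts → Network Interfaces → Setup Host Networks), VDSM ends up writing ifcfg files roughly like the following on each host. VLAN ID 100 is an assumed example, and these files are managed by VDSM, so they should be configured through the engine rather than edited by hand:

```
# /etc/sysconfig/network-scripts/ifcfg-enp4s0f0.100  (illustrative)
DEVICE=enp4s0f0.100
VLAN=yes
BRIDGE=vm-traffic
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-vm-traffic  (illustrative)
DEVICE=vm-traffic
TYPE=Bridge
ONBOOT=yes
```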
Thanks,
Will
SPM
by Fernando Fuentes
Team,
I noticed that my SPM moved to another host, which was odd because I have
a set SPM.
Somehow, when that happened, two of my hosts went down and all my VMs went
into a paused state.
The oddity behind all this is that my primary storage, which has always
been my SPM, was online without any issues.
What could have caused that? And is there a way to prevent the SPM from
migrating unless there is an issue?
--
Fernando Fuentes
ffuentes(a)txweather.org
http://www.txweather.org
oVirt hosted engine agent and broker duplicate logs to syslog
by Aleksey Chudov
Hi,
After upgrading from 3.6.0 to 3.6.1, the agent and broker duplicate their
logs to syslog: the same messages are logged both to the files in the
/var/log/ovirt-hosted-engine-ha/ directory and to the /var/log/messages file.
The agent and broker configuration files are unchanged across 3.5, 3.6.0 and
3.6.1, and there is no such log duplication in 3.5 or 3.6.0.
Is this a bug or expected behavior?
OS is CentOS 7.2
# rpm -qa 'ovirt*'
ovirt-vmconsole-1.0.0-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.0-1.el7.centos.noarch
ovirt-host-deploy-1.4.1-1.el7.centos.noarch
ovirt-hosted-engine-ha-1.3.3.5-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.0.3-1.el7.centos.noarch
ovirt-release36-002-2.noarch
ovirt-setup-lib-1.0.0-1.el7.centos.noarch
ovirt-hosted-engine-setup-1.3.1.3-1.el7.centos.noarch
# cat /etc/ovirt-hosted-engine-ha/agent-log.conf
[loggers]
keys=root
[handlers]
keys=syslog,logfile
[formatters]
keys=long,sysform
[logger_root]
level=INFO
handlers=syslog,logfile
propagate=0
[handler_syslog]
level=ERROR
class=handlers.SysLogHandler
formatter=sysform
args=('/dev/log', handlers.SysLogHandler.LOG_USER)
[handler_logfile]
class=logging.handlers.TimedRotatingFileHandler
args=('/var/log/ovirt-hosted-engine-ha/agent.log', 'd', 1, 7)
level=DEBUG
formatter=long
[formatter_long]
format=%(threadName)s::%(levelname)s::%(asctime)s::%(module)s::%(lineno)d::%(name)s::(%(funcName)s) %(message)s
[formatter_sysform]
format=ovirt-ha-agent %(name)s %(levelname)s %(message)s
datefmt=
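For what it's worth, the config above only sends ERROR-and-above to syslog: the root logger fans each record out to both handlers, and the syslog handler's level filters out everything below ERROR. A self-contained sketch of that fan-out, using in-memory streams in place of syslog and the log file, shows the intended behavior, so if INFO-level lines are also appearing in /var/log/messages, something outside this file (for example, an extra handler or stdout capture) is adding them:

```python
import io
import logging

# Two handlers on one logger, mirroring the agent-log.conf layout:
# "syslog" accepts only ERROR and above, "logfile" accepts DEBUG and above.
syslog_buf, logfile_buf = io.StringIO(), io.StringIO()

syslog_h = logging.StreamHandler(syslog_buf)
syslog_h.setLevel(logging.ERROR)
logfile_h = logging.StreamHandler(logfile_buf)
logfile_h.setLevel(logging.DEBUG)

log = logging.getLogger("ovirt-ha-agent-demo")
log.setLevel(logging.INFO)
log.propagate = False
log.addHandler(syslog_h)
log.addHandler(logfile_h)

log.info("heartbeat ok")          # reaches only the "logfile" handler
log.error("storage unreachable")  # reaches both handlers

print("syslog:", syslog_buf.getvalue().strip().splitlines())
print("logfile:", logfile_buf.getvalue().strip().splitlines())
```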
Aleksey