Re: [Users] What do you want to see in oVirt next?
by Sigbjorn Lie
On 01/03/2013 05:08 PM, Itamar Heim wrote:
> Hi Everyone,
>
> as we wrap oVirt 3.2, I wanted to check with oVirt users on what they
> find good/useful in oVirt, and what they would like to see
> improved/added in coming versions?
>
> Thanks,
> Itamar
I would also like to see single sign-on capabilities (Kerberos) in the
WebAdmin and UserPortal when using IPA as the authentication source.
Regards,
Siggi
[Users] libvirt dependency vs. a way to have Solaris hosts in ovirt-engine
by Jiri Belka
Hi,
as you know, Solaris (OpenIndiana) has qemu-kvm, although they don't use
libvirt (in fact libvirt is tied too much to Linux specifics).
If vdsm could interact with qemu-kvm without libvirt, it would open a
way to have Solaris hosts in ovirt-engine.
libvirt is just another abstraction layer below vdsm; if vdsm could use
"plugins" to interact with qemu-kvm (libvirt, "native", solaris-style),
it could keep the current mode, or bypass libvirt and use Solaris tools
to talk to their qemu-kvm.
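To illustrate, a minimal sketch of such a pluggable backend (all names here
are hypothetical, this is not actual vdsm code) could look like this:

# Hypothetical sketch of a pluggable hypervisor backend for vdsm -- names
# are illustrative only, this is not actual vdsm code.
import subprocess


class HypervisorBackend(object):
    """Minimal interface vdsm could target instead of calling libvirt directly."""

    def create_vm(self, name, memory_mb, disk_path):
        raise NotImplementedError

    def destroy_vm(self, name):
        raise NotImplementedError


class LibvirtBackend(HypervisorBackend):
    """Current mode: delegate to libvirt (implementation omitted)."""


class NativeQemuBackend(HypervisorBackend):
    """Talks to qemu-kvm directly, e.g. on a host without libvirt."""

    def create_vm(self, name, memory_mb, disk_path):
        # Spawn qemu-kvm directly; a real backend would also set up networking,
        # a QMP monitor socket, cgroups, etc.
        cmd = ["qemu-kvm", "-name", name, "-m", str(memory_mb),
               "-drive", "file=%s,if=virtio" % disk_path, "-daemonize"]
        subprocess.check_call(cmd)

    def destroy_vm(self, name):
        # Crude teardown for the sketch: kill the matching qemu-kvm process.
        subprocess.check_call(["pkill", "-f", "qemu-kvm.*-name " + name])


def get_backend(kind):
    # vdsm could pick the backend per host: "libvirt", "native", "solaris", ...
    backends = {"libvirt": LibvirtBackend, "native": NativeQemuBackend}
    return backends[kind]()

A Solaris backend would then wrap the OpenIndiana qemu-kvm tooling behind
the same interface.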
jbelka
[Users] centos issues..
by peter houseman
Hi,
I am currently trying to get oVirt engine and nodes up and running on
CentOS 6u3. Unfortunately my lab does not have direct internet access, so
the ovirt repo has been copied over from the
people.centos.org/hughesjr/ovirt31 repo, as recommended by the oVirt howto
on the CentOS wiki.
Everything installs fine with no dependency errors, but as soon as I create
storage domains in the engine, warning messages appear in the vdsm.log on
the hosts for the ISO and Data domains (retyped below):
"Warning... 390::3d::363::
Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace
72xxxxxx_volumeNS already registered"
plus a similar warning message as above, but ending in "...imageNS already
registered", plus:
"Warning Storage.LVM::(reloadvgs) lvm vgs failed:5 Volume group "5bxxxx"
not found"
I have tried rebuilding the whole system and using NFS, iSCSI and Gluster
data domains, but I still get the same warning messages.
Also, and I'm not sure if it's related, I have noticed info messages in
the ovirt-engine log saying:
"Autorecovering Storage domains is disabled, skipping"
Even though I am getting storage warning messages, the VMs are up and I can
log into them and run applications.
Any help appreciated.
Pete
[Users] hypervisor install fails to detect proper CPU type
by Jim Kinney
I installed the F17 version of the hypervisor and was unable to join the
new node to the cluster. The failure was "wrong CPU type for cluster". The
system has an Intel Xeon x5660 CPU (Westmere family). There is another
system of the same class in the same cluster NOT using the hypervisor
(using a CentOS 6.3 install with dre-repo).
I reinstalled the failing system with CentOS and all is well now joining
the system to the cluster.
--
James P. Kinney III

Every time you stop a school, you will have to build a jail. What you gain
at one end you lose at the other. It's like feeding a dog on his own tail.
It won't fatten the dog.
- Speech 11/23/1900 Mark Twain

http://electjimkinney.org
http://heretothereideas.blogspot.com/
Re: [Users] What do you want to see in oVirt next?
by Itamar Heim
On 01/03/2013 11:26 PM, Alexandru Vladulescu wrote:
>
> I would like to add a request for the upcoming version 3.2, if
> possible:
Hi Alexandru,
just to note, my question was about the post-3.2 version, as 3.2 is
basically done.
>
> Although some of you use spice instead of VNC, as I am an Ubuntu user on
> my desktop and laptop, the spice protocol is not working within my OS, even
> though I tried to build it from source, search for unofficial deb packages,
> and convert it from rpm packages. I know spice is strongly supported in the
> Fedora community, and I myself work on RH Enterprise and CentOS on the
> server side, but for desktop use I have a very hard time making the
> spice plugin for firefox work.
>
> Therefore, as my solution, I set up a VNC reflector + some shell
> automation to make it work between 2 different subnets (one inside and
> one outside) -- this, somehow, adding to the initial scope.
>
> It would have been much easier to have a VNC proxy function inside the
> ovirt engine, from where to make the necessary setup and assignment of
> the console to each VM, or, even though I might sound funny, a solution
> like vrde on VBox, because that works damn great and it's easy to set up
> or change.
>
> Last question, if I might ask: when is 3.2 planned to be released
> (approx.)?
last update was here:
http://lists.ovirt.org/pipermail/users/2013-January/011454.html
Thanks,
Itamar
Re: [Users] Fwd: Successfully virt-v2v from CentOS 6.3 VM to Ovirt 3.2 nightly
by Matthew Booth
> [Users] Successfully virt-v2v from CentOS 6_3 VM to Ovirt 3_2 nightly.eml
>
> Subject:
> [Users] Successfully virt-v2v from CentOS 6.3 VM to Ovirt 3.2 nightly
> From:
> Gianluca Cecchi <gianluca.cecchi(a)gmail.com>
> Date:
> 09/01/13 15:55
>
> To:
> users <users(a)ovirt.org>
>
>
> Hello,
> on my oVirt Host configured with F18 and all-in-one and ovirt-nightly as of
> ovirt-engine-3.2.0-1.20130107.git1a60fea.fc18.noarch
>
> I was able to import a CentOS 5.8 VM coming from a CentOS 6.3 host.
>
> The oVirt node server is the same one where I'm unable to run a newly
> created Windows 7 32-bit VM...
> See http://lists.ovirt.org/pipermail/users/2013-January/011390.html
>
> In this thread I would like to report the successful import phases and
> some doubts about:
> 1) no password requested during virt-v2v
> 2) no connectivity in the imported guest.
>
> On CentOS 6.3 host
> # virt-v2v -o rhev -osd 10.4.4.59:/EXPORT --network ovirtmgmt c56cr
> c56cr_001: 100%
> [===================================================================================]D
> 0h02m17s
> virt-v2v: c56cr configured with virtio drivers.
>
> ---> I would expect to be asked for the password of a privileged user in
> the oVirt infra; instead, the export process started without any prompt.
> Is this correct?
> In my opinion in this case it could be a security concern....
virt-v2v doesn't require a password here because it connects directly to
your NFS server. This lack of security is inherent in NFS(*). This is a
limitation you must manage within your oVirt deployment. Ideally you
would treat your NFS network as a SAN and control access to it accordingly.
* There is no truth in the rumour that this stands for No F%*$&"£g
Security ;)
> Import process has begun for VM(s): c56cr.
> You can check import status in the 'Events' tab of the specific
> destination storage domain, or in the main 'Events' tab
>
> ---> regarding the import status, the "specific destination storage
> domain" would be my DATA domain, correct?
> Because I see nothing in it and nothing in the export domain.
> Instead, I correctly see these two messages in the main events tab of the cluster:
>
> 2013-Jan-09, 16:16 Starting to import Vm c56cr to Data Center Poli,
> Cluster Poli1
> 2013-Jan-09, 16:18 Vm c56cr was imported successfully to Data Center
> Poli, Cluster Poli1
>
> So probably the first option should go away...?
I'm afraid I didn't follow this. Which option?
> I was then able to power it on and connect to the console via VNC.
> But I noticed it has no connectivity to its gateway.
>
> Host is on vlan 65
> (em3 + em3.65 configured)
>
> host has
> 3: em3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
> ovirtmgmt state UP qlen 1000
> link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
> inet6 fe80::21c:c4ff:feab:3add/64 scope link
> valid_lft forever preferred_lft forever
> ...
> 6: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> state UP
> link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
> inet6 fe80::21c:c4ff:feab:3add/64 scope link
> valid_lft forever preferred_lft forever
> 7: em3.65@em3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> state UP
> link/ether 00:1c:c4:ab:3a:dd brd ff:ff:ff:ff:ff:ff
> inet 10.4.4.59/24 brd 10.4.4.255 scope global
> em3.65
> inet6 fe80::21c:c4ff:feab:3add/64 scope link
> valid_lft forever preferred_lft forever
> ...
> 13: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> master ovirtmgmt state UNKNOWN qlen 500
> link/ether fe:54:00:d3:8f:a3 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::fc54:ff:fed3:8fa3/64 scope link
> valid_lft forever preferred_lft forever
>
> [g.cecchi@f18aio ~]$ ip route list
> default via 10.4.4.250 dev em3.65
> 10.4.4.0/24 dev em3.65 proto kernel scope link
> src 10.4.4.59
>
> ovirtmgmt is tagged in datacenter Poli1
>
> guest is originally configured (and it maintained this) on bridged
> vlan65 on the CentOS 6.3 host. Its parameters:
>
> eth0 with
> ip 10.4.4.53 and gw 10.4.4.250
>
> From the webadmin PoV it seems OK; see also this screenshot:
> https://docs.google.com/open?id=0BwoPbcrMv8mvbENvR242VFJ2M1k
>
> any help will be appreciated.
> do I have to enable some kind of routing not enabled by default..?
virt-v2v doesn't update IP configuration in the guest. This means that
the target guest must be on the same ethernet segment as the source, or
it will have to be manually reconfigured after conversion.
Matt
--
Matthew Booth, RHCA, RHCSS
Red Hat Engineering, Virtualisation Team
GPG ID: D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
[Users] Testing High Availability and Power outages
by Alexandru Vladulescu
Hi,
Today I started testing the High Availability features and the fence
mechanism on my oVirt 3.1 installation (from dreyou repos) running on
3 x CentOS 6.3 hypervisors.
As I reported yesterday in a previous email thread, the migration
priority queue cannot be increased (bug) in this current version, so I
decided to test what the official documentation says about the High
Availability cases.
This would be a disaster scenario to suffer from if one hypervisor
has a power outage/hardware problem and the VMs running on it do not
migrate to other spare resources.
In the official documentation from ovirt.org the following is quoted:

High availability

Allows critical VMs to be restarted on another host in the event of
hardware failure with three levels of priority, taking into account
resiliency policy.

* Resiliency policy to control high availability VMs at the cluster
level.
* Supports application-level high availability with supported fencing
agents.
As well as in the Architecture description:
High Availability - restart guest VMs from failed hosts automatically
on other hosts
So the testing went like this -- one VM running a Linux box, with the
"High Available" check box ticked and "Priority for Run/Migration queue:"
set to Low. Under Host, "Any Host in Cluster" is selected, without
"Allow VM migration only upon Admin specific request" checked.
My environment:
Configuration: 2 x hypervisors (same cluster/hardware configuration);
1 x hypervisor also acting as a NAS (NFS) server (different
cluster/hardware configuration)
Actions: Cut off the power to one of the hypervisors in the 2-node
cluster while the VM was running on it. This would translate to a
power outage.
Results: The hypervisor node that suffered the outage shows up in the
Hosts tab with status Non Responsive, and the VM has a question mark
and cannot be powered off or anything else (therefore it's stuck).
In the Log console in GUI, I get:
Host Hyper01 is non-responsive.
VM Web-Frontend01 was set to the Unknown status.
There is nothing I could do besides clicking "Confirm Host has been
rebooted" on Hyper01; afterwards the VM starts on Hyper02 with a cold
reboot of the VM.
The Log console changes to:
Vm Web-Frontend01 was shut down due to Hyper01 host reboot or manual fence
All VMs' status on Non-Responsive Host Hyper01 were changed to 'Down' by
admin@internal
Manual fencing for host Hyper01 was started.
VM Web-Frontend01 was restarted on Host Hyper02
I would like your take on this problem. Reading the documentation &
features pages on the official website, I supposed that this would have
been an automatic mechanism based on some sort of vdsm & engine
fencing action. Am I missing something regarding it?
Thank you for your patience reading this.
Regards,
Alex.
[Users] ovirt fails to attach gluster volume
by Jithin Raju
Hi All,
I have a fresh installation of oVirt 3.1 with a data center of type POSIX
(ovirt + 1 node).
I created a gluster volume and am able to mount it locally:
mount -t glusterfs fig:/vol1 /rhev/data-center/mnt/fig:_vol1
df -h gives:
fig:/vol1 50G 3.9G 43G 9%
/rhev/data-center/mnt/fig:_vol1
looks fine.
When I try the same from the oVirt GUI, I receive an error "failed to add
storage domain".
GUI parameters passed:
nodename:/volume_name
VFS type:glusterfs
mount options:vers=3 (tried empty also).
I have reported the same issue a week back and got replies saying it's a
bug. I would like to know whether there is a workaround.
vdsm log:
Thread-2474::DEBUG::2013-01-11
12:26:26,370::task::588::TaskManager.Task::(_updateState)
Task=`efb3b3cc-5645-4f87-92cb-b9ecb8ccce48`::moving from state init ->
state preparing
Thread-2474::INFO::2013-01-11
12:26:26,371::logUtils::37::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection(domType=6,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
'connection': 'fig:/vol1', 'iqn': '', 'portal': '', 'user': '', 'vfs_type':
'glusterfs', 'password': '******', 'id':
'00000000-0000-0000-0000-000000000000'}], options=None)
Thread-2474::INFO::2013-01-11
12:26:26,371::logUtils::39::dispatcher::(wrapper) Run and protect:
validateStorageServerConnection, Return response: {'statuslist':
[{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-2474::DEBUG::2013-01-11
12:26:26,371::task::1172::TaskManager.Task::(prepare)
Task=`efb3b3cc-5645-4f87-92cb-b9ecb8ccce48`::finished: {'statuslist':
[{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-2474::DEBUG::2013-01-11
12:26:26,371::task::588::TaskManager.Task::(_updateState)
Task=`efb3b3cc-5645-4f87-92cb-b9ecb8ccce48`::moving from state preparing ->
state finished
Thread-2474::DEBUG::2013-01-11
12:26:26,372::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-2474::DEBUG::2013-01-11
12:26:26,372::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-2474::DEBUG::2013-01-11
12:26:26,372::task::978::TaskManager.Task::(_decref)
Task=`efb3b3cc-5645-4f87-92cb-b9ecb8ccce48`::ref 0 aborting False
Thread-2475::DEBUG::2013-01-11
12:26:26,410::BindingXMLRPC::156::vds::(wrapper) [135.250.76.71]
Thread-2475::DEBUG::2013-01-11
12:26:26,411::task::588::TaskManager.Task::(_updateState)
Task=`f377d9bb-c357-49f9-8aef-483f0525bec9`::moving from state init ->
state preparing
Thread-2475::INFO::2013-01-11
12:26:26,411::logUtils::37::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=6,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
'connection': 'fig:/vol1', 'iqn': '', 'portal': '', 'user': '', 'vfs_type':
'glusterfs', 'password': '******', 'id':
'c200ffa7-a334-4d8d-b43e-3f25f3e8a84c'}], options=None)
Thread-2475::DEBUG::2013-01-11
12:26:26,419::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n
/usr/bin/mount -t glusterfs fig:/vol1 /rhev/data-center/mnt/fig:_vol1' (cwd
None)
Thread-2475::ERROR::2013-01-11
12:26:26,508::hsm::1932::Storage.HSM::(connectStorageServer) Could not
connect to storageServer
Traceback (most recent call last):
File "/usr/share/vdsm/storage/hsm.py", line 1929, in connectStorageServer
conObj.connect()
File "/usr/share/vdsm/storage/storageServer.py", line 179, in connect
self._mount.mount(self.options, self._vfsType)
File "/usr/share/vdsm/storage/mount.py", line 190, in mount
return self._runcmd(cmd, timeout)
File "/usr/share/vdsm/storage/mount.py", line 206, in _runcmd
raise MountError(rc, ";".join((out, err)))
MountError: (1, 'Mount failed. Please check the log file for more
details.\n;ERROR: failed to create logfile
"/var/log/glusterfs/rhev-data-center-mnt-fig:_vol1.log" (Permission
denied)\nERROR: failed to open logfile
/var/log/glusterfs/rhev-data-center-mnt-fig:_vol1.log\n')
engine log:
2013-01-11 12:28:21,014 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand]
(ajp--0.0.0.0-8009-4) [29437bcd] START, V
alidateStorageServerConnectionVDSCommand(vdsId =
ee2b26ba-5bb1-11e2-815e-e4115b978434, storagePoolId =
00000000-0000-0000-0000-000000000000, storageType = PO
SIXFS, connectionList = [{ id: null, connection: fig:/vol1 };]), log id:
658913d
2013-01-11 12:28:21,046 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand]
(ajp--0.0.0.0-8009-4) [29437bcd] FINISH,
ValidateStorageServerConnectionVDSCommand, return:
{00000000-0000-0000-0000-000000000000=0}, log id: 658913d
2013-01-11 12:28:21,053 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(ajp--0.0.0.0-8009-4) [29437bcd] Running command: AddStor
ageServerConnectionCommand internal: false. Entities affected : ID:
aaa00000-0000-0000-0000-123456789aaa Type: System
2013-01-11 12:28:21,056 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(ajp--0.0.0.0-8009-4) [29437bcd] START, ConnectStora
geServerVDSCommand(vdsId = ee2b26ba-5bb1-11e2-815e-e4115b978434,
storagePoolId = 00000000-0000-0000-0000-000000000000, storageType =
POSIXFS, connectionList
= [{ id: c200ffa7-a334-4d8d-b43e-3f25f3e8a84c, connection: fig:/vol1 };]),
log id: 322d95a9
2013-01-11 12:28:21,187 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(ajp--0.0.0.0-8009-4) [29437bcd] FINISH, ConnectStor
ageServerVDSCommand, return: {c200ffa7-a334-4d8d-b43e-3f25f3e8a84c=477},
log id: 322d95a9
2013-01-11 12:28:21,190 ERROR
[org.ovirt.engine.core.bll.storage.POSIXFSStorageHelper]
(ajp--0.0.0.0-8009-4) [29437bcd] The connection with details fig:/vol1
failed because of error code 477 and error message is: 477
2013-01-11 12:28:21,220 WARN
[org.ovirt.engine.core.dal.job.ExecutionMessageDirector]
(ajp--0.0.0.0-8009-2) [522a5ac5] The message key AddPosixFsStorageDoma
in is missing from bundles/ExecutionMessages
2013-01-11 12:28:21,242 INFO
[org.ovirt.engine.core.bll.storage.AddPosixFsStorageDomainCommand]
(ajp--0.0.0.0-8009-2) [522a5ac5] Running command: AddPosixFs
StorageDomainCommand internal: false. Entities affected : ID:
aaa00000-0000-0000-0000-123456789aaa Type: System
2013-01-11 12:28:21,253 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(ajp--0.0.0.0-8009-2) [522a5ac5] START, CreateStorage
DomainVDSCommand(vdsId = ee2b26ba-5bb1-11e2-815e-e4115b978434,
storageDomain=org.ovirt.engine.core.common.businessentities.storage_domain_static@9c3f6ce6,
ar
gs=fig:/vol1), log id: 6a3a31b8
2013-01-11 12:28:21,776 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--0.0.0.0-8009-2) [522a5ac5] Failed in CreateStorageDomainVDS
method
2013-01-11 12:28:21,777 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--0.0.0.0-8009-2) [522a5ac5] Error code StorageDomainFSNotMou
nted and error message VDSGenericException: VDSErrorException: Failed to
CreateStorageDomainVDS, error = Storage domain remote path not mounted:
('/rhev/data
-center/mnt/fig:_vol1',)
2013-01-11 12:28:21,780 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--0.0.0.0-8009-2) [522a5ac5] Command org.ovirt.engine.core.vd
sbroker.vdsbroker.CreateStorageDomainVDSCommand return value
Class Name:
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
mStatus Class Name:
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode 360
mMessage Storage domain remote path not mounted:
('/rhev/data-center/mnt/fig:_vol1',)
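For reference, the MountError in the vdsm log above is raised by a thin
wrapper around /usr/bin/mount; a simplified sketch of that pattern
(illustrative only, not vdsm's actual mount.py) shows why the glusterfs
client's "Permission denied" on its own log file surfaces as a failed
storage domain operation:

# Simplified sketch of a mount wrapper like the one in the traceback above
# (illustrative only, not the actual vdsm storage/mount.py).
import subprocess


class MountError(Exception):
    """Raised when the mount helper exits non-zero."""


def mount(fs_spec, fs_file, vfs_type=None, options=None):
    # Build the same kind of command line seen in the vdsm log:
    # '/usr/bin/sudo -n /usr/bin/mount -t glusterfs fig:/vol1 /rhev/data-center/mnt/fig:_vol1'
    cmd = ["/usr/bin/sudo", "-n", "/usr/bin/mount"]
    if vfs_type:
        cmd += ["-t", vfs_type]
    if options:
        cmd += ["-o", options]
    cmd += [fs_spec, fs_file]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode != 0:
        # Any failure of the mount helper -- including the glusterfs client being
        # unable to create /var/log/glusterfs/<mountpoint>.log -- is wrapped in a
        # MountError, which the engine then reports as error 477 /
        # StorageDomainFSNotMounted, as in the engine log above.
        raise MountError(proc.returncode, ";".join((out.decode(), err.decode())))


# e.g.: mount("fig:/vol1", "/rhev/data-center/mnt/fig:_vol1", vfs_type="glusterfs")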
Thanks,
Jithin
[Users] ovirt-cli 3.2.0.9 released
by Michael Pasternak
* Sun Jan 13 2013 Michael Pasternak <mpastern(a)redhat.com> - 3.2.0.9-1
- ovirt-cli DistributionNotFound exception on f18 #881011
- adding to help message ovirt-shell configuration details #890800
- wrong error when passing empty collection based option #890525
- wrong error when passing empty kwargs #891080
More details can be found at [1].
[1] http://wiki.ovirt.org/Cli-changelog
--
Michael Pasternak
RedHat, ENG-Virtualization R&D