hosted-engine --deploy fails
by Andreas Mather
Hi All!
Just tried to install the hosted engine on a fresh CentOS 6.6 host: besides
setting up a gluster cluster, I only added the repo and installed the
ovirt-hosted-engine-setup package, so otherwise it's a very minimal
installation. hosted-engine --deploy failed immediately. The issue seems to
be the same as the one described here
http://lists.ovirt.org/pipermail/users/2014-October/028461.html but that
conversation didn't continue after the reporting user was asked for
additional details.
Failed installation:
[root@vhost1 ~]# hosted-engine --deploy
[ INFO ] Stage: Initializing
Continuing will configure this host for serving as hypervisor and
create a VM where you have to install oVirt Engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]:
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Configuration files: []
Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20141223224236-y9ttk4.log
Version: otopi-1.3.0 (otopi-1.3.0-1.el6)
[ INFO ] Hardware supports virtualization
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ ERROR ] Failed to execute stage 'Environment setup': Command
'/sbin/service' failed to execute
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO ] Answer file '/etc/ovirt-hosted-engine/answers.conf' has been
updated
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
Packages:
[root@vhost1 ~]# rpm -qa|grep hosted-engine
ovirt-hosted-engine-setup-1.2.1-1.el6.noarch
ovirt-hosted-engine-ha-1.2.4-1.el6.noarch
[root@vhost1 ~]# rpm -qa|grep vdsm
vdsm-yajsonrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el6.noarch
vdsm-python-4.16.7-1.gitdb83943.el6.noarch
vdsm-xmlrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-4.16.7-1.gitdb83943.el6.x86_64
vdsm-cli-4.16.7-1.gitdb83943.el6.noarch
vdsm-jsonrpc-4.16.7-1.gitdb83943.el6.noarch
So here's the output that was requested in the other thread. Hope someone
can help me here. Thanks!
[root@vhost1 ~]# find /var/lib/vdsm/persistence
/var/lib/vdsm/persistence
[root@vhost1 ~]# find /var/run/vdsm/netconf
find: `/var/run/vdsm/netconf': No such file or directory
[root@vhost1 ~]# ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:25:90:d8:0a:b0 brd ff:ff:ff:ff:ff:ff
4: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 62:e5:28:13:9d:ba brd ff:ff:ff:ff:ff:ff
5: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
[root@vhost1 ~]# virsh -r net
error: unknown command: 'net'
[root@vhost1 ~]# virsh -r net-list
Name State Autostart Persistent
--------------------------------------------------
;vdsmdummy; active no no
[root@vhost1 ~]# vdsm-tool restore-nets
Traceback (most recent call last):
File "/usr/share/vdsm/vdsm-restore-net-config", line 137, in <module>
restore()
File "/usr/share/vdsm/vdsm-restore-net-config", line 123, in restore
unified_restoration()
File "/usr/share/vdsm/vdsm-restore-net-config", line 57, in
unified_restoration
_inRollback=True)
File "/usr/share/vdsm/network/api.py", line 616, in setupNetworks
netinfo._libvirtNets2vdsm(libvirt_nets)))
File "/usr/lib/python2.6/site-packages/vdsm/netinfo.py", line 822, in get
d['nics'][dev.name] = _nicinfo(dev, paddr, ipaddrs)
File "/usr/lib/python2.6/site-packages/vdsm/netinfo.py", line 653, in
_nicinfo
info = _devinfo(link, ipaddrs)
File "/usr/lib/python2.6/site-packages/vdsm/netinfo.py", line 681, in
_devinfo
ipv4addr, ipv4netmask, ipv4addrs, ipv6addrs = getIpInfo(link.name,
ipaddrs)
File "/usr/lib/python2.6/site-packages/vdsm/netinfo.py", line 370, in
getIpInfo
ipv4addr, prefix = addr['address'].split('/')
ValueError: need more than 1 value to unpack
Traceback (most recent call last):
File "/usr/bin/vdsm-tool", line 209, in main
return tool_command[cmd]["command"](*args)
File "/usr/lib/python2.6/site-packages/vdsm/tool/restore_nets.py", line
36, in restore_command
restore()
File "/usr/lib/python2.6/site-packages/vdsm/tool/restore_nets.py", line
45, in restore
raise EnvironmentError('Failed to restore the persisted networks')
EnvironmentError: Failed to restore the persisted networks
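Looking at the traceback, the crash is the two-value unpack in
netinfo.getIpInfo(): it assumes every reported address carries a /prefix.
A minimal sketch of that failure mode (a hypothetical helper of my own,
not vdsm's code):

# Hypothetical helper, illustration only: vdsm does
#   ipv4addr, prefix = addr['address'].split('/')
# so any address reported without a CIDR prefix raises exactly this ValueError.
def split_addr(address):
    ipv4addr, prefix = address.split('/')
    return ipv4addr, int(prefix)

print(split_addr('10.0.0.5/24'))  # ('10.0.0.5', 24)
print(split_addr('10.0.0.5'))     # ValueError: need more than 1 value to unpack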
The following is already mentioned in the original thread, but it should help
others searching for this error:
ovirt-hosted-engine-setup.log (first attempt):
2014-12-23 22:04:38 DEBUG otopi.plugins.otopi.services.rhel
plugin.execute:861 execute-output: ('/sbin/service', 'vdsmd', 'start')
stdout:
Starting multipathd daemon: [ OK ]
Starting rpcbind: [ OK ]
Starting ntpd: [ OK ]
Loading the softdog kernel module: [ OK ]
Starting wdmd: [ OK ]
Starting sanlock: [ OK ]
supervdsm start[ OK ]
Starting iscsid: [ OK ]
[ OK ]
vdsm: Running mkdirs
vdsm: Running configure_coredump
vdsm: Running configure_vdsm_logs
vdsm: Running run_init_hooks
vdsm: Running check_is_configured
libvirt is already configured for vdsm
vdsm: Running validate_configuration
SUCCESS: ssl configured to true. No conflicts
vdsm: Running prepare_transient_repository
vdsm: Running syslog_available
vdsm: Running nwfilter
vdsm: Running dummybr
vdsm: Running load_needed_modules
vdsm: Running tune_system
vdsm: Running test_space
vdsm: Running test_lo
vdsm: Running unified_network_persistence_upgrade
vdsm: stopped during execute unified_network_persistence_upgrade task (task
returned with error code 1).
vdsm start[FAILED]
2014-12-23 22:04:38 DEBUG otopi.plugins.otopi.services.rhel
plugin.execute:866 execute-output: ('/sbin/service', 'vdsmd', 'start')
stderr:
initctl: Job is already running: libvirtd
libvirt: Network Filter Driver error : Network filter not found: no
nwfilter with matching name 'vdsm-no-mac-spoofing'
2014-12-23 22:04:38 DEBUG otopi.context context._executeMethod:152 method
exception
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/otopi/context.py", line 142, in
_executeMethod
method['method']()
File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/system/vdsmenv.py",
line 155, in _late_setup
state=True
File "/usr/share/otopi/plugins/otopi/services/rhel.py", line 188, in state
'start' if state else 'stop'
File "/usr/share/otopi/plugins/otopi/services/rhel.py", line 96, in
_executeServiceCommand
raiseOnError=raiseOnError
File "/usr/lib/python2.6/site-packages/otopi/plugin.py", line 871, in
execute
command=args[0],
RuntimeError: Command '/sbin/service' failed to execute
2014-12-23 22:04:38 ERROR otopi.context context._executeMethod:161 Failed
to execute stage 'Environment setup': Command '/sbin/service' failed to
execute
ovirt-hosted-engine-setup.log (further attempts):
2014-12-23 22:42:40 DEBUG otopi.plugins.otopi.services.rhel
plugin.execute:861 execute-output: ('/sbin/service', 'vdsmd', 'start')
stdout:
vdsm: Running mkdirs
vdsm: Running configure_coredump
vdsm: Running configure_vdsm_logs
vdsm: Running run_init_hooks
vdsm: Running check_is_configured
libvirt is already configured for vdsm
vdsm: Running validate_configuration
SUCCESS: ssl configured to true. No conflicts
vdsm: Running prepare_transient_repository
vdsm: Running syslog_available
vdsm: Running nwfilter
vdsm: Running dummybr
vdsm: Running load_needed_modules
vdsm: Running tune_system
vdsm: Running test_space
vdsm: Running test_lo
vdsm: Running unified_network_persistence_upgrade
vdsm: stopped during execute unified_network_persistence_upgrade task (task
returned with error code 1).
vdsm start[FAILED]
2014-12-23 22:42:40 DEBUG otopi.plugins.otopi.services.rhel
plugin.execute:866 execute-output: ('/sbin/service', 'vdsmd', 'start')
stderr:
initctl: Job is already running: libvirtd
2014-12-23 22:42:40 DEBUG otopi.context context._executeMethod:152 method
exception
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/otopi/context.py", line 142, in
_executeMethod
method['method']()
File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/system/vdsmenv.py",
line 155, in _late_setup
state=True
File "/usr/share/otopi/plugins/otopi/services/rhel.py", line 188, in state
'start' if state else 'stop'
File "/usr/share/otopi/plugins/otopi/services/rhel.py", line 96, in
_executeServiceCommand
raiseOnError=raiseOnError
File "/usr/lib/python2.6/site-packages/otopi/plugin.py", line 871, in
execute
command=args[0],
RuntimeError: Command '/sbin/service' failed to execute
2014-12-23 22:42:40 ERROR otopi.context context._executeMethod:161 Failed
to execute stage 'Environment setup': Command '/sbin/service' failed to
execute
----
Andreas
ERROR 'no free file handlers in pool' while creating VM from template
by Tiemen Ruiten
Hello,
I ran into a nasty problem today when creating a new, cloned VM from a
template (one virtual 20 GB disk) on our two-node oVirt cluster: on the
node where I started the VM creation job, load skyrocketed and some VMs
stopped responding, both during and after the failed job. Everything
recovered without intervention, but this obviously shouldn't happen. I have
attached the relevant vdsm log file. The button to create the VM was
pressed around 11:17; the first error in the vdsm log is at 11:23:58.
The ISO domain is a gluster volume exposed via NFS, and the storage domain
for the VMs is also a gluster volume. The underlying filesystem is ZFS.
The hypervisor nodes are full CentOS 6 installs.
I'm guessing the 'no free file handlers in pool' error in the vdsm log is
key here. What can I do to prevent this from happening again? Apart from
not creating new VMs, of course :)
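As far as I can tell, the message comes from vdsm's bounded pool of
out-of-process file helpers: each storage operation borrows a helper, and
when slow NFS/gluster I/O keeps them all busy, new requests fail instead of
waiting. A rough conceptual sketch of that behaviour (my own illustration,
not vdsm's implementation):

import threading

# Conceptual sketch only: a fixed-size pool that fails fast when every slot
# is occupied, which is roughly why slow storage I/O surfaces as
# 'no free file handlers in pool' instead of just taking longer.
class HandlerPool(object):
    def __init__(self, size):
        self._slots = threading.Semaphore(size)

    def run(self, func):
        if not self._slots.acquire(False):   # non-blocking: no free handler
            raise RuntimeError('no free file handlers in pool')
        try:
            return func()
        finally:
            self._slots.release()

pool = HandlerPool(2)
print(pool.run(lambda: 'ok'))  # works while a slot is free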
Tiemen
Re: [ovirt-users] 3.5 live merge findings and mysteries [was Re: Simple way to activate live merge in FC20 cluster]
by Gianluca Cecchi
On Fri, Dec 12, 2014 at 4:32 PM, Itamar Heim <iheim(a)redhat.com> wrote:
>
> On 11/21/2014 09:53 AM, Gianluca Cecchi wrote:
>
>> So the official statement is this one at:
>> http://www.ovirt.org/OVirt_3.5_Release_Notes
>>
>> Live Merge
>> If an image has one or more snapshots, oVirt 3.5's merge command will
>> combine the data of one volume into another. Live merges can be
>> performed with data is pulled from one snapshot into another snapshot.
>> The engine can merge multiple disks at the same time and each merge can
>> independently fail or succeed in each operation.
>>
>> I think we should remove the part above, or at least have some of the
>> developers clarify it better.
>> The feature in my opinion is very important and crucial for oVirt/RHEV
>> because it almost closes the gap with VMware, especially in
>> development environments, where flexibility in snapshot
>> management matters a lot and could be a starting point for a
>> larger user base to familiarize itself with the product and adopt it.
>>
>> So these are my findings across all combinations, and none of them was
>> able to provide live merge....
>> Could anyone tell me where I'm going wrong? Or correct the release notes?
>>
>> 1) Environment with All-In-One F20
>>
>> installing oVirt AIO on F20 automatically gives the virt-preview repo
>> through the ovirt-3.5-dependencies.repo file, but only for libvirt*
>> packages:
>>
>> the same server is the engine and the hypervisor
>>
>> [root@tekkaman qemu]# rpm -q libvirt
>> libvirt-1.2.9.1-1.fc20.x86_64
>>
>> [root@tekkaman qemu]# vdsClient -s 0 getVdsCaps | grep -i merge
>> liveMerge = 'true'
>>
>> but
>> [root@tekkaman qemu]# rpm -q qemu
>> qemu-1.6.2-10.fc20.x86_64
>>
>> So when trying live merge, it initially starts but you get this in vdsm.log:
>> libvirtError: unsupported configuration: active commit not supported
>> with this QEMU binary
>>
>>
>> 2) Another separate environment with a dedicated F20 3.5 engine
>> and 4 test cases tried
>>
>> a) ovirt node installed and put in a dedicated cluster
>> latest available seems ovirt-node-iso-3.5.0.ovirt35.20140912.el6.iso
>> from 3.5 rc test days
>>
>> At the end of oVirt Node install and activation in engine:
>> [root@ovnode01 ~]# vdsClient -s 0 getVdsCaps | grep -i merge
>> liveMerge = 'false'
>>
>> [root@ovnode01 ~]# rpm -qa libvirt* qemu*
>> libvirt-0.10.2-29.el6_5.12.x86_64
>> libvirt-lock-sanlock-0.10.2-29.el6_5.12.x86_64
>> libvirt-python-0.10.2-29.el6_5.12.x86_64
>> qemu-kvm-tools-0.12.1.2-2.415.el6_5.14.x86_64
>> qemu-kvm-rhev-0.12.1.2-2.415.el6_5.14.x86_64
>> libvirt-client-0.10.2-29.el6_5.12.x86_64
>> qemu-img-rhev-0.12.1.2-2.415.el6_5.14.x86_64
>>
>>
>>
>> b) F20 host + latest updates installed as OS and then installed from the
>> webadmin in another cluster
>> virt-preview is not enabled on the host, so libvirt/qemu are not ready
>>
>>
>> [root@ovnode02 network-scripts]# vdsClient -s 0 getVdsCaps | grep -i
>> merge
>> liveMerge = 'false'
>>
>> [root@ovnode02 network-scripts]# rpm -qa libvirt* qemu*
>> libvirt-daemon-1.1.3.6-2.fc20.x86_64
>> libvirt-python-1.1.3.6-2.fc20.x86_64
>> libvirt-daemon-config-nwfilter-1.1.3.6-2.fc20.x86_64
>> qemu-kvm-1.6.2-10.fc20.x86_64
>> qemu-common-1.6.2-10.fc20.x86_64
>> libvirt-client-1.1.3.6-2.fc20.x86_64
>> libvirt-daemon-driver-network-1.1.3.6-2.fc20.x86_64
>> libvirt-daemon-driver-nwfilter-1.1.3.6-2.fc20.x86_64
>> libvirt-daemon-driver-interface-1.1.3.6-2.fc20.x86_64
>> libvirt-daemon-driver-nodedev-1.1.3.6-2.fc20.x86_64
>> libvirt-daemon-driver-secret-1.1.3.6-2.fc20.x86_64
>> qemu-system-x86-1.6.2-10.fc20.x86_64
>> libvirt-daemon-kvm-1.1.3.6-2.fc20.x86_64
>> qemu-kvm-tools-1.6.2-10.fc20.x86_64
>> qemu-img-1.6.2-10.fc20.x86_64
>> libvirt-daemon-driver-qemu-1.1.3.6-2.fc20.x86_64
>> libvirt-daemon-driver-storage-1.1.3.6-2.fc20.x86_64
>> libvirt-lock-sanlock-1.1.3.6-2.fc20.x86_64
>>
>>
>>
>> c) CentOS 6.6 host + latest updates installed as OS and then installed
>> from webadmin in another cluster
>>
>> [root@ovnode03 ~]# vdsClient -s 0 getVdsCaps | grep -i merge
>> liveMerge = 'false'
>>
>> [root@ovnode03 ~]# rpm -qa libvirt* qemu*
>> libvirt-python-0.10.2-46.el6_6.2.x86_64
>> qemu-kvm-rhev-tools-0.12.1.2-2.415.el6_5.14.x86_64
>> qemu-img-rhev-0.12.1.2-2.415.el6_5.14.x86_64
>> libvirt-client-0.10.2-46.el6_6.2.x86_64
>> libvirt-0.10.2-46.el6_6.2.x86_64
>> qemu-kvm-rhev-0.12.1.2-2.415.el6_5.14.x86_64
>> libvirt-lock-sanlock-0.10.2-46.el6_6.2.x86_64
>>
>>
>>
>> d) CentOS 7.0 host + latest updates installed as OS and then installed
>> from webadmin in another cluster
>>
>> [root@ovnode04 ~]# vdsClient -s 0 getVdsCaps | grep -i merge
>> liveMerge = 'false'
>>
>> [root@ovnode04 ~]# rpm -qa qemu* libvirt*
>> qemu-img-rhev-1.5.3-60.el7_0.2.x86_64
>> libvirt-daemon-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-driver-storage-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-driver-nodedev-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-kvm-1.1.1-29.el7_0.3.x86_64
>> qemu-kvm-tools-rhev-1.5.3-60.el7_0.2.x86_64
>> qemu-kvm-common-rhev-1.5.3-60.el7_0.2.x86_64
>> libvirt-client-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-driver-nwfilter-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-driver-interface-1.1.1-29.el7_0.3.x86_64
>> libvirt-lock-sanlock-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-config-nwfilter-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-driver-network-1.1.1-29.el7_0.3.x86_64
>> qemu-kvm-rhev-1.5.3-60.el7_0.2.x86_64
>> libvirt-python-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-driver-secret-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-driver-qemu-1.1.1-29.el7_0.3.x86_64
>>
>>
>> Thanks in advance,
>> Gianluca
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
> was this resolved?
>
In my opinion it was not resolved, and what is written in the release notes
doesn't correspond to the actual feature set.
See my test cases, and if possible describe other test cases where live
merge is usable.
See also other findings about live merge on active layer here:
http://lists.ovirt.org/pipermail/users/2014-November/029450.html
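For anyone comparing hosts, here is a quick sanity check of the libvirt half
of the prerequisite (just a sketch of the kind of thing vdsm looks at, not
its actual capability probe; it says nothing about the qemu side, which is
what failed in case 1):

# Rough check, not vdsm's real probe: active-layer merge needs libvirt
# bindings that expose VIR_DOMAIN_BLOCK_COMMIT_ACTIVE (added around libvirt
# 1.2.7) plus a qemu build that implements active commit, which is why
# liveMerge can still be 'false' -- or fail at runtime as in case 1) --
# even when this prints True.
import libvirt

print(hasattr(libvirt, 'VIR_DOMAIN_BLOCK_COMMIT_ACTIVE'))
print(libvirt.getVersion())  # installed libvirt version as a single integer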
Gianluca
Debug Environment for RHEVM
by Xie, Chao
Hi,

I found there is a debug environment for oVirt:
http://wiki.ovirt.org/OVirt_Engine_Development_Environment
Is it also useful for the RHEVM source code?

Best Regards,
Xie
templates and freeipa
by Jim Kinney
oVirt 3.5 is running well for me, and I have FreeIPA controlling access to
the user portal. I would like to provide templates of various Linux setups,
all using FreeIPA for user authentication inside the VM, so that my
developers can create a new VM from them and then log in with their FreeIPA
credentials and sudo rights. I want to group developers by project and use
FreeIPA to grant sudo commands as needed (group A gets Oracle, group B gets
PostgreSQL, etc.), maximizing developer ability while minimizing my clean-up
time :-) They will be able to delete the VMs they create.
It's possible to do a kickstart deploy with FreeIPA registration, but a
template built from that will be a problem, as it will have the same keys
for all VMs.
Is there a post-creation scripting process I can attach to in oVirt, or
should I look at a default root user and a script that personalizes the new
VM?
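To make the question concrete, something along these lines is the kind of
personalization step I mean, run once on first boot of a VM cloned from the
template (purely a hypothetical, untested sketch; the domain, one-time
password and paths are placeholders):

#!/usr/bin/env python
# Hypothetical first-boot personalization sketch (untested): regenerate the
# SSH host keys cloned from the template, then enroll the VM in FreeIPA with
# a pre-created one-time password. Domain/OTP below are placeholders.
import glob
import os
import subprocess

# 1. Throw away the host keys baked into the template; the CentOS 6 sshd
#    init script regenerates missing keys on restart.
for key in glob.glob('/etc/ssh/ssh_host_*key*'):
    os.remove(key)
subprocess.check_call(['service', 'sshd', 'restart'])

# 2. Enroll in FreeIPA using a bulk-enrollment one-time password.
subprocess.check_call([
    'ipa-client-install', '--unattended', '--mkhomedir',
    '--domain=example.com', '--password=ONE-TIME-PASSWORD',
])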
--
--
James P. Kinney III
Every time you stop a school, you will have to build a jail. What you gain
at one end you lose at the other. It's like feeding a dog on his own tail.
It won't fatten the dog.
- Speech 11/23/1900 Mark Twain
http://heretothereideas.blogspot.com/
Re: [ovirt-users] engine-iso-uploader unexpected behaviour (steve a)
by Steve Atkinson
On Wed, Dec 31, 2014 at 12:47 PM, Yedidyah Bar David <didi(a)redhat.com>
wrote:
> ----- Original Message -----
> > From: "Steve Atkinson" <satkinson(a)telvue.com>
> > To: users(a)ovirt.org
> > Sent: Wednesday, December 31, 2014 7:15:23 PM
> > Subject: [ovirt-users] engine-iso-uploader unexpected behaviour
> >
> > When attempting to use the engine-iso-uploader to drop ISOs in my iso
> > storage domain I get the following results.
> >
> > Using engine-iso-uploader --iso-domain=[domain] upload [iso] does not
> > work because the engine does not have access to our storage network. So
> > it attempts to mount an address that is not routable. I thought to
> > resolve this by adding an interface to the Hosted Engine, only to find
> > that I cannot modify the Engine's VM config from the GUI. I receive the
> > error: Cannot add Interface. This VM is not managed by the engine.
> > Actually, I get that error whenever I attempt to modify anything about
> > the engine. Maybe this is expected behavior? I can't find any best
> > practices regarding Hosted Engine administration.
> > Alternatively, using engine-iso-uploader --nfs-server=[path] upload [iso]
> > --verbose returns the following error:
> >
> > ERROR: local variable 'domain_type' referenced before assignment
> > INFO: Use the -h option to see usage.
> > DEBUG: Configuration:
> > DEBUG: command: upload
> > DEBUG: Traceback (most recent call last):
> > DEBUG: File "/usr/bin/engine-iso-uploader", line 1440, in <module>
> > DEBUG: isoup = ISOUploader(conf)
> > DEBUG: File "/usr/bin/engine-iso-uploader", line 455, in __init__
> > DEBUG: self.upload_to_storage_domain()
> > DEBUG: File "/usr/bin/engine-iso-uploader", line 1089, in
> > upload_to_storage_domain
> > DEBUG: elif domain_type in ('localfs', ):
> > DEBUG: UnboundLocalError: local variable 'domain_type' referenced before
> > assignment
>
> Do you run it from the engine's machine? The host? Elsewhere?
> Where is the iso domain?
>
> This sounds to me like a bug, but I didn't check the sources yet.
> Please open one. Thanks.
>
> That said, you can simply copy your iso file directly to the correct
> directory inside the iso domain, which is:
>
> /path-to-iso-domain/SOME-UUID/images/11111111-1111-1111-1111-111111111111/
>
> Make sure it's readable to vdsm:kvm (36:36).
>
> Best,
> --
> Didi
>
I'm running the command from the Engine itself; it seems to be the only box
that has this command available. The ISO domain uses the same root share as
the DATA and EXPORT domains, which seem to work fine. The structure looks
something like:
server:/nfs-share/ovirt-store/iso-store/UUIDblahblah/images/
server:/nfs-share/ovirt-store/export-store/path/path/path
server:/nfs-share/ovirt-store/data-store/UUIDblahblah/images/
Each storage domain was created through the Engine. Perms for each are
below, and they persist all the way down the tree:
drwxr-xr-x. 3 vdsm kvm 4 Nov 14 19:02 data-store
drwxr-xr-x. 3 vdsm kvm 4 Nov 14 19:04 export-store
drwxr-xr-x. 3 vdsm kvm 4 Nov 14 18:18 hosted-engine
drwxr-xr-x. 3 vdsm kvm 4 Nov 14 19:04 iso-store
If I mount any of them via NFS from our management network, they work just
fine (I moved files around and did read/write operations).
I copied the ISO I needed directly to it and changed the perms/ownership by
hand, which seems to have worked as a short-term solution.
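For anyone hitting the same thing, the manual workaround amounts to roughly
this (a sketch with placeholder paths; the all-ones directory is the fixed
images subdirectory Didi mentioned, and 36:36 is vdsm:kvm):

# Rough equivalent of the manual workaround (placeholders for the real
# paths): copy the ISO into the iso domain's images directory and hand it
# to vdsm:kvm (uid/gid 36:36) so the engine can see and use it.
import os
import shutil

ISO = '/root/CentOS-6.6-x86_64-minimal.iso'           # placeholder
DEST = ('/mnt/iso-store/SOME-UUID/images/'
        '11111111-1111-1111-1111-111111111111/')      # placeholder mount/UUID

target = os.path.join(DEST, os.path.basename(ISO))
shutil.copy(ISO, target)
os.chown(target, 36, 36)
os.chmod(target, 0o644)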
I can see why the --iso-domain argument has issues, as it tries to use our
storage network, which isn't routable from the Engine since it only has the
one network interface. Still, that does seem like an oversight: the transfer
should arguably pass through the SPM rather than mount the NFS share
directly when the --iso-domain flag is used.
Thanks for the quick response.
Problem with ovirt-node
by rino
I installed oVirt Node with hosted-engine using the pre-release ovirt-node
3.5 ISO.
When I shut down the ovirt-node from the menu, everything goes down OK (I
registered it as a host in the engine VM). When I turn the server on the
next day, the ovirt-node comes up as if no VM had ever been installed (I use
another server as NFS storage). NFS is working fine; I can see all the
exports if I do showmount -e nfs.server.
But if I try to run hosted-engine --vm-status, or any hosted-engine command
at all, it suggests I do a new deploy, even though I have the whole
environment configured in my engine.
I need to configure two ovirt-nodes and then move them to the datacenter,
but after a shutdown the engine VM is no longer seen by the ovirt-node.
Regards
--
---
Rondan Rino
Certified in LPIC-2 <https://cs.lpi.org/caf/Xamman/certification>
LPI ID:LPI000209832
Verification Code:gbblvwyfxu
Red Hat Certified Engineer -- RHCE -- RHCVA
<https://www.redhat.com/wapps/training/certification/verify.html?certNumbe...>
Blog:http://www.itrestauracion.com.ar
Cv: http://cv.rinorondan.com.ar <http://www.rinorondan.com.ar/>
http://counter.li.org Linux User -> #517918
Long live the Holy Federation!!
Death to the savage Unitarians!!
^^^ Transcription from the era ^^^
Autostart vm's at host boot on local storage
by Brent Hartzell
Can this be done? We hit a road block with gluster and will be using local
storage while testing gluster. Only problem: if a host reboots, the VMs on
that host do not come back up. Is there a way to have ovirt/libvirt start
all VMs residing on the local storage?
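One way to approximate this from outside oVirt would be a small boot-time
script (rc.local, cron @reboot) that asks the engine to start the VMs again;
the following is only a sketch, assuming the oVirt 3.x Python SDK (ovirtsdk)
and made-up engine URL, credentials and VM names:

# Sketch only (untested): start a fixed list of local-storage VMs via the
# engine REST API at host boot. Engine URL, credentials and VM names are
# placeholders.
from ovirtsdk.api import API

VMS = ['dev-vm-01', 'dev-vm-02']    # VMs living on this host's local storage

api = API(url='https://engine.example.com/api',
          username='admin@internal',
          password='secret',
          insecure=True)             # or ca_file=... for proper TLS checking
try:
    for name in VMS:
        vm = api.vms.get(name=name)
        if vm is not None and vm.get_status().get_state() == 'down':
            vm.start()
finally:
    api.disconnect()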