[Users] Community feedback on the new UI-plugin Framework
by Oved Ourfalli
Hey all,
We had an oVirt workshop this week, which included a few sessions on the new oVirt UI plugin framework, among them a hackathon and a BOF session.
I've gathered some feedback from the participants about the framework and what they would like to see in its future.
1. People liked that it is a simple framework that lets you build nice extensions rapidly, without needing to learn complex technologies (basic JavaScript knowledge is all you need).
2. People want the framework to provide tools for adding UI components (main/sub tabs, dialogs, etc.) that aren't URL based, but are built from components we currently have in oVirt, such as grids, key-value pairs (like the General sub-tab), and action buttons in these custom tabs.
The main reason is to make it easy to develop a plugin with an oVirt-like look and feel (see the sketch after this list for how URL-based tabs are added today). Chris Morrissey from NetApp showed a very nice plugin he wrote that did have an oVirt-like look and feel, but it wasn't easy... and it required him to develop something specific in the 3rd-party application for the plugin to interact with (something similar to the work we did on the oVirt-Foreman UI plugin).
3. Support adding tasks to the system - plugins may trigger asynchronous tasks behind the scenes, both oVirt tasks and external ones. oVirt tasks and their progress will be reflected in the task management view, but if a flow contains external tasks as well, those are hard to track through the oVirt UI.
4. Plugin management
* The ability to see what plugins are installed, install new plugins and remove existing ones.
* Change the plugin configuration through webadmin
* Distinguish between public plugin configuration entries (entries the user can change) and private ones (entries the user can't).
I guess this point will be relevant for engine plugins as well (once support for such plugins is available), so we should consider providing a similar solution for both. Also, Chris pointed out that it should be taken into consideration when working on supporting an HA oVirt engine, since plugins are a vital part of the oVirt environment.
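To illustrate point 2: with today's API a tab is added by pointing it at an external URL, roughly like this (a minimal sketch; the plugin name and URLs are just placeholders, not a real plugin):

// Host page of the plugin (e.g. start.html), loaded by webadmin - sketch only
var api = parent.pluginApi('sample-plugin');   // placeholder plugin name

api.register({
    UiInit: function() {
        // Today: main tabs and sub-tabs show an external page identified by its URL
        api.addMainTab('Sample', 'sample-main-tab', 'http://example.com/sample.html');
        api.addSubTab('VirtualMachine', 'Sample', 'sample-vm-subtab', 'http://example.com/sample-vm.html');
    }
});

api.ready();

The request above is to be able to build such tabs from native oVirt components (grids, key-value views, action buttons) instead of pointing at an external page.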
If you find the feedback above accurate, or you have other comments that weren't mentioned here, please share them with us!
Thank you,
Oved
P.S:
I guess the slides will be uploaded sometime next week (I guess someone would have asked about it soon... so now you have your answer :-) )
[Users] UI Plugin issue when switching main tabs
by René Koch
Hi,
I'm working on a UI plugin to integrate Nagios/Icinga into oVirt Engine and have made some progress, but I have an issue when switching between main tabs.
I use VirtualMachineSelectionChange to build a URL with the name of the VM (and HostSelectionChange for hosts).
The name is used in my backend code (Perl) to fetch the monitoring status.
Here's the code of VirtualMachineSelectionChange:
VirtualMachineSelectionChange: function() {
    var vmName = arguments[0].name;
    alert(vmName);
    // Reload VM sub-tab
    api.setTabContentUrl('vms-monitoring', conf.url + '?subtab=vms&name=' + encodeURIComponent(vmName));
}
Everything works fine as long as I stay in the Virtual Machines main tab.
When switching to e.g. Disks and back to Virtual Machines again, the JavaScript code of start.html isn't processed anymore (or is cached (?), as my generated URL with the last VM name is still sent to my Perl backend) - I added an alert() to test this.
oVirt Engine version: ovirt-engine-3.2.0-1.20130118.gitd102d6f.fc18.noarch
Full code of start.html: http://pastebin.com/iEY6dA6F
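For reference, the overall structure of start.html follows the usual plugin registration pattern, roughly like this (a sketch; the plugin name and backend URL are placeholders, not necessarily the exact values used above):

// start.html (sketch) - loaded by webadmin as the plugin host page
var api = parent.pluginApi('monitoring-plugin');                 // placeholder plugin name
var conf = { url: 'http://backend.example.com/monitoring.pl' };  // placeholder backend URL

api.register({
    UiInit: function() {
        // Add a URL-based sub-tab under the Virtual Machines main tab
        api.addSubTab('VirtualMachine', 'Monitoring', 'vms-monitoring', conf.url + '?subtab=vms');
    },

    // Fired when the selection in the Virtual Machines main tab changes
    VirtualMachineSelectionChange: function() {
        var vmName = arguments[0].name;
        // Point the sub-tab at the backend page for the selected VM
        api.setTabContentUrl('vms-monitoring',
            conf.url + '?subtab=vms&name=' + encodeURIComponent(vmName));
    }
});

api.ready();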
Thanks a lot for your help,
René
Re: [Users] adding multiple interfaces with different networks
by Kevin Maziere Aubry
Hi
I have exactly the same issue.
This means that an interface of at least 1Gb must be dedicated to the
ovirtmgmt network, which is not a good idea.
Kevin
2012/12/26 Jonathan Horne <jhorne(a)skopos.us>
> 2012-12-26 16:48:56,416 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> (ajp--0.0.0.0-8009-8) [2d2d6184] Failed in SetupNetworksVDS method
> 2012-12-26 16:48:56,417 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> (ajp--0.0.0.0-8009-8) [2d2d6184] Error code ERR_BAD_BONDING and error
> message VDSGenericException: VDSErrorException: Failed to SetupNetworksVDS,
> error = bonding 'bond2' is already member of network 'ovirtmgmt'
> 2012-12-26 16:48:56,418 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
> (ajp--0.0.0.0-8009-8) [2d2d6184]
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to SetupNetworksVDS, error =
> bonding 'bond2' is already member of network 'ovirtmgmt'
> 2012-12-26 16:48:56,418 ERROR
> [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp--0.0.0.0-8009-8)
> [2d2d6184] Command SetupNetworksVDS execution failed. Exception:
> RuntimeException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to SetupNetworksVDS, error =
> bonding 'bond2' is already member of network 'ovirtmgmt'
>
>
> so I'm guessing… I can't have my vlan3204 or vlan3202 share an interface
> with ovirtmgmt?
>
> [root@d0lppc021 ~]# rpm -qa|grep ovirt
> ovirt-engine-webadmin-portal-3.1.0-3.19.el6.noarch
> ovirt-engine-cli-3.1.0.7-1.el6.noarch
> ovirt-image-uploader-3.1.0-16.el6.noarch
> ovirt-engine-backend-3.1.0-3.19.el6.noarch
> ovirt-engine-tools-common-3.1.0-3.19.el6.noarch
> ovirt-iso-uploader-3.1.0-16.el6.noarch
> ovirt-engine-genericapi-3.1.0-3.19.el6.noarch
> ovirt-engine-config-3.1.0-3.19.el6.noarch
> ovirt-log-collector-3.1.0-16.el6.noarch
> ovirt-engine-restapi-3.1.0-3.19.el6.noarch
> ovirt-engine-userportal-3.1.0-3.19.el6.noarch
> ovirt-engine-notification-service-3.1.0-3.19.el6.noarch
> ovirt-engine-dbscripts-3.1.0-3.19.el6.noarch
> ovirt-engine-3.1.0-3.19.el6.noarch
> ovirt-engine-jbossas711-1-0.x86_64
> ovirt-engine-setup-3.1.0-3.19.el6.noarch
> ovirt-engine-sdk-3.1.0.5-1.el6.noarch
>
>
>
--
Kevin Mazière
Responsable Infrastructure
Alter Way – Hosting
1 rue Royal - 227 Bureaux de la Colline
92213 Saint-Cloud Cedex
Tél : +33 (0)1 41 16 38 41
Mob : +33 (0)7 62 55 57 05
http://www.alterway.fr
[Users] ovirt 3.2 migrations failing
by Jonathan Horne
I just built up 2 nodes and a manager on 3.2 dreyou packages, and now that I have a VM up and installed with the rhev agent, the VM is unable to migrate. The failure is pretty much immediate.
I don't know where to begin troubleshooting this; can someone help me get going in the right direction? Just let me know what logs are appropriate and I will post them up.
thanks,
jonathan
[Users] Problem with libvirt
by Juan Jose
Hello everybody,
I have installed and configured an oVirt 3.1 engine on Fedora 17, with a
Fedora 17 node connected. I have defined an NFS domain for my VMs and another
for ISOs. I try to start a Fedora 17 Server with Run Once, and the machine
starts without problems; after that I proceed with the installation on its
virtual disk, but when I get to defining partitions on the virtual disk, the
machine freezes, I start to receive engine errors, and the default data
center goes into non-responsive status.
I can see these messages in /var/log/ovirt-engine/engine.log, which I attach
to this message:
....
2013-01-31 11:43:23,957 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] Recieved a Device without an address
when processing VM da09284e-3189-428b-a879-6201f7a5ca87 devices, skipping
device: {shared=false, volumeID=1d0e9fdf-c4bc-4894-8ff1-7a5e185d57a4,
index=0, propagateErrors=off, format=raw, type=disk, truesize=8589938688,
reqsize=0, bootOrder=2, iface=virtio,
volumeChain=[Ljava.lang.Object;@1ea2bdf9,
imageID=49e21bfc-384b-4bea-8013-f02b1be137c7,
domainID=57d184a0-908b-49b5-926f-cd413b9e6526, specParams={},
optional=false, needExtend=false,
path=/rhev/data-center/d6e7e8b8-49c7-11e2-a261-000a5e429f63/57d184a0-908b-49b5-926f-cd413b9e6526/images/49e21bfc-384b-4bea-8013-f02b1be137c7/1d0e9fdf-c4bc-4894-8ff1-7a5e185d57a4,
device=disk, poolID=d6e7e8b8-49c7-11e2-a261-000a5e429f63, readonly=false,
deviceId=49e21bfc-384b-4bea-8013-f02b1be137c7, apparentsize=8589934592}.
2013-01-31 11:43:23,960 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=4dca1c64-dbf8-4e31-b359-82cf0e259f65,Device=qxl,Type=video,BootOrder=0,SpecParams={vram=65536},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=false,alias=
2013-01-31 11:43:23,961 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=aba73f2f-e951-4eba-9da4-8fb58315df2c,Device=memballoon,Type=balloon,BootOrder=0,SpecParams={model=virtio},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=true,alias=
2013-01-31 11:43:23,962 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=9bfb770c-13fa-4bf6-9f1f-414927bc31b0,Device=cdrom,Type=disk,BootOrder=0,SpecParams={path=},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=true,alias=
2013-01-31 11:43:23,963 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=614bc0b4-64d8-4058-8bf8-83db62617e00,Device=bridge,Type=interface,BootOrder=0,SpecParams={},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=false,alias=
2013-01-31 11:43:23,964 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=49e21bfc-384b-4bea-8013-f02b1be137c7,Device=disk,Type=disk,BootOrder=0,SpecParams={},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=false,alias=
2013-01-31 11:43:26,063 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-24) [7d021319] VM Fedora17
da09284e-3189-428b-a879-6201f7a5ca87 moved from WaitForLaunch --> PoweringUp
2013-01-31 11:43:26,064 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
(QuartzScheduler_Worker-24) [7d021319] START, FullListVdsCommand(vdsId =
7d3491e8-49ce-11e2-8b2e-000a5e429f63, vds=null,
vmIds=[da09284e-3189-428b-a879-6201f7a5ca87]), log id: f68f564
2013-01-31 11:43:26,086 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
(QuartzScheduler_Worker-24) [7d021319] FINISH, FullListVdsCommand, return:
[Lorg.ovirt.engine.core.vdsbroker.xmlrpc.XmlRpcStruct;@33c68023, log id:
f68f564
2013-01-31 11:43:26,091 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-24) [7d021319] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=aba73f2f-e951-4eba-9da4-8fb58315df2c,Device=memballoon,Type=balloon,BootOrder=0,SpecParams={model=virtio},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=true,alias=
2013-01-31 11:43:26,092 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-24) [7d021319] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=9bfb770c-13fa-4bf6-9f1f-414927bc31b0,Device=cdrom,Type=disk,BootOrder=0,SpecParams={path=},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=true,alias=
2013-01-31 11:43:31,721 INFO
[org.ovirt.engine.core.bll.SetVmTicketCommand] (ajp--0.0.0.0-8009-11)
[28d7a789] Running command: SetVmTicketCommand internal: false. Entities
affected : ID: da09284e-3189-428b-a879-6201f7a5ca87 Type: VM
2013-01-31 11:43:31,724 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
(ajp--0.0.0.0-8009-11) [28d7a789] START, SetVmTicketVDSCommand(vdsId =
7d3491e8-49ce-11e2-8b2e-000a5e429f63,
vmId=da09284e-3189-428b-a879-6201f7a5ca87, ticket=qmcnuOICblb3,
validTime=120,m userName=admin@internal,
userId=fdfc627c-d875-11e0-90f0-83df133b58cc), log id: 6eaacb95
2013-01-31 11:43:31,758 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
(ajp--0.0.0.0-8009-11) [28d7a789] FINISH, SetVmTicketVDSCommand, log id:
6eaacb95
...
2013-01-31 11:49:13,392 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(QuartzScheduler_Worker-81) [164eaa47] domain
57d184a0-908b-49b5-926f-cd413b9e6526 in problem. vds: host1
2013-01-31 11:49:54,121 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-73) [73213e4f] vds::refreshVdsStats Failed
getVdsStats, vds = 7d3491e8-49ce-11e2-8b2e-000a5e429f63 : host1, error =
VDSNetworkException: VDSNetworkException:
2013-01-31 11:49:54,172 WARN [org.ovirt.engine.core.vdsbroker.VdsManager]
(QuartzScheduler_Worker-73) [73213e4f]
ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds =
7d3491e8-49ce-11e2-8b2e-000a5e429f63 : host1, VDS Network Error, continuing.
VDSNetworkException:
....
In the events window, after the VM freezes, I have the events below:
2013-Jan-31, 11:50:52 Failed to elect Host as Storage Pool Manager for
Data Center Default. Setting status to Non-Operational.
2013-Jan-31, 11:50:52 VM Fedora17 was set to the Unknown status.
2013-Jan-31, 11:50:52 Host host1 is non-responsive.
2013-Jan-31, 11:49:55 Invalid status on Data Center Default. Setting Data
Center status to Non-Responsive (On host host1, Error: Network error during
communication with the Host.).
2013-Jan-31, 11:44:25 VM Fedora17 started on Host host1
Any suggestions about the problem? It seems to be a libvirt problem; I will continue
investigating.
Many thanks in advance,
Juanjo.
[Users] Glusterfs HA doubts
by Adrian Gibanel
In oVirt 3.1 GlusterFS support was added. It was an easy way to replicate your virtual machine storage without too much hassle.
There are two main howtos:
* http://www.middleswarth.net/content/installing-ovirt-31-and-glusterfs-usi... (Robert Middleswarth)
* http://blog.jebpages.com/archives/ovirt-3-1-glusterized/ (Jason Brooks).
1) What about performance?
I've done some tests with rsync backups (even using the suggested --inplace rsync switch), which involve many small files. These backups were done onto locally mounted GlusterFS volumes. Instead of lasting about 2 hours, the backups took around 15 hours.
Is there maybe something that only happens with small files, while performance with big files is OK?
2) How to know the current status?
In DRBD you can check a proc file, if I remember correctly. I also remember that GlusterFS doesn't have an equivalent, and there's no evident way to know if all the files are synced.
If you have tried it, how do you know whether both sets of virtual disk images are in sync?
3) Mount DNS resolution
If you check Jason Brooks' howto, you will see that it uses a hostname to refer to the NFS mount. If you want HA, you need your storage to be mounted, and if the server1 host is down it doesn't help that the NFS mount point associated with the storage is server1:/vms/ and not server2:/vms/. Checking Middleswarth's howto, I think he does the same thing.
Let me explain a bit more so it's clear. My example setup is one with two host machines, where one runs a set of virtual machines and the other doesn't have any virtual machine running. Where is the virtual machine storage located? On the GlusterFS volume.
Say the first machine mounts the GlusterFS volume via NFS (as an example).
If it uses its own hostname for the NFS mount, then when that host goes down, the second host won't be able to mount the storage when the VMs are restarted in HA mode.
If it instead uses the second host's hostname for the NFS mount, then when the second host goes down, the virtual machines cannot access their virtual disks.
A workaround for this situation which I have thought of is to use /etc/hosts on both machines so that:
whatever.domain.com
resolves on each host to that host's own IP.
I think GlusterFS has a way of mounting the share with "-t glusterfs" that can somehow avoid these hostname problems, but I haven't read much about it, so I'm not too sure.
4) So my doubts basically are:
* Has anyone set up a two-host GlusterFS HA oVirt cluster where the storage is a replicated GlusterFS volume shared and stored by both of them?
* Does HA work when one of the hosts goes down?
* Or does it complain about the hostname, as I suspect?
* Any tips to ensure the best performance?
Thank you.
--
Adrián Gibanel
I.T. Manager
+34 675 683 301
www.btactic.com
[Users] 3.2 beta and f18 host on dell R815 problem
by Gianluca Cecchi
During install of the server I get this:
Host installation failed. Fix installation issues and try to Re-Install
In the deploy log:
2013-01-31 12:17:30 DEBUG
otopi.plugins.ovirt_host_deploy.vdsm.hardware
hardware._isVirtualizationEnabled:144 virtualization support
GenuineIntel (cpu: False, bios: True)
2013-01-31 12:17:30 DEBUG otopi.context context._executeMethod:127
method exception
Traceback (most recent call last):
File "/tmp/ovirt-SfEARpd3h4/pythonlib/otopi/context.py", line 117,
in _executeMethod
method['method']()
File "/tmp/ovirt-SfEARpd3h4/otopi-plugins/ovirt-host-deploy/vdsm/hardware.py",
line 170, in _validate_virtualization
_('Hardware does not support virtualization')
RuntimeError: Hardware does not support virtualization
2013-01-31 12:17:30 ERROR otopi.context context._executeMethod:136
Failed to execute stage 'Setup validation': Hardware does not support
virtualization
Note the GenuineIntel above... ??
But actually it is AMD:
[root@f18ovn03 ~]# lsmod|grep kvm
kvm_amd 59623 0
kvm 431794 1 kvm_amd
cat /proc/cpuinfo
...
processor : 47
vendor_id : AuthenticAMD
cpu family : 16
model : 9
model name : AMD Opteron(tm) Processor 6174
stepping : 1
microcode : 0x10000d9
cpu MHz : 800.000
cache size : 512 KB
physical id : 3
siblings : 12
core id : 5
cpu cores : 12
apicid : 59
initial apicid : 59
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt
pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl
nonstop_tsc extd_apicid amd_dcm pni monitor cx16 popcnt lahf_lm
cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch
osvw ibs skinit wdt nodeid_msr hw_pstate npt lbrv svm_lock nrip_save
pausefilter
bogomips : 4400.44
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate
Any hint?
Gianluca
[Users] 3.2 beta: Amd Opteron 6174 wrongly detected as 8 socket
by Gianluca Cecchi
Hello,
after deploying a node that has 4 sockets with 12 cores each, it is
wrongly detected in the web admin GUI.
See:
https://docs.google.com/file/d/0BwoPbcrMv8mvdjdYNjVfT2NWY0U/edit
It says 8 sockets each with 6 cores....
Output of
# virsh capabilities
here:
https://docs.google.com/file/d/0BwoPbcrMv8mveG5OaVBZN1VENlU/edit
output of cpuid here:
https://docs.google.com/file/d/0BwoPbcrMv8mvUFFRYkZEX0lmRG8/edit
I also ran this:
[root@f18ovn03 ~]# vdsClient -s 0 getVdsCaps
HBAInventory = {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:f9baf5a8f6c3'}], 'FC': []}
ISCSIInitiatorName = iqn.1994-05.com.redhat:f9baf5a8f6c3
bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask':
'', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr':
'', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}}
bridges = {'ovirtmgmt': {'addr': '192.168.1.102', 'cfg': {'DOMAIN':
'localdomain.local', 'UUID': '60d40d4a-d8ab-4f5b-bd48-2e807df36be4',
'DNS3': '82.113.193.3', 'IPADDR0': '192.168.1.102', 'DNS1':
'192.168.1.103', 'PREFIX0': '24', 'DEFROUTE': 'yes',
'IPV4_FAILURE_FATAL': 'no', 'DELAY': '0', 'NM_CONTROLLED': 'no',
'BOOTPROTO': 'none', 'GATEWAY0': '192.168.1.1', 'DNS2': '8.8.8.8',
'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes', 'IPV6INIT':
'no'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off',
'ports': ['em1']}}
clusterLevels = ['3.0', '3.1', '3.2']
cpuCores = 48
cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,amd_dcm,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,nodeid_msr,hw_pstate,npt,lbrv,svm_lock,nrip_save,pausefilter,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2
cpuModel = AMD Opteron(tm) Processor 6174
cpuSockets = 8
cpuSpeed = 800.000
cpuThreads = 48
emulatedMachines = ['pc-1.2', 'none', 'pc', 'pc-1.1', 'pc-1.0',
'pc-0.15', 'pc-0.14', 'pc-0.13', 'pc-0.12', 'pc-0.11', 'pc-0.10',
'isapc', 'pc-1.2', 'none', 'pc', 'pc-1.1', 'pc-1.0', 'pc-0.15',
'pc-0.14', 'pc-0.13', 'pc-0.12', 'pc-0.11', 'pc-0.10', 'isapc']
guestOverhead = 65
hooks = {}
kvmEnabled = true
lastClient = 192.168.1.111
lastClientIface = ovirtmgmt
management_ip =
memSize = 64418
netConfigDirty = False
networks = {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr':
'192.168.1.102', 'cfg': {'DOMAIN': 'localdomain.local', 'UUID':
'60d40d4a-d8ab-4f5b-bd48-2e807df36be4', 'DNS3': '82.113.193.3',
'IPADDR0': '192.168.1.102', 'DNS1': '192.168.1.103', 'PREFIX0': '24',
'DEFROUTE': 'yes', 'IPV4_FAILURE_FATAL': 'no', 'DELAY': '0',
'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'GATEWAY0': '192.168.1.1',
'DNS2': '8.8.8.8', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT':
'yes', 'IPV6INIT': 'no'}, 'mtu': '1500', 'netmask': '255.255.255.0',
'stp': 'off', 'bridged': True, 'gateway': '0.0.0.0', 'ports':
['em1']}}
nics = {'em4': {'addr': '', 'cfg': {'PEERROUTES': 'yes', 'UUID':
'bed68125-4345-4995-ba49-a6e5580c58dd', 'NAME': 'em4', 'TYPE':
'Ethernet', 'IPV6_PEERDNS': 'yes', 'DEFROUTE': 'yes', 'PEERDNS':
'yes', 'IPV4_FAILURE_FATAL': 'no', 'HWADDR': '00:25:64:F9:76:82',
'BOOTPROTO': 'dhcp', 'IPV6_AUTOCONF': 'yes', 'IPV6_FAILURE_FATAL':
'no', 'IPV6_PEERROUTES': 'yes', 'IPV6_DEFROUTE': 'yes', 'ONBOOT':
'yes', 'IPV6INIT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr':
'00:25:64:f9:76:82', 'speed': 0}, 'em1': {'addr': '', 'cfg':
{'BRIDGE': 'ovirtmgmt', 'DOMAIN': 'localdomain.local', 'DEVICE':
'em1', 'UUID': '60d40d4a-d8ab-4f5b-bd48-2e807df36be4', 'DNS3':
'82.113.193.3', 'IPADDR0': '192.168.1.102', 'DNS1': '192.168.1.103',
'PREFIX0': '24', 'DEFROUTE': 'yes', 'IPV4_FAILURE_FATAL': 'no',
'NM_CONTROLLED': 'no', 'GATEWAY0': '192.168.1.1', 'DNS2': '8.8.8.8',
'HWADDR': '00:25:64:f9:76:7c', 'ONBOOT': 'yes', 'IPV6INIT': 'no'},
'mtu': '1500', 'netmask': '', 'hwaddr': '00:25:64:f9:76:7c', 'speed':
1000}, 'em3': {'addr': '', 'cfg': {'PEERROUTES': 'yes', 'UUID':
'2984885c-fbd8-4ad1-a393-00f0a205ae79', 'NAME': 'em3', 'TYPE':
'Ethernet', 'IPV6_PEERDNS': 'yes', 'DEFROUTE': 'yes', 'PEERDNS':
'yes', 'IPV4_FAILURE_FATAL': 'no', 'HWADDR': '00:25:64:F9:76:80',
'BOOTPROTO': 'dhcp', 'IPV6_AUTOCONF': 'yes', 'IPV6_FAILURE_FATAL':
'no', 'IPV6_PEERROUTES': 'yes', 'IPV6_DEFROUTE': 'yes', 'ONBOOT':
'yes', 'IPV6INIT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr':
'00:25:64:f9:76:80', 'speed': 0}, 'em2': {'addr': '', 'cfg':
{'PEERROUTES': 'yes', 'UUID': 'ebd889bc-57ae-4ee9-8db2-4595309ee81c',
'NAME': 'em2', 'TYPE': 'Ethernet', 'IPV6_PEERDNS': 'yes', 'DEFROUTE':
'yes', 'PEERDNS': 'yes', 'IPV4_FAILURE_FATAL': 'no', 'HWADDR':
'00:25:64:F9:76:7E', 'BOOTPROTO': 'dhcp', 'IPV6_AUTOCONF': 'yes',
'IPV6_FAILURE_FATAL': 'no', 'IPV6_PEERROUTES': 'yes', 'IPV6_DEFROUTE':
'yes', 'ONBOOT': 'yes', 'IPV6INIT': 'yes'}, 'mtu': '1500', 'netmask':
'', 'hwaddr': '00:25:64:f9:76:7e', 'speed': 0}}
operatingSystem = {'release': '1', 'version': '18', 'name': 'Fedora'}
packages2 = {'kernel': {'release': '204.fc18.x86_64', 'buildtime':
1358955869.0, 'version': '3.7.4'}, 'spice-server': {'release':
'1.fc18', 'buildtime': 1356035501, 'version': '0.12.2'}, 'vdsm':
{'release': '6.fc18', 'buildtime': 1359564723, 'version': '4.10.3'},
'qemu-kvm': {'release': '2.fc18', 'buildtime': 1358351894, 'version':
'1.2.2'}, 'libvirt': {'release': '3.fc18', 'buildtime': 1355788803,
'version': '0.10.2.2'}, 'qemu-img': {'release': '2.fc18', 'buildtime':
1358351894, 'version': '1.2.2'}, 'mom': {'release': '1.fc18',
'buildtime': 1349470214, 'version': '0.3.0'}}
reservedMem = 321
software_revision = 6
software_version = 4.10
supportedENGINEs = ['3.0', '3.1']
supportedProtocols = ['2.2', '2.3']
uuid = 4C4C4544-0056-5910-8047-CAC04F4E344A
version_name = Snow Man
vlans = {}
vmTypes = ['kvm']
(this time the host is the intended one... ;-)
Gianluca
[Users] Fwd: Re: 3.2 beta and f18 host on dell R815 problem
by Gianluca Cecchi
Find attached:
Putty.log = output of the cpuid command
Engine.log after patching and retrying.
No file under host-deploy; it gives a syntax error.
>> > ---------- Forwarded message ----------
>> > From: Alon Bar-Lev <alonbl(a)redhat.com>
>> > Date: Thu, Jan 31, 2013 at 1:48 PM
>> > Subject: Re: [Users] 3.2 beta and f18 host on dell R815 problem
>> > To: Gianluca Cecchi <gianluca.cecchi(a)gmail.com>
>> >
>> >
>> > Sorry, had error.
>> >
>> > ----- Original Message -----
>> >> From: "Alon Bar-Lev" <alonbl(a)redhat.com>
>> >> To: "Gianluca Cecchi" <gianluca.cecchi(a)gmail.com>
>> >> Sent: Thursday, January 31, 2013 2:40:55 PM
>> >> Subject: Re: [Users] 3.2 beta and f18 host on dell R815 problem
>> >>
>> >> Hi!
>> >>
>> >> Can you please try to replace the attach file at:
>> >>
/usr/share/ovirt-host-deploy/plugins/ovirt-host-deploy/vdsm/hardware.py
>> >>
>> >> Retry and send me the log?
>> >>
>> >> I added some more debug to see what went wrong.
>> >>
>> >> Thanks!
>> >> Alon
>> >>
>> >>
>> >> ----- Original Message -----
>> >> > From: "Gianluca Cecchi" <gianluca.cecchi(a)gmail.com>
>> >> > To: "users" <users(a)ovirt.org>
>> >> > Sent: Thursday, January 31, 2013 1:57:38 PM
>> >> > Subject: Re: [Users] 3.2 beta and f18 host on dell R815 problem
>> >> >
>> >> > Output of command
>> >> > # virsh capabilities
>> >> > on this host
>> >> >
>> >> > https://docs.google.com/file/d/0BwoPbcrMv8mveG5OaVBZN1VENlU/edit
>> >> > _______________________________________________
>> >> > Users mailing list
>> >> > Users(a)ovirt.org
>> >> > http://lists.ovirt.org/mailman/listinfo/users
>> >> >
>> >>
>
>