[Users] VM Priority for Run/Migration queue set failed
by Alexandru Vladulescu
Hi All,
I have the following problem:
When I edit the configuration of a VM while it is shut down, I go to the
High Availability section, select the Highly Available check box, and
then try to raise the run/migration queue priority from Low to Medium
or High.
Although I select Medium or High, after I click OK and return to the
VM's general information, the Priority remains at Low.
The Highly Available check box behaves differently: changing it is
reflected in the General tab as soon as the VM is selected.
Might this be a bug?
I am using CentOS 6.3 with dreyou's repo, and the RPM packages I have
installed on the controller node are:
ovirt-engine-setup-3.1.0-3.19.el6.noarch
ovirt-engine-config-3.1.0-3.19.el6.noarch
ovirt-engine-jbossas711-1-0.x86_64
ovirt-log-collector-3.1.0-16.el6.noarch
ovirt-iso-uploader-3.1.0-16.el6.noarch
ovirt-engine-backend-3.1.0-3.19.el6.noarch
ovirt-engine-webadmin-portal-3.1.0-3.19.el6.noarch
ovirt-engine-dbscripts-3.1.0-3.19.el6.noarch
ovirt-engine-genericapi-3.1.0-3.19.el6.noarch
ovirt-engine-tools-common-3.1.0-3.19.el6.noarch
ovirt-engine-3.1.0-3.19.el6.noarch
ovirt-engine-sdk-3.1.0.5-1.el6.noarch
ovirt-image-uploader-3.1.0-16.el6.noarch
ovirt-engine-userportal-3.1.0-3.19.el6.noarch
ovirt-engine-restapi-3.1.0-3.19.el6.noarch
ovirt-engine-notification-service-3.1.0-3.19.el6.noarch
ovirt-engine-cli-3.1.0.7-1.el6.noarch
Also, here is the latest excerpt from engine.log when I perform the
actions described above.
2013-01-10 13:36:51,816 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(http--0.0.0.0-8443-2) Unable to get value of property: isQuotaDefault
for class org.ovirt.engine.core.common.businessentities.VmStatic
2013-01-10 13:36:51,820 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(http--0.0.0.0-8443-2) Unable to get value of property: isQuotaDefault
for class org.ovirt.engine.core.common.businessentities.VmStatic
2013-01-10 13:36:51,822 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(http--0.0.0.0-8443-2) Unable to get value of property: managedDeviceMap
for class org.ovirt.engine.core.common.businessentities.VmStatic
2013-01-10 13:36:51,825 WARN
[org.ovirt.engine.core.compat.backendcompat.PropertyInfo]
(http--0.0.0.0-8443-2) Unable to get value of property: managedDeviceMap
for class org.ovirt.engine.core.common.businessentities.VmStatic
2013-01-10 13:36:51,850 INFO [org.ovirt.engine.core.bll.UpdateVmCommand]
(http--0.0.0.0-8443-2) [52c2cdc2] Running command: UpdateVmCommand
internal: false. Entities affected : ID:
96e6705a-030c-411a-b365-ad6ff3fcfb56 Type: VM
2013-01-10 13:36:51,868 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(http--0.0.0.0-8443-2) [52c2cdc2] START, IsValidVDSCommand(storagePoolId
= b6c128ae-5987-11e2-964c-001e8c47d368, ignoreFailoverLimit = false,
compatabilityVersion = null), log id: 35d20229
2013-01-10 13:36:51,873 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsValidVDSCommand]
(http--0.0.0.0-8443-2) [52c2cdc2] FINISH, IsValidVDSCommand, return:
true, log id: 35d20229
2013-01-10 13:36:51,931 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.UpdateVMVDSCommand]
(http--0.0.0.0-8443-2) [52c2cdc2] START,
UpdateVMVDSCommand(storagePoolId = b6c128ae-5987-11e2-964c-001e8c47d368,
ignoreFailoverLimit = false, compatabilityVersion = null,
storageDomainId = 00000000-0000-0000-0000-000000000000,
infoDictionary.size = 1), log id: 1e0c9f16
2013-01-10 13:36:51,953 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.UpdateVMVDSCommand]
(http--0.0.0.0-8443-2) [52c2cdc2] FINISH, UpdateVMVDSCommand, log id:
1e0c9f16
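As a cross-check, I am also thinking of setting the same priority through
the REST API instead of the webadmin UI, roughly like this (a sketch only:
the VM id is the one from the log above, the engine host and credentials
are placeholders, and the priority values 1/50/100 for Low/Medium/High are
my assumption about the 3.1 API):

curl -k -u 'admin@internal:password' \
     -X PUT -H 'Content-Type: application/xml' \
     -d '<vm><high_availability><enabled>true</enabled><priority>100</priority></high_availability></vm>' \
     https://engine.example.com:8443/api/vms/96e6705a-030c-411a-b365-ad6ff3fcfb56

If the priority sticks when set this way, the problem is probably in the
webadmin dialog rather than in the backend.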
If anyone can give me a clue about this, it would be much appreciated.
Thanks
Alex.
Re: [Users] oVirt 3.1 - VM Migration Issue
by Roy Golan
On 01/03/2013 05:07 PM, Tom Brown wrote:
>
>> Interesting. Please search for the migrationCreate command on the destination host and then search for ERROR afterwards. What do you see?
>>
>> ----- Original Message -----
>>> From: "Tom Brown" <tom(a)ng23.net>
>>> To: users(a)ovirt.org
>>> Sent: Thursday, January 3, 2013 4:12:05 PM
>>> Subject: [Users] oVirt 3.1 - VM Migration Issue
>>>
>>>
>>> Hi
>>>
>>> I seem to have an issue with a single VM and migration; other VMs
>>> can migrate OK. When migrating from the GUI it appears to just hang,
>>> but in engine.log I see the following:
>>>
>>> 2013-01-03 14:03:10,359 INFO [org.ovirt.engine.core.bll.VdsSelector]
>>> (ajp--0.0.0.0-8009-59) Checking for a specific VDS only -
>>> id:a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
>>> name:ovirt-node.domain-name, host_name(ip):10.192.42.165
>>> 2013-01-03 14:03:10,411 INFO
>>> [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>> (pool-3-thread-48) [4d32917d] Running command:
>>> MigrateVmToServerCommand internal: false. Entities affected : ID:
>>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 Type: VM
>>> 2013-01-03 14:03:10,413 INFO [org.ovirt.engine.core.bll.VdsSelector]
>>> (pool-3-thread-48) [4d32917d] Checking for a specific VDS only -
>>> id:a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
>>> name:ovirt-node.domain-name, host_name(ip):10.192.42.165
>>> 2013-01-03 14:03:11,028 INFO
>>> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>> (pool-3-thread-48) [4d32917d] START, MigrateVDSCommand(vdsId =
>>> 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
>>> vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196,
>>> dstVdsId=a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
>>> dstHost=10.192.42.165:54321, migrationMethod=ONLINE), log id:
>>> 5011789b
>>> 2013-01-03 14:03:11,030 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>>> (pool-3-thread-48) [4d32917d] VdsBroker::migrate::Entered
>>> (vm_guid=9dc63ce4-0f76-4963-adfe-6f8eb1a44806,
>>> srcHost=10.192.42.196, dstHost=10.192.42.165:54321, method=online
>>> 2013-01-03 14:03:11,031 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>>> (pool-3-thread-48) [4d32917d] START, MigrateBrokerVDSCommand(vdsId =
>>> 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
>>> vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196,
>>> dstVdsId=a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
>>> dstHost=10.192.42.165:54321, migrationMethod=ONLINE), log id:
>>> 7cd53864
>>> 2013-01-03 14:03:11,041 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>>> (pool-3-thread-48) [4d32917d] FINISH, MigrateBrokerVDSCommand, log
>>> id: 7cd53864
>>> 2013-01-03 14:03:11,086 INFO
>>> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>> (pool-3-thread-48) [4d32917d] FINISH, MigrateVDSCommand, return:
>>> MigratingFrom, log id: 5011789b
>>> 2013-01-03 14:03:11,606 INFO
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>>> (QuartzScheduler_Worker-29) vds::refreshVmList vm id
>>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 is migrating to vds
>>> ovirt-node.domain-name ignoring it in the refresh till migration is
>>> done
>>> 2013-01-03 14:03:12,836 INFO
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>>> (QuartzScheduler_Worker-36) VM test002.domain-name
>>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 moved from MigratingFrom --> Up
>>> 2013-01-03 14:03:12,837 INFO
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>>> (QuartzScheduler_Worker-36) adding VM
>>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 to re-run list
>>> 2013-01-03 14:03:12,852 ERROR
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>>> (QuartzScheduler_Worker-36) Rerun vm
>>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806. Called from vds
>>> ovirt-node002.domain-name
>>> 2013-01-03 14:03:12,855 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
>>> (pool-3-thread-48) START, MigrateStatusVDSCommand(vdsId =
>>> 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
>>> vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806), log id: 4721a1f3
>>> 2013-01-03 14:03:12,864 ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>>> (pool-3-thread-48) Failed in MigrateStatusVDS method
>>> 2013-01-03 14:03:12,865 ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>>> (pool-3-thread-48) Error code migrateErr and error message
>>> VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS,
>>> error = Fatal error during migration
>>> 2013-01-03 14:03:12,865 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>>> (pool-3-thread-48) Command
>>> org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand
>>> return value
>>> Class Name:
>>> org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
>>> mStatus Class Name:
>>> org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
>>> mCode 12
>>> mMessage Fatal error during migration
>>>
>>>
>>> 2013-01-03 14:03:12,866 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>>> (pool-3-thread-48) Vds: ovirt-node002.itvonline.ads
>>> 2013-01-03 14:03:12,867 ERROR
>>> [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-48)
>>> Command MigrateStatusVDS execution failed. Exception:
>>> VDSErrorException: VDSGenericException: VDSErrorException: Failed to
>>> MigrateStatusVDS, error = Fatal error during migration
>>> 2013-01-03 14:03:12,867 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
>>> (pool-3-thread-48) FINISH, MigrateStatusVDSCommand, log id: 4721a1f3
>>>
>>> Does anyone have any idea what this might be? I am using 3.1 from
>>> dreyou as these are CentOS 6 nodes
>>>
> Any clue on which log to check on the new host? I see the following in messages:
VDSM is the virtualization agent; look at /var/log/vdsm/vdsm.log.
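Something along these lines on the destination host should surface it
(adjust the path if the log has already rotated):

grep -n 'migrationCreate' /var/log/vdsm/vdsm.log
grep -n 'ERROR' /var/log/vdsm/vdsm.log | tail -n 50

The lines around the first ERROR after the migrationCreate call usually
say why the destination refused the VM.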
>
> Jan 3 16:03:20 ovirt-node vdsm Storage.LVM WARNING lvm vgs failed: 5 [] [' Volume group "ab686999-f320-4a61-ae07-e99c2f858996" not found']
> Jan 3 16:03:20 ovirt-node vdsm Storage.StorageDomain WARNING Resource namespace ab686999-f320-4a61-ae07-e99c2f858996_imageNS already registered
> Jan 3 16:03:20 ovirt-node vdsm Storage.StorageDomain WARNING Resource namespace ab686999-f320-4a61-ae07-e99c2f858996_volumeNS already registered
> Jan 3 16:03:58 ovirt-node vdsm vm.Vm WARNING vmId=`9dc63ce4-0f76-4963-adfe-6f8eb1a44806`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}' found
> Jan 3 16:03:58 ovirt-node vdsm vm.Vm WARNING vmId=`9dc63ce4-0f76-4963-adfe-6f8eb1a44806`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}' found
> Jan 3 16:03:59 ovirt-node kernel: device vnet2 entered promiscuous mode
> Jan 3 16:03:59 ovirt-node kernel: ovirtmgmt: port 4(vnet2) entering forwarding state
> Jan 3 16:03:59 ovirt-node kernel: ovirtmgmt: port 4(vnet2) entering disabled state
> Jan 3 16:03:59 ovirt-node kernel: device vnet2 left promiscuous mode
> Jan 3 16:03:59 ovirt-node kernel: ovirtmgmt: port 4(vnet2) entering disabled state
>
> and the following in the qemu log for that VM on the new node
>
> 2013-01-03 16:03:59.706+0000: starting up
> LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -S -M rhel6.3.0 -cpu Nehalem -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -name test002.itvonline.ads -uuid 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=6-3.el6.centos.9,serial=55414E03-C241-11DF-BBDA-64093408D485_d4:85:64:09:34:08,uuid=9dc63ce4-0f76-4963-adfe-6f8eb1a44806 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/test002.itvonline.ads.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-01-03T16:03:58,driftfix=slew -no-shutdown -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=/rhev/data-center/bb0beebf-edab-41e2-83b8-16bdbbc5
> dda7/2a1939bd-9fa3-4896-b8a9-46234172aae7/images/e8711e5d-2f06-4c0f-b5c6-fa0806d7448f/0d93c51f-f838-4143-815c-9b3457d1a934,if=none,id=drive-virtio-disk0,format=raw,serial=e8711e5d-2f06-4c0f-b5c6-fa0806d7448f,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,fd=32,id=hostnet0,vhost=on,vhostfd=33 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:c0:2a:00,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/test002.itvonline.ads.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/test002.itvonline.ads.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev pty,id=charconsole0 -de
> vice virtconsole,chardev=charconsole0,id=console0 -device usb-tablet,id=input0 -vnc 10.192.42.165:4,password -k en-us -vga qxl -global qxl-vga.vram_size=67108864 -incoming tcp:0.0.0.0:49160 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
> 2013-01-03 16:03:59.955+0000: shutting down
>
> but that's about it?
>
> thanks
>
Re: [Users] What do you want to see in oVirt next?
by Trey Dockendorf
On Jan 3, 2013 4:15 PM, "Moran Goldboim" <mgoldboi(a)redhat.com> wrote:
>
> On 01/03/2013 07:42 PM, Darrell Budic wrote:
>>
>>
>> On Jan 3, 2013, at 10:25 AM, Patrick Hurrelmann wrote:
>>
>>> On 03.01.2013 17:08, Itamar Heim wrote:
>>>>
>>>> Hi Everyone,
>>>>
>>>>
>>>> as we wrap oVirt 3.2, I wanted to check with oVirt users on what they
>>>>
>>>> find good/useful in oVirt, and what they would like to see
>>>>
>>>> improved/added in coming versions?
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> Itamar
>>>
>>>
>>> For me, I'd like to see official rpms for RHEL6/CentOS6. According to
>>> the traffic on this list quite a lot are using Dreyou's packages.
>>
>>
>> I'm going to second this strongly! Official support would be very much
>> appreciated. Bonus points for supporting a migration from the dreyou
>> packages. No offense to dreyou, of course, just rather be better supported
>> by the official line on CentOS 6.x.
>
>
> EL6 RPMs are planned to be delivered with the 3.2 GA version, and nightly
> builds from there on.
> Hopefully we can push it to 3.2 beta.
>
> Moran.
>
>>
>>
>> Better support/integration of Windows-based SPICE clients would also be
>> much appreciated. I have many end users on Windows, and it's been a chore
>> to keep it working so far. This includes the client drivers for Windows VMs
>> to support the SPICE display for multiple displays. More of a client-side
>> thing, I know, but a desired feature in my environment.
>>
>> Thanks for the continued progress and support as well!
>>
>> -----------------
>> Darrell Budic
>> Zenfire
>>
>>
>>
>>
>>
>>
>>
Will the EL6 releases also include an EL6 version of ovirt-node? If not,
will the build dependencies for ovirt-node be available to allow for
custom node ISO builds?
- Trey
[Users] Configure NFS resource from Host
by jj197005
Hello everybody,
I'm trying to install and configure oVirt for testing in our university
faculty. I have installed an engine and a host, both on Fedora 17. The
engine is working and I can log in to the Data Center as the admin user.
I added my host to the data center through the portal, and that worked.
After that I tried to add an NFS resource that lives on the host I just
added. If I open a console on the engine and log in as the vdsm user, I
can mount the NFS export without problems. The problem appears when I
try to add this NFS resource to the Data Center.
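(For reference, the manual check I did was roughly the following; the
export path here is just an example, the mount is done as root and the
read/write test as the vdsm user:

mount -t nfs -o vers=3 nfs-server.example.org:/exports/ovirt-data /mnt/nfs-test
sudo -u vdsm touch /mnt/nfs-test/write-test
sudo -u vdsm rm /mnt/nfs-test/write-test
umount /mnt/nfs-test

Both the mount and the write as vdsm succeed.)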
I have followed the tutorial at
http://www.ovirt.org/Quick_Start_Guide#Configure_Storage, but when I
complete the form and press "OK", after one or two minutes I get a
screen with the error below:
"Error: A Request to the Server failed with the following Status Code: 500"
I'm attaching the vdsm.log file with the lines generated by this
operation. I hope someone can show me what the problem is. If you need
more information about the installation, I can provide it.
Many thanks in advance,
Juanjo.
[Users] Update on dates for oVirt 3.2 Release
by Mike Burns
At today's oVirt meeting, we reviewed the dates for the oVirt 3.2
Release. The updated dates are:
Devel Freeze and Branching: 2013-01-14
Beta Posted: 2013-01-15
Test Day: 2013-01-24
Target GA: 2013-01-30
There are several reasons for the slip. It is partially due to the slip
in the Fedora 18 Release schedule. The move to Fedora 18 has also
caused some issues for some of the sub-projects, most notably
ovirt-node.
Please let me know if you have any questions or concerns
Thanks
Mike Burns
on behalf of
The oVirt Team
[Users] oVirt Weekly Meeting Minutes -- 2013-01-09
by Mike Burns
#ovirt: oVirt Weekly Sync
Meeting started by mburns at 15:01:57 UTC (full logs).
Meeting summary
agenda and roll call (mburns, 15:02:02)
workshops (mburns, 15:04:33)
NetApp workshop (Jan 22-24) is mostly ready to go. USB keys ordered,
facilities arranged, schedule online (dneary, 15:06:54)
Current activities are mostly around organising of board meeting, and
co-ordinating burn-in of USB keys with latest pre-release version of 3.2
(dneary, 15:07:36)
Accommodation block has expired - if you need a room for the dates of
the conference, you should call up ASAP to ensure availability, and we
cannot guarantee the rate any more (dneary, 15:08:22)
Registration status: 63 registered, capacity is 100. ~20-25 Red Hatters,
~40 non-Red Hatters (dneary, 15:09:08)
Registration will close on January 15th, due to our requirement to
finalise numbers for catering, and get visitor badges made (dneary,
15:09:57)
Currently promoting the workshop to the Bay Area clouderati, and getting
some decent traction this week (some Citrix/CloudStack sign-ups, one
Inktank sign-up, and promising feedback from Cloudfoundry) (dneary,
15:11:50)
release status (mburns, 15:19:10)
not making 2013-01-09 release date (mburns, 15:21:02)
status update for ovirt-node (mburns, 15:21:11)
found some late breaking blocking issues with ovirt-node and move to F18
(mburns, 15:21:37)
some around selinux changes, some around various other component changes
(mburns, 15:21:53)
patches are in review now that should fix them (mburns, 15:23:03)
beta branch for all packages due by Monday January 14 (mburns, 15:38:12)
beta posted by Tuesday Jan 15? (mburns, 15:38:26)
test day 2013-01-24 (mburns, 15:40:56)
AGREED: release date target set for 30-Jan (mburns, 15:50:41)
ACTION: mburns to update release page and send communication to lists
(mburns, 15:52:24)
infra report (mburns, 15:55:40)
working on details of new hosting design (quaid, 15:56:17)
http://etherpad.ovirt.org/p/new_hosting_design_Jan_2013 (quaid,
15:57:47)
we'll talk on arch@ about service cutover dates & such (quaid, 15:58:31)
workshop - China (mburns, 16:00:27)
dates are set for workshop in China, but very early in the process
(20-21 March) (mburns, 16:03:29)
need call for content, discussion on whether workshops are the right way
to go about this (mburns, 16:03:48)
workshop will be in Shanghai (mburns, 16:04:50)
other topics (mburns, 16:06:13)
Meeting ended at 16:09:41 UTC (full logs).
Action items
mburns to update release page and send communication to lists
Action items, by person
mburns
mburns to update release page and send communication to lists
People present (lines said)
mburns (93)
dneary (45)
mgoldboi (33)
aglitke1 (15)
sgordon (14)
lh (11)
quaid (10)
ovirtbot (6)
karimb (5)
itamar (5)
oschreib_ (4)
jb_netapp (3)
itamar1 (2)
dustins (1)
garrett (1)
Generated by MeetBot 0.1.4.
[Users] trouble with pci passthrough
by Andreas Huser
Hi everybody,
I have trouble with PCI passthrough of a parallel port adapter; I need it for a key dongle.
The server is a single machine and I want to use it with the all-in-one plugin from oVirt.
I have done some tests with:
Fedora 17, CentOS 6.3, Oracle Linux 6.3
each with the latest kernel, qemu-kvm and libvirt from the repos. No extras or advanced configuration, just a plain standard server.
I install "yum groupinstall virtualization" + virt-manager and some other.
I configure iommu, modul blacklist and some other.
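Concretely, the host-side configuration amounts to roughly this (a sketch
from memory; the exact module names depend on the card, parport/parport_pc
is my guess for a plain parallel-port adapter, and an Intel box is assumed):

# kernel command line (grub.conf): enable the IOMMU
intel_iommu=on

# /etc/modprobe.d/blacklist-parport.conf: keep the host from grabbing the card
blacklist parport
blacklist parport_pc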
Then I start a Windows Server 2003 guest and assign the parallel adapter to the running server. I look in the Device Manager and find the adapter card.
The dongle works fine and the DATEV licence service is online.
So far so good.
But when I install oVirt on the same server, with the same kernel, qemu-kvm and libvirt, and attach the adapter card to the Windows Server 2003 guest, the Device Manager shows the card with the error "device cannot be start (code 10)".
I have been chasing this error for several days and have tried various things, but I cannot get any further.
Can someone help me?
Thanks & greetings
Andreas
[Users] ovirt tools - compatibility with RHEV
by Jiri Belka
Hi,
I just discovered some package name differences between the oVirt tools
and the RHEV tools (iso-uploader etc.). The parameters for handling
certificates also differ.
Are the oVirt tools compatible with RHEV, or should I use the
RHEV-specific packages to manage RHEV environments? If the oVirt tools
are tested to always work with RHEV, that would be nice.
jbelka
[Users] Best practice to resize a VM disk image
by Ricky
Hi,
If I have a VM that has run out of disk space, what is the best way to
increase the space? One way is to add a second, bigger disk to the VM
and then use dd or similar to copy the data over. But is it possible to
stretch the original disk, either inside or outside oVirt, and have
oVirt recognize the bigger size?
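To give an idea of what I mean by stretching it outside oVirt, I was
picturing something like the following with the VM shut down (the image
path and size are made up, and I don't know whether oVirt would notice
the new size afterwards, which is really my question):

qemu-img info /path/to/storage-domain/images/<image-id>/<volume-id>
qemu-img resize /path/to/storage-domain/images/<image-id>/<volume-id> +20G

After that the guest would still need its partition and filesystem grown,
e.g. with fdisk/parted plus resize2fs inside the VM.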
Regards //Ricky