[Users] Unable to delete a snapshot
by Nicolas Ecarnot
Hi,
With our oVirt 3.3 setup, I created a snapshot three weeks ago on a VM that I had
properly shut down.
It has been running fine so far.
Today, after having shut the VM down properly again, I'm trying to delete the
snapshot and I get an error:
"Failed to delete snapshot 'blahblahbla' for VM 'myVM'."
The disk is thin provisioned and accessed via VirtIO, nothing special.
The log below comes from the manager.
I hope someone can help us, because this server is quite important to us.
Thank you.
2014-01-06 10:10:58,826 INFO
[org.ovirt.engine.core.bll.RemoveSnapshotCommand]
(ajp--127.0.0.1-8702-8) Lock Acquired to object EngineLock [exclu
siveLocks= key: cb953dc1-c796-457a-99a1-0e54f1c0c338 value: VM
, sharedLocks= ]
2014-01-06 10:10:58,837 INFO
[org.ovirt.engine.core.bll.RemoveSnapshotCommand]
(ajp--127.0.0.1-8702-8) Running command: RemoveSnapshotCommand internal:
false. Entities affected : ID: cb953dc1-c796-457a-99a1-0e54f1c0c338
Type: VM
2014-01-06 10:10:58,840 INFO
[org.ovirt.engine.core.bll.RemoveSnapshotCommand]
(ajp--127.0.0.1-8702-8) Lock freed to object EngineLock [exclusiveLocks=
key: cb953dc1-c796-457a-99a1-0e54f1c0c338 value: VM
, sharedLocks= ]
2014-01-06 10:10:58,844 INFO
[org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand]
(ajp--127.0.0.1-8702-8) Running command: RemoveSnapshotSingleDiskCommand
internal: true. Entities affected : ID:
00000000-0000-0000-0000-000000000000 Type: Storage
2014-01-06 10:10:58,848 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.MergeSnapshotsVDSCommand]
(ajp--127.0.0.1-8702-8) START, MergeSnapshotsVDSCommand( storagePoolId =
5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false,
storageDomainId = 11a077c7-658b-49bb-8596-a785109c24c9, imageGroupId =
69220da6-eeed-4435-aad0-7aa33f3a0d21, imageId =
506085b6-40e0-4176-a4df-9102857f51f2, imageId2 =
c50561d9-c3ba-4366-b2bc-49bbfaa4cd23, vmId =
cb953dc1-c796-457a-99a1-0e54f1c0c338, postZero = false), log id: 22d6503b
2014-01-06 10:10:59,511 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.MergeSnapshotsVDSCommand]
(ajp--127.0.0.1-8702-8) FINISH, MergeSnapshotsVDSCommand, log id: 22d6503b
2014-01-06 10:10:59,518 INFO
[org.ovirt.engine.core.bll.CommandAsyncTask] (ajp--127.0.0.1-8702-8)
CommandAsyncTask::Adding CommandMultiAsyncTasks object for command
b402868f-b7f9-4c0e-a6fd-bdc51ff49952
2014-01-06 10:10:59,519 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
(ajp--127.0.0.1-8702-8) CommandMultiAsyncTasks::AttachTask: Attaching
task 6caec3bc-fc66-42be-a642-7733fc033103 to command
b402868f-b7f9-4c0e-a6fd-bdc51ff49952.
2014-01-06 10:10:59,525 INFO
[org.ovirt.engine.core.bll.AsyncTaskManager] (ajp--127.0.0.1-8702-8)
Adding task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command
RemoveSnapshot, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling
hasn't started yet..
2014-01-06 10:10:59,530 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-8) Correlation ID: 3b3e6fb1, Job ID:
53867ef7-d767-45d2-b446-e5d3f5584a19, Call Stack: null, Custom Event ID:
-1, Message: Snapshot 'Maj 47 60 vers 5.2.3' deletion for VM 'uc-674'
was initiated by necarnot.
2014-01-06 10:10:59,532 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(ajp--127.0.0.1-8702-8) BaseAsyncTask::StartPollingTask: Starting to
poll task 6caec3bc-fc66-42be-a642-7733fc033103.
2014-01-06 10:11:01,811 INFO
[org.ovirt.engine.core.bll.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-20) Polling and updating Async Tasks: 2
tasks, 1 tasks to poll now
2014-01-06 10:11:01,824 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
(DefaultQuartzScheduler_Worker-20) Failed in HSMGetAllTasksStatusesVDS
method
2014-01-06 10:11:01,825 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
(DefaultQuartzScheduler_Worker-20) Error code GeneralException and error
message VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2'
2014-01-06 10:11:01,826 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-20) SPMAsyncTask::PollTask: Polling task
6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command RemoveSnapshot,
Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned
status finished, result 'cleanSuccess'.
2014-01-06 10:11:01,829 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-20) BaseAsyncTask::LogEndTaskFailure:
Task 6caec3bc-fc66-42be-a642-7733fc033103 (Parent Command
RemoveSnapshot, Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with
failure:^M
-- Result: cleanSuccess^M
-- Message: VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2',^M
-- Exception: VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = '506085b6-40e0-4176-a4df-9102857f51f2'
2014-01-06 10:11:01,832 INFO
[org.ovirt.engine.core.bll.CommandAsyncTask]
(DefaultQuartzScheduler_Worker-20)
CommandAsyncTask::EndActionIfNecessary: All tasks of command
b402868f-b7f9-4c0e-a6fd-bdc51ff49952 has ended -> executing EndAction
2014-01-06 10:11:01,833 INFO
[org.ovirt.engine.core.bll.CommandAsyncTask]
(DefaultQuartzScheduler_Worker-20) CommandAsyncTask::EndAction: Ending
action for 1 tasks (command ID: b402868f-b7f9-4c0e-a6fd-bdc51ff49952):
calling EndAction .
2014-01-06 10:11:01,834 INFO
[org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27)
CommandAsyncTask::EndCommandAction [within thread] context: Attempting
to EndAction RemoveSnapshot, executionIndex: 0
2014-01-06 10:11:01,839 ERROR
[org.ovirt.engine.core.bll.RemoveSnapshotCommand] (pool-6-thread-27)
[3b3e6fb1] Ending command with failure:
org.ovirt.engine.core.bll.RemoveSnapshotCommand
2014-01-06 10:11:01,844 ERROR
[org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand]
(pool-6-thread-27) [33fa2a5d] Ending command with failure:
org.ovirt.engine.core.bll.RemoveSnapshotSingleDiskCommand
2014-01-06 10:11:01,848 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(pool-6-thread-27) Correlation ID: 3b3e6fb1, Job ID:
53867ef7-d767-45d2-b446-e5d3f5584a19, Call Stack: null, Custom Event ID:
-1, Message: Failed to delete snapshot 'Maj 47 60 vers 5.2.3' for VM
'uc-674'.
2014-01-06 10:11:01,850 INFO
[org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27)
CommandAsyncTask::HandleEndActionResult [within thread]: EndAction for
action type RemoveSnapshot completed, handling the result.
2014-01-06 10:11:01,851 INFO
[org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27)
CommandAsyncTask::HandleEndActionResult [within thread]: EndAction for
action type RemoveSnapshot succeeded, clearing tasks.
2014-01-06 10:11:01,853 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(pool-6-thread-27) SPMAsyncTask::ClearAsyncTask: Attempting to clear
task 6caec3bc-fc66-42be-a642-7733fc033103
2014-01-06 10:11:01,853 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(pool-6-thread-27) START, SPMClearTaskVDSCommand( storagePoolId =
5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false,
taskId = 6caec3bc-fc66-42be-a642-7733fc033103), log id: 424e7cf
2014-01-06 10:11:01,873 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(pool-6-thread-27) START, HSMClearTaskVDSCommand(HostName =
serv-vm-adm9, HostId = ba48edd4-c528-4832-bda4-4ab66245df24,
taskId=6caec3bc-fc66-42be-a642-7733fc033103), log id: 12eec929
2014-01-06 10:11:01,884 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(pool-6-thread-27) FINISH, HSMClearTaskVDSCommand, log id: 12eec929
2014-01-06 10:11:01,885 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(pool-6-thread-27) FINISH, SPMClearTaskVDSCommand, log id: 424e7cf
2014-01-06 10:11:01,886 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(pool-6-thread-27) BaseAsyncTask::RemoveTaskFromDB: Removed task
6caec3bc-fc66-42be-a642-7733fc033103 from DataBase
2014-01-06 10:11:01,887 INFO
[org.ovirt.engine.core.bll.CommandAsyncTask] (pool-6-thread-27)
CommandAsyncTask::HandleEndActionResult [within thread]: Removing
CommandMultiAsyncTasks object for entity
b402868f-b7f9-4c0e-a6fd-bdc51ff49952
2014-01-06 10:11:07,703 INFO
[org.ovirt.engine.core.bll.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-9) Setting new tasks map. The map
contains now 1 tasks
2014-01-06 10:12:07,703 INFO
[org.ovirt.engine.core.bll.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-99) Setting new tasks map. The map
contains now 0 tasks
2014-01-06 10:12:07,704 INFO
[org.ovirt.engine.core.bll.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-99) Cleared all tasks of pool
5849b030-626e-47cb-ad90-3ce782d831b3.
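In case it helps narrow this down: the engine log above shows the merge task and the image UUID that the VDS error reports. On the SPM host, one could check both sides of that; this is only a sketch, assuming the default vdsm log location and a block (iSCSI) storage domain layout:

```shell
# Find the failing merge task in the SPM host's vdsm log
# (task and image UUIDs are the ones from the engine log above)
grep '6caec3bc-fc66-42be-a642-7733fc033103' /var/log/vdsm/vdsm.log 2>/dev/null || true

# On a block storage domain, the VG is named after the domain UUID, and
# parent links of the snapshot chain are stored as PU_<uuid> tags on the LVs:
lvs -o lv_name,lv_tags 11a077c7-658b-49bb-8596-a785109c24c9 2>/dev/null \
  | grep 506085b6 || true
```

The vdsm-side traceback around that task ID is usually more specific than the engine's generic VDSErrorException.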
--
Nicolas Ecarnot
10 years, 6 months
[Users] oVirt Node Bootloader issue
by Nauman Abbas
Hello there
I'm facing a problem that has happened to me multiple times on
different machines. I have two hard drives in my system (let's say HDD A and
HDD B). I boot the system from the oVirt Node CD to install it. Now I want
both the bootloader and the node installation to go on HDD B, not HDD A.
When I get to the screen that asks me to select the hard drive for the
bootloader, I use the arrow keys to select HDD B and press Enter (I have also
tried pressing Space, or just using Tab and continuing while it is selected).
Then I get to the screen asking where to install the node. I select HDD B
and unselect HDD A. I keep the partitioning setup as default (I have also
tried manual partitioning, keeping 5 GB unused on HDD A). When the install runs,
the node is installed fine on HDD B as expected, but the bootloader always
ends up on HDD A. This issue has cost me many days on different
machines and I can't seem to find a way around it.
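A quick way to confirm where the bootloader actually landed after each attempt; a sketch, assuming HDD A and HDD B show up as /dev/sda and /dev/sdb (run as root):

```shell
# Read the first sector (MBR) of each disk and look for the GRUB marker;
# the disk whose MBR mentions GRUB is the one carrying the bootloader.
for disk in /dev/sda /dev/sdb; do
  echo "== $disk =="
  dd if=$disk bs=512 count=1 2>/dev/null | strings | grep -i grub \
    || echo "no GRUB marker found"
done
```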
Nauman Abbas
Assistant System Administrator (LMS),
Room No. A-207, SEECS,
National University of Sciences & Technology,
+ 92 321 5359946
[Users] VM Status "Unknown"
by Ryan Womer
During a migration, the destination host lost connectivity to the SAN and crashed.
Once the server came back up, 3 VMs that didn't finish migrating have been
stuck in status "Unknown." vdsClient doesn't list any of the VMs on either
host. QEMU doesn't have them listed as mounted on either host. Action
vm start and stop result in "Status: 409".
The disks for all 3 VMs are listed as green in the WebAdmin. I've tried
"action vm <name> start", "action vm <name> stop" and "update vm <name>
--status-state down" — no joy. They remain in "Unknown."
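For what it's worth: when the engine loses track of VMs like this and neither host reports them, a last-resort fix that has worked for others is to mark them Down directly in the engine database. This is only a sketch — the table and column names below are my assumption for the 3.x engine schema, so stop ovirt-engine and back up the database before trying anything like it:

```shell
# HEDGED: vm_static/vm_dynamic and status = 0 (Down) are assumptions for the
# oVirt 3.x engine schema; verify against your own DB before running.
sudo -u postgres psql engine -c \
  "UPDATE vm_dynamic SET status = 0
     WHERE vm_guid IN (SELECT vm_guid FROM vm_static WHERE vm_name = 'my-stuck-vm');"
```

After restarting ovirt-engine, the VMs should show as Down and be startable again.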
[Users] oVirt/RHEV Android client (Opaque) available for beta testing
by i iordanov
Hello,
We invite any interested oVirt/RHEV developers and administrators to
beta-test Opaque, a new Android oVirt/RHEV client application.
To opt in, please reply to this message with an email address associated
with a Google Account, because joining the beta-test group is based on
membership in a Google+ community. If you don't want that email address
posted to the mailing list, don't include it in your reply!
Itamar or I will add you to the community and let you know that you can
proceed to the following two steps:
1) Please visit this page to accept the invitation:
https://plus.google.com/communities/116099119712127782216
2) Once you've become a member of the Google+ group, to opt-in, visit:
https://play.google.com/apps/testing/com.undatech.opaquebeta
You will be able to download Opaque from Google Play by following the link
at the bottom of the opt-in page.
Please share your experiences with Opaque to the mailing list!
Cheers,
iordan
--
The conscious mind has only one thread of execution.
[Users] Problem with DWH installation
by Michael Wagenknecht
Hi,
I cannot install the oVirt DWH.
Here is the logfile:
2013-11-05 15:00:12::DEBUG::ovirt-engine-dwh-setup::250::root:: starting
main()
2013-11-05 15:00:12::DEBUG::common_utils::415::root:: found existing
pgpass file, fetching DB host value
2013-11-05 15:00:12::DEBUG::common_utils::415::root:: found existing
pgpass file, fetching DB port value
2013-11-05 15:00:12::DEBUG::common_utils::415::root:: found existing
pgpass file, fetching DB admin value
2013-11-05 15:00:12::DEBUG::common_utils::415::root:: found existing
pgpass file, fetching DB admin value
2013-11-05 15:00:12::DEBUG::common_utils::448::root:: getting DB
password for postgres
2013-11-05 15:00:12::DEBUG::common_utils::457::root:: found password for
username postgres
2013-11-05 15:00:12::DEBUG::common_utils::58::root:: getting vdc option
MinimalETLVersion
2013-11-05 15:00:12::DEBUG::common_utils::512::root:: Executing command
--> '['/usr/bin/engine-config', '-g', 'MinimalETLVersion',
'--cver=general', '-p',
'/usr/share/ovirt-engine/conf/engine-config-install.properties']'
2013-11-05 15:00:13::DEBUG::common_utils::551::root:: output =
2013-11-05 15:00:13::DEBUG::common_utils::552::root:: stderr = Files
/usr/share/ovirt-engine/conf/engine-config-install.properties does not exist
2013-11-05 15:00:13::DEBUG::common_utils::553::root:: retcode = 1
2013-11-05 15:00:13::ERROR::ovirt-engine-dwh-setup::294::root::
Exception caught!
2013-11-05 15:00:13::ERROR::ovirt-engine-dwh-setup::295::root::
Traceback (most recent call last):
File "/usr/bin/ovirt-engine-dwh-setup", line 255, in main
minimalVersion = utils.getVDCOption("MinimalETLVersion")
File "/usr/share/ovirt-engine-dwh/common_utils.py", line 60, in
getVDCOption
output, rc = execCmd(cmdList=cmd, failOnError=True, msg="Error:
failed fetching configuration field %s" % key)
File "/usr/share/ovirt-engine-dwh/common_utils.py", line 556, in execCmd
raise Exception(msg)
Exception: Error: failed fetching configuration field MinimalETLVersion
These are the installed packages:
ovirt-engine-dwh-3.2.0-1.el6.noarch
ovirt-release-el6-8-1.noarch
ovirt-engine-sdk-python-3.3.0.6-1.el6.noarch
ovirt-host-deploy-java-1.1.1-1.el6.noarch
ovirt-engine-dbscripts-3.3.0.1-1.el6.noarch
ovirt-engine-reports-3.2.0-2.el6.noarch
ovirt-engine-lib-3.3.0.1-1.el6.noarch
ovirt-engine-setup-3.3.0.1-1.el6.noarch
ovirt-log-collector-3.3.1-1.el6.noarch
ovirt-image-uploader-3.3.1-1.el6.noarch
ovirt-host-deploy-1.1.1-1.el6.noarch
ovirt-engine-webadmin-portal-3.3.0.1-1.el6.noarch
ovirt-engine-restapi-3.3.0.1-1.el6.noarch
ovirt-engine-tools-3.3.0.1-1.el6.noarch
ovirt-engine-backend-3.3.0.1-1.el6.noarch
ovirt-engine-cli-3.3.0.4-1.el6.noarch
ovirt-iso-uploader-3.3.1-1.el6.noarch
ovirt-engine-userportal-3.3.0.1-1.el6.noarch
ovirt-engine-3.3.0.1-1.el6.noarch
Can you help me?
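The traceback suggests the setup script fails at the very first step because engine-config is pointed at a properties file that does not exist. Two quick checks (a sketch; the paths are the ones from your log):

```shell
# Does the file the DWH installer passes to engine-config actually exist?
ls -l /usr/share/ovirt-engine/conf/engine-config-install.properties 2>/dev/null \
  || echo "properties file missing"

# Does engine-config answer at all when run normally?
engine-config -g MinimalETLVersion 2>/dev/null || echo "engine-config failed"
```

It may also be worth noting that the package list shows ovirt-engine-dwh-3.2.0 installed alongside engine 3.3.0.1; a DWH package matching the engine version may well expect different paths.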
Best regards,
Michael
[Users] Migration of Windows
by Rick Ingersoll
Hi,
I have an oVirt 3.1 build with 3 hosts and a few virtual
machines set up that I am using for testing. I am using Gluster storage
set up as a distribution across the 3 hosts. I can migrate Linux guests
across my 3 hosts, but I cannot migrate Windows guests. I get "Migration
failed due to Error: Fatal error during migration." The event ID is 65. Is
there something additional that needs to be done to Windows guests for them to
support live migration?
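Event 65 only says the migration failed; the underlying libvirt/QEMU error is usually recorded on the hosts. A sketch for digging it out (log paths assume stock vdsm/libvirt locations, and MyWindowsVM is a placeholder for the actual VM name):

```shell
# On both source and destination hosts, around the time of the failure:
grep -i 'migration' /var/log/vdsm/vdsm.log 2>/dev/null | tail -n 50 || true

# The per-VM QEMU log often contains the real reason (replace MyWindowsVM):
tail -n 50 /var/log/libvirt/qemu/MyWindowsVM.log 2>/dev/null || true
```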
Thanks,
Rick Ingersoll
IT Consultant
(919) 234-5100 main
(919) 234-5101 direct
(919) 757-5605 mobile
(919) 747-7409 fax
rick.ingersoll(a)mjritsolutions.com<mailto:rick.ingersoll@mjritsolutions.com>
http://www.mjritsolutions.com<http://www.mjritsolutions.com/>
[Users] Vm's being paused
by Neil
Hi guys,
I've had two different VMs randomly pause this past week, and inside oVirt
the error received is something like 'VM ran out of storage and was
paused'. Resuming the VMs didn't work; I had to force them off and then
on, which resolved the issue.
Has anyone had this issue before?
I realise this is very vague, so could you please let me know which logs
to send in.
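As for which logs: the pause reason is recorded by vdsm on the host that was running the VM, and running out of room to extend a thin-provisioned disk on block storage is a common culprit. A sketch, assuming the stock log location:

```shell
# On the host that was running the paused VM, look for the pause reason;
# ENOSPC means the thin-provisioned disk could not be extended in time:
grep -iE 'abnormal vmstate|enospc|paus' /var/log/vdsm/vdsm.log 2>/dev/null \
  | tail -n 20 || true
```

Attaching the matching slice of /var/log/vdsm/vdsm.log from that host, plus engine.log from the manager, is usually enough for the list; ovirt-log-collector can gather both in one go.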
Thank you
Regards.
Neil Wilson
[Users] API read-only access / roles
by Sander Grendelman
I'm working on (Zabbix) monitoring through the RESTful API.
Which role should I assign to the monitoring user?
The user only needs read access to the data, but it looks like
I need to assign at least an "Admin" role to the user to be
able to read data through the API.
For this I've created a "AdminLoginOnly" role that only has
System->Configure System->Login Permissions access.
Is this the way to go for this kind of configuration? Or is there
a way to further minimize the permissions of this user?
Another issue is that a "Login" event is generated every time
the user connects through the API. This makes the "Events"
pane less useful / readable. Is there a way to disable this for
some users/roles?
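For reference, the monitoring query itself can stay very simple; a sketch, assuming the 3.x /api entry point and a user carrying your AdminLoginOnly role plus read permissions on the polled objects (host name and credentials are placeholders):

```shell
# Read-only GET against the RESTful API; the CA cert path is the default
# location on the engine host.
curl -s --cacert /etc/pki/ovirt-engine/ca.pem \
  -u 'monitor@internal:PASSWORD' \
  -H 'Accept: application/xml' \
  'https://engine.example.com/api/vms'
```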
[Users] "Volume Group does not exist". Blame device-mapper ?
by Nicolas Ecarnot
Hi,
oVirt 3.3, no big issue since the recent snapshot joke, but all in all
running fine.
All my VMs are stored in an iSCSI SAN. The VMs usually use only one
or two disks (1: system, 2: data) and that is OK.
On Friday, I created a new LUN. From inside a VM, I connected to it via iscsiadm
and successfully logged in to the LUN (session, automatic attach on boot,
read, write): nice.
Then, after detaching it and shutting down the VM, and for the first time,
I tried to make use of the "direct attach" feature to attach the disk
directly from oVirt, logging in to the session via oVirt.
The connection worked fine and I saw the disk appear in my VM as /dev/sda or
whatever. I was able to mount it, read and write.
Then disaster struck: many nodes suddenly became
unresponsive, quickly migrating their VMs to the remaining nodes.
Fortunately, the migrations ran fine and I lost no VMs and had no downtime, but I
had to reboot every affected node (other actions failed).
On the failing nodes, /var/log/messages showed the log you can read at
the end of this message.
I first get device-mapper warnings, then the host is unable to work
with the logical volumes.
The 3 volumes are the three main storage domains, perfectly up and
running, where I store my oVirt VMs.
My reflections:
- I'm not sure device-mapper is to blame. I frequently see device-mapper
complaining and nothing gets worse (not oVirt specifically).
- I have not changed my network settings for months (bonding, linking...).
The only new factor is the use of the direct attach LUN.
- This morning I was able to reproduce the bug, just by trying this
attachment again and booting the VM. No mounting of the LUN, just booting
the VM and waiting, and this is enough to crash oVirt.
- When the disaster happens, among the nodes, usually only three
nodes get struck: the ones that run VMs. Obviously, after
migration, different nodes are hosting the VMs, and those new nodes are
the ones that then get struck.
This is quite reproducible.
And frightening.
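When a node is in that state, before rebooting it may be worth checking whether LVM and multipath still see the storage-domain devices at all; a sketch (the VG names are the storage domain UUIDs that appear in the log below):

```shell
# Does the node still see the storage-domain VG that vdsm says is missing?
vgs 1429ffe2-4137-416c-bb38-63fd73f4bcc1 2>/dev/null || echo "VG not visible"

# Are the multipath maps for the SAN LUNs still intact?
multipath -ll 2>/dev/null || true

# Force LVM to rescan devices and drop any stale cache:
pvscan --cache 2>/dev/null || true
```

If the VG reappears after the rescan, that would point at the direct-LUN login disturbing the device/multipath map rather than at the SAN itself.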
The log:
Jan 20 10:20:45 serv-vm-adm11 kernel: device-mapper: table: 253:36:
multipath: error getting device
Jan 20 10:20:45 serv-vm-adm11 kernel: device-mapper: ioctl: error adding
target to table
Jan 20 10:20:45 serv-vm-adm11 kernel: device-mapper: table: 253:36:
multipath: error getting device
Jan 20 10:20:45 serv-vm-adm11 kernel: device-mapper: ioctl: error adding
target to table
Jan 20 10:20:47 serv-vm-adm11 vdsm TaskManager.Task ERROR
Task=`847653e6-8b23-4429-ab25-257538b35293`::Unexpected
error#012Traceback (most recent call last):#012 File
"/usr/share/vdsm/storage/task.py", line 857, in _run#012 return
fn(*args, **kargs)#012 File "/usr/share/vdsm/logUtils.py", line 45, in
wrapper#012 res = f(*args, **kwargs)#012 File
"/usr/share/vdsm/storage/hsm.py", line 3053, in getVolumeSize#012
volUUID, bs=1))#012 File "/usr/share/vdsm/storage/volume.py", line 333,
in getVSize#012 mysd = sdCache.produce(sdUUID=sdUUID)#012 File
"/usr/share/vdsm/storage/sdc.py", line 98, in produce#012
domain.getRealDomain()#012 File "/usr/share/vdsm/storage/sdc.py", line
52, in getRealDomain#012 return
self._cache._realProduce(self._sdUUID)#012 File
"/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce#012
domain = self._findDomain(sdUUID)#012 File
"/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain#012 dom =
findMethod(sdUUID)#012 File "/usr/share/vdsm/storage/blockSD.py", line
1288, in findDomain#012 return
BlockStorageDomain(BlockStorageDomain.findDomainPath(sdUUID))#012 File
"/usr/share/vdsm/storage/blockSD.py", line 414, in __init__#012
lvm.checkVGBlockSizes(sdUUID, (self.logBlkSize, self.phyBlkSize))#012
File "/usr/share/vdsm/storage/lvm.py", line 976, in
checkVGBlockSizes#012 raise se.VolumeGroupDoesNotExist("vg_uuid: %s"
% vgUUID)#012VolumeGroupDoesNotExist: Volume Group does not exist:
('vg_uuid: 1429ffe2-4137-416c-bb38-63fd73f4bcc1',)
Jan 20 10:20:47 serv-vm-adm11 ¿<11>vdsm vm.Vm ERROR
vmId=`2c0bbb51-0f94-4bf1-9579-4e897260f88e`::Unable to update the volume
80bac371-6899-4fbe-a8e1-272037186bfb (domain:
1429ffe2-4137-416c-bb38-63fd73f4bcc1 image:
a5995c25-cdc9-4499-b9b4-08394a38165c) for the drive vda
Jan 20 10:20:48 serv-vm-adm11 vdsm TaskManager.Task ERROR
Task=`886e07bd-637b-4286-8a44-08dce5c8b207`::Unexpected
error#012Traceback (most recent call last):#012 File
"/usr/share/vdsm/storage/task.py", line 857, in _run#012 return
fn(*args, **kargs)#012 File "/usr/share/vdsm/logUtils.py", line 45, in
wrapper#012 res = f(*args, **kwargs)#012 File
"/usr/share/vdsm/storage/hsm.py", line 3053, in getVolumeSize#012
volUUID, bs=1))#012 File "/usr/share/vdsm/storage/volume.py", line 333,
in getVSize#012 mysd = sdCache.produce(sdUUID=sdUUID)#012 File
"/usr/share/vdsm/storage/sdc.py", line 98, in produce#012
domain.getRealDomain()#012 File "/usr/share/vdsm/storage/sdc.py", line
52, in getRealDomain#012 return
self._cache._realProduce(self._sdUUID)#012 File
"/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce#012
domain = self._findDomain(sdUUID)#012 File
"/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain#012 dom =
findMethod(sdUUID)#012 File "/usr/share/vdsm/storage/blockSD.py", line
1288, in findDomain#012 return
BlockStorageDomain(BlockStorageDomain.findDomainPath(sdUUID))#012 File
"/usr/share/vdsm/storage/blockSD.py", line 414, in __init__#012
lvm.checkVGBlockSizes(sdUUID, (self.logBlkSize, self.phyBlkSize))#012
File "/usr/share/vdsm/storage/lvm.py", line 976, in
checkVGBlockSizes#012 raise se.VolumeGroupDoesNotExist("vg_uuid: %s"
% vgUUID)#012VolumeGroupDoesNotExist: Volume Group does not exist:
('vg_uuid: 1429ffe2-4137-416c-bb38-63fd73f4bcc1',)
Jan 20 10:20:48 serv-vm-adm11 ¿<11>vdsm vm.Vm ERROR
vmId=`2c0bbb51-0f94-4bf1-9579-4e897260f88e`::Unable to update the volume
ea9c8f12-4eb6-42de-b6d6-6296555d0ac0 (domain:
1429ffe2-4137-416c-bb38-63fd73f4bcc1 image:
f42e0c9d-ad1b-4337-b82c-92914153ff44) for the drive vdb
Jan 20 10:21:03 serv-vm-adm11 vdsm TaskManager.Task ERROR
Task=`27bb14f9-0cd1-4316-95b0-736d162d5681`::Unexpected
error#012Traceback (most recent call last):#012 File
"/usr/share/vdsm/storage/task.py", line 857, in _run#012 return
fn(*args, **kargs)#012 File "/usr/share/vdsm/logUtils.py", line 45, in
wrapper#012 res = f(*args, **kwargs)#012 File
"/usr/share/vdsm/storage/hsm.py", line 3053, in getVolumeSize#012
volUUID, bs=1))#012 File "/usr/share/vdsm/storage/volume.py", line 333,
in getVSize#012 mysd = sdCache.produce(sdUUID=sdUUID)#012 File
"/usr/share/vdsm/storage/sdc.py", line 98, in produce#012
domain.getRealDomain()#012 File "/usr/share/vdsm/storage/sdc.py", line
52, in getRealDomain#012 return
self._cache._realProduce(self._sdUUID)#012 File
"/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce#012
domain = self._findDomain(sdUUID)#012 File
"/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain#012 dom =
findMethod(sdUUID)#012 File "/usr/share/vdsm/storage/blockSD.py", line
1288, in findDomain#012 return
BlockStorageDomain(BlockStorageDomain.findDomainPath(sdUUID))#012 File
"/usr/share/vdsm/storage/blockSD.py", line 414, in __init__#012
lvm.checkVGBlockSizes(sdUUID, (self.logBlkSize, self.phyBlkSize))#012
File "/usr/share/vdsm/storage/lvm.py", line 976, in
checkVGBlockSizes#012 raise se.VolumeGroupDoesNotExist("vg_uuid: %s"
% vgUUID)#012VolumeGroupDoesNotExist: Volume Group does not exist:
('vg_uuid: 83d39199-d4e4-474c-b232-7088c76a2811',)
--
Nicolas Ecarnot