VMs not shutting down from admin portal
by Jeff Clay
When selecting to shut down VMs from the admin portal, it often doesn't work,
although sometimes it does. These machines are all stateless and in the
same pool, yet while they occasionally shut down from the portal, most of the
time they don't. Here's what I see in engine.log when they don't shut down.
2014-05-19 18:17:42,477 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-2) [4d427221] Correlation ID: 4d427221, Job
ID: ce662a5c-9474-4406-90f5-e941e130b47d, Call Stack: null, Custom Event
ID: -1, Message: VM shutdown initiated by Jeff.Clay on VM USAROVRTVZ-13
(Host: USARPAOVRTHOST02).
2014-05-19 18:22:45,333 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-53) VM USAROVRTVZ-13
67a51ec0-659d-4372-b4f1-85a56e6c0992 moved from PoweringDown --> Up
2014-05-19 18:22:45,381 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-53) Correlation ID: null, Call Stack: null,
Custom Event ID: -1, Message: Shutdown of VM USAROVRTVZ-13 failed.
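For what it's worth, a quick way to spot these failures in bulk (a hypothetical helper, not an oVirt tool) is to scan engine.log for VMs that bounce from PoweringDown back to Up, which is the transition shown above when a shutdown fails:

```python
import re

# State transition logged when a guest does not act on the shutdown
# request and the engine moves it back from PoweringDown to Up.
BOUNCE = re.compile(r"VM (\S+) \S+ moved from PoweringDown --> Up")

def failed_shutdowns(log_lines):
    """Return the names of VMs whose shutdown request bounced."""
    return [m.group(1) for line in log_lines if (m := BOUNCE.search(line))]

sample = [
    "... VM USAROVRTVZ-13 67a51ec0-659d-4372-b4f1-85a56e6c0992 "
    "moved from PoweringDown --> Up",
    "... Message: Shutdown of VM USAROVRTVZ-13 failed.",
]
print(failed_shutdowns(sample))  # ['USAROVRTVZ-13']
```

When VMs bounce like this, a common cause is that the guest never acts on the shutdown request (e.g. no running guest agent or acpid inside the stateless guests), so checking those services in the guest image may be worthwhile.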
power outage: HA VMs not restarted
by Yuriy Demchenko
Hi,
I'm running ovirt-3.2.2-el6 on 18 el6 hosts with FC SAN storage and 46 HA
VMs in 2 datacenters (3 hosts use different storage with no
connectivity to the first storage, hence the second DC).
Recently (2014-05-17) I had a double power outage: the first blackout at
00:16, power back at ~00:19; the second blackout at 00:26, power back at 10:06.
When everything finally came back up (after approx. 10:16), only 2 VMs out of
46 were restarted.
Browsing the engine log, I saw failed restart attempts for almost all VMs
after the first blackout with the error 'Failed with error ENGINE and code
5001', but after the second blackout I saw no attempts to restart VMs; the
only error was 'connect timeout' (probably to srv5 - that host
physically died after the blackouts).
I can't figure out why the HA VMs were not restarted. Please advise.
Engine and (supposedly) SPM host logs are attached.
--
Yuriy Demchenko
oVirt 3.5.0 Alpha postponed due to blocker
by Sandro Bonazzola
Hi,
We're postponing oVirt 3.5.0 Alpha release due to the following blocker bug discovered while running basic sanity tests:
Bug 1098539 - failed to create VM if no NUMA set is specified
On a clean install no VM can be created.
New tentative release date for 3.5.0 Alpha is 2014-05-20.
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[ANN] oVirt 3.5.0 Alpha is now available for testing
by Sandro Bonazzola
The oVirt team is pleased to announce that the 3.5.0 Alpha is now
available for testing.
Feel free to join us testing it!
You'll find all needed info for installing it on the release notes page,
already available on the wiki [1].
A new oVirt Node build will be available soon as well.
Please note that mirrors will need a couple of days before being synchronized:
they currently have the first build we prepared on 2014-05-16.
If you want to be sure to use the latest rpms and don't want to wait for the mirrors,
you can edit /etc/yum.repos.d/ovirt-3.5.repo, commenting out the mirror line and
removing the comment on the baseurl line.
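As a sketch of that edit (the repo stanza below is illustrative, not the exact file contents; it assumes the usual `#baseurl=` / `mirrorlist=` line names):

```python
import re

def prefer_baseurl(repo_text):
    """Comment out mirrorlist lines and uncomment baseurl lines in a
    yum .repo file, so yum fetches straight from the primary server."""
    out = []
    for line in repo_text.splitlines():
        if re.match(r"\s*mirrorlist\s*=", line):
            out.append("#" + line)                     # disable the mirror list
        elif re.match(r"\s*#\s*baseurl\s*=", line):
            out.append(re.sub(r"^\s*#\s*", "", line))  # enable baseurl
        else:
            out.append(line)
    return "\n".join(out)

# Illustrative stanza only -- check the real file before editing it.
sample = """[ovirt-3.5]
name=oVirt 3.5
#baseurl=http://example.org/ovirt-3.5/rpm/el$releasever/
mirrorlist=http://example.org/mirrorlist-ovirt-3.5-el$releasever
enabled=1
"""
print(prefer_baseurl(sample))
```

The same swap can of course be done by hand in an editor; the point is just which line gets the `#`.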
Known Issues
VDSM packages released with the first 3.5.0 alpha have a lower version than the ones we had in 3.4.1, so they won't be updated.
You can't add hosts to 3.5 clusters until a new VDSM build with 3.5 compatibility level is released (All in One won't work).
[1] http://www.ovirt.org/OVirt_3.5_Release_Notes
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Template creation, Network overhead?
by Daniel Helgenberger
Greetings,
I have a question concerning the template creation process.
I am creating a template from a Windows server with a 1TB thin-provisioned
boot disk; about 9GB of which is actually used.
The data storage domain copied from and to is the same NFS domain.
It seems to me that after the initial write process, the whole 1TB of zeros is
read back by the SPM? I can confirm my storage appliance is running at
100% network interface speed with almost zero disk IO.
Also, sampling done on the SPM with iftop reveals reading (RX) from my
storage appliance at interface speed (I watched it go up to 75GB right
now).
Is this normal behavior? Is there a way to cancel the template creation?
Thanks,
--
Daniel Helgenberger
m box bewegtbild GmbH
P: +49/30/2408781-22
F: +49/30/2408781-10
ACKERSTR. 19
D-10115 BERLIN
www.m-box.de www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767
Windows 7 guest does not synchronize sound on oVirt 3.5
by Felipe Herrera Martinez
Hi all,
We are experiencing a problem when accessing a Windows 7 guest over the virt-viewer client: although I increase/decrease the volume on the Windows guest speaker, it always remains
at the same level. It seems there is no synchronization with the client.
Any clue about the problem? Is it a known one?
BRgds,
F.
Migrate between 3.3.4 and 3.4.1
by Peter Harris
I am trying to migrate my VMs from my old host running ovirt 3.3.4 to a new
setup running 3.4.1. My basic set up is:
OLD
vmhost1 - ovirt 3.3.4 - NFS storages
NEW
ovirtmgr - ovirt 3.4.1 (virt only setup) - gluster storage domains
vmhost2 - cluster1
vmhost3 - cluster2
vmhost4 - cluster2
vmhost5 - cluster3
vmhost6 - cluster3
My gluster volumes are created via the gluster command line.
All hosts are running Scientific Linux 6.5, and the intention is to migrate
vmhost1 to cluster1 in the new environment.
I have an NFS export storage domain which I am using to migrate VMs from
vmhost1.
Volume Name: vol-vminf
Type: Distributed-Replicate
Volume ID: b0b456bb-76e9-42e7-bb95-3415db79d631
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: vmhost3:/storage/inf/br-inf
Brick2: vmhost4:/storage/inf/br-inf
Brick3: vmhost5:/storage/inf/br-inf
Brick4: vmhost6:/storage/inf/br-inf
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
server.allow-insecure: on
Volume Name: vol-vmimages
Type: Distribute
Volume ID: 91e2cf8b-2662-4c26-b937-84b8f5b62e2b
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: vmhost3:/storage/vmimages/br-vmimages
Brick2: vmhost3:/storage/vmimages/br-vmimages
Brick3: vmhost3:/storage/vmimages/br-vmimages
Brick4: vmhost3:/storage/vmimages/br-vmimages
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
server.allow-insecure: on
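Since vdsm runs as uid/gid 36 (vdsm:kvm), the storage.owner-* options above are worth sanity-checking on every volume. A small hypothetical parser for `gluster volume info` output:

```python
def parse_volume_options(volume_info_text):
    """Parse the 'Options Reconfigured' section of `gluster volume info`
    output into an option -> value dict."""
    opts, in_opts = {}, False
    for line in volume_info_text.splitlines():
        if line.strip() == "Options Reconfigured:":
            in_opts = True        # everything after this line is options
            continue
        if in_opts and ":" in line:
            key, _, value = line.partition(":")
            opts[key.strip()] = value.strip()
    return opts

# Sample taken from the volume listing above (abbreviated).
sample = """Volume Name: vol-vminf
Status: Started
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
server.allow-insecure: on
"""
opts = parse_volume_options(sample)
# vdsm expects bricks owned by uid/gid 36 (vdsm:kvm)
assert opts["storage.owner-uid"] == "36" and opts["storage.owner-gid"] == "36"
```

This only checks the configured options, not the actual ownership on the brick directories, which can drift independently.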
I have had many varying failure results, which I have tried to match up with
threads here; I am now a bit stuck and would appreciate any help.
I quite regularly cannot import VMs, and I now cannot create a new disk for
a new VM (no import). The error always seems to boil down to the following
error in engine.log (the following is specifically from the creation of a new
disk image on a gluster storage domain):
2014-05-20 08:51:21,136 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-9) [4637af09] Correlation ID: 2b0b55ab, Job ID:
1a583643-e28a-4f09-a39d-46e4fc6d20b8, Call Stack: null, Custom Event ID:
-1, Message: Add-Disk operation of rhel-7_Disk1 was initiated on VM rhel-7
by peter.harris.
2014-05-20 08:51:21,137 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(ajp--127.0.0.1-8702-9) [4637af09] BaseAsyncTask::startPollingTask:
Starting to poll task 720b4d92-1425-478c-8351-4ff827b8f728.
2014-05-20 08:51:28,077 INFO [org.ovirt.engine.core.bll.AsyncTaskManager]
(DefaultQuartzScheduler_Worker-19) Polling and updating Async Tasks: 1
tasks, 1 tasks to poll now
2014-05-20 08:51:28,084 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
(DefaultQuartzScheduler_Worker-19) Failed in HSMGetAllTasksStatusesVDS
method
2014-05-20 08:51:28,085 INFO [org.ovirt.engine.core.bll.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-19) SPMAsyncTask::PollTask: Polling task
720b4d92-1425-478c-8351-4ff827b8f728 (Parent Command AddDisk, Parameters
Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned
status finished, result 'cleanSuccess'.
2014-05-20 08:51:28,104 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask]
(DefaultQuartzScheduler_Worker-19) BaseAsyncTask::LogEndTaskFailure: Task
720b4d92-1425-478c-8351-4ff827b8f728 (Parent Command AddDisk, Parameters
Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended
with failure:^M
-- Result: cleanSuccess^M
-- Message: VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = [Errno 2] No such file or directory:
'/rhev/data-center/06930787-a091-49a3-8217-1418c5a9881e/967aec77-46d5-418b-8979-d0a86389a77b/images/7726b997-7e58-45f8-a5a6-9cb9a689a45a',
code = 100,^M
-- Exception: VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = [Errno 2] No such file or directory:
'/rhev/data-center/06930787-a091-49a3-8217-1418c5a9881e/967aec77-46d5-418b-8979-d0a86389a77b/images/7726b997-7e58-45f8-a5a6-9cb9a689a45a',
code = 100
Certainly, if I check
/rhev/data-center/06930787-a091-49a3-8217-1418c5a9881e/ on the SPM server,
there is no 967aec77-46d5-418b-8979-d0a86389a77b subdirectory. The only
elements I have are NFS mounts.
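For reference, the path in the error is assembled from the data-center, storage-domain, and image UUIDs. A small hypothetical helper to rebuild and probe it on the SPM (using the UUIDs from the log above):

```python
import os

def image_path(dc_uuid, sd_uuid, img_uuid, root="/rhev/data-center"):
    """Build the path the engine error refers to:
    <root>/<data-center>/<storage-domain>/images/<image>."""
    return os.path.join(root, dc_uuid, sd_uuid, "images", img_uuid)

p = image_path("06930787-a091-49a3-8217-1418c5a9881e",
               "967aec77-46d5-418b-8979-d0a86389a77b",
               "7726b997-7e58-45f8-a5a6-9cb9a689a45a")
# If the storage-domain component itself is missing, the gluster domain
# is not attached/mounted on this host, matching the ENOENT in the log.
sd_dir = os.path.dirname(os.path.dirname(p))
print(sd_dir, "exists:", os.path.isdir(sd_dir))
print(p, "exists:", os.path.isdir(p))
```

Run on the SPM, this distinguishes "the whole storage domain is absent" (a mount/attach problem) from "only the image directory is absent".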
There appear to be no errors in the SPM vdsm.log for this disk.
=============
When I tried to import the VM (the one that I then tried to create from
scratch above), I had the following errors in the SPM vdsm log:
Thread-2220::DEBUG::2014-05-20
08:35:05,255::task::595::TaskManager.Task::(_updateState)
Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::moving from state init ->
state preparing
Thread-2220::INFO::2014-05-20
08:35:05,255::logUtils::44::dispatcher::(wrapper) Run and protect:
deleteImage(sdUUID='615647e2-1f60-47e1-8e55-be9f7ead6f15',
spUUID='06930787-a091-49a3-8217-1418c5a9881e',
imgUUID='80ed133c-fd72-4d35-aae5-e1313be3cf23', postZero='false',
force='false')
Thread-2220::DEBUG::2014-05-20
08:35:05,255::resourceManager::198::ResourceManager.Request::(__init__)
ResName=`Storage.80ed133c-fd72-4d35-aae5-e1313be3cf23`ReqID=`499de454-c563-4156-a3ed-13b7eb9defa6`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '1496' at 'deleteImage'
Thread-2220::DEBUG::2014-05-20
08:35:05,255::resourceManager::542::ResourceManager::(registerResource)
Trying to register resource 'Storage.80ed133c-fd72-4d35-aae5-e1313be3cf23'
for lock type 'exclusive'
Thread-2220::DEBUG::2014-05-20
08:35:05,255::resourceManager::601::ResourceManager::(registerResource)
Resource 'Storage.80ed133c-fd72-4d35-aae5-e1313be3cf23' is free. Now
locking as 'exclusive' (1 active user)
Thread-2220::DEBUG::2014-05-20
08:35:05,256::resourceManager::238::ResourceManager.Request::(grant)
ResName=`Storage.80ed133c-fd72-4d35-aae5-e1313be3cf23`ReqID=`499de454-c563-4156-a3ed-13b7eb9defa6`::Granted
request
Thread-2220::DEBUG::2014-05-20
08:35:05,256::task::827::TaskManager.Task::(resourceAcquired)
Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::_resourcesAcquired:
Storage.80ed133c-fd72-4d35-aae5-e1313be3cf23 (exclusive)
Thread-2220::DEBUG::2014-05-20
08:35:05,256::task::990::TaskManager.Task::(_decref)
Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::ref 1 aborting False
Thread-2220::DEBUG::2014-05-20
08:35:05,256::resourceManager::198::ResourceManager.Request::(__init__)
ResName=`Storage.615647e2-1f60-47e1-8e55-be9f7ead6f15`ReqID=`73f79517-f13a-4e5b-999a-6f1994d2818a`::Request
was made in '/usr/share/vdsm/storage/hsm.py' line '1497' at 'deleteImage'
Thread-2220::DEBUG::2014-05-20
08:35:05,256::resourceManager::542::ResourceManager::(registerResource)
Trying to register resource 'Storage.615647e2-1f60-47e1-8e55-be9f7ead6f15'
for lock type 'shared'
Thread-2220::DEBUG::2014-05-20
08:35:05,257::resourceManager::601::ResourceManager::(registerResource)
Resource 'Storage.615647e2-1f60-47e1-8e55-be9f7ead6f15' is free. Now
locking as 'shared' (1 active user)
Thread-2220::DEBUG::2014-05-20
08:35:05,257::resourceManager::238::ResourceManager.Request::(grant)
ResName=`Storage.615647e2-1f60-47e1-8e55-be9f7ead6f15`ReqID=`73f79517-f13a-4e5b-999a-6f1994d2818a`::Granted
request
Thread-2220::DEBUG::2014-05-20
08:35:05,257::task::827::TaskManager.Task::(resourceAcquired)
Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::_resourcesAcquired:
Storage.615647e2-1f60-47e1-8e55-be9f7ead6f15 (shared)
Thread-2220::DEBUG::2014-05-20
08:35:05,257::task::990::TaskManager.Task::(_decref)
Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::ref 1 aborting False
Thread-2220::ERROR::2014-05-20
08:35:05,266::hsm::1502::Storage.HSM::(deleteImage) Empty or not found
image 80ed133c-fd72-4d35-aae5-e1313be3cf23 in SD
615647e2-1f60-47e1-8e55-be9f7ead6f15.
{'1f41529a-e02e-4cd8-987c-b1ea4fcba2be':
ImgsPar(imgs=('290f5cdf-b5d7-462b-958d-d41458a26bf6',), parent=None),
'1748a8f0-8668-4f21-9b26-d2e3b180e35b':
ImgsPar(imgs=('67fd552b-8b3d-4117-82d2-e801bb600992',), parent=None)}
Thread-2220::ERROR::2014-05-20
08:35:05,266::task::866::TaskManager.Task::(_setError)
Task=`15bc07b5-201f-4bba-bf5f-f79eb92c6a61`::Unexpected error
When I installed/set up ovirt-engine, I chose NFS as the file system.
I am clearly doing something weird.
Call for Papers Deadline in 1 week: OSCON Open Cloud Day
by Brian Proffitt
Conference: OSCON Open Cloud Day
Information: The Open Cloud Day at OSCON is a look at the “state of cloud” in 2014, gathering industry practitioners to give their take on the state of public and private cloud, IaaS, and PaaS platforms, and where the industry is going in 2014 and beyond.
Date: July 21, 2014
Location: Portland, OR
Call for Papers Deadline: May 27, 2014
Call for Papers URL: http://www.oscon.com/oscon2014/public/cfp/337
--
Brian Proffitt
oVirt Community Manager
Project Atomic Community Lead
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC