[Users] oVirt Weekly Meeting Minutes -- 2013-07-17

Minutes: http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-17-14.00.html
Minutes (text): http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-17-14.00.txt
Log: http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-17-14.00.log.html

============================
#ovirt: oVirt Weekly Meeting
============================

Meeting started by mburns at 14:00:11 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-17-14.00.log.html .

Meeting summary
---------------
* agenda and roll call (mburns, 14:00:23)
  * 3.3 status update (mburns, 14:00:30)
  * Workshops and Conferences (mburns, 14:00:47)
  * infra update (mburns, 14:00:50)
  * Other Topics (mburns, 14:00:53)

* 3.3 status update (mburns, 14:03:07)
  * otopi, ovirt-host-deploy, ovirt-image-uploader, ovirt-iso-uploader,
    ovirt-log-collector all posted (mburns, 14:04:37)
  * ovirt-node base images available; will produce vdsm images once vdsm
    is available (mburns, 14:04:55)
  * vdsm rc1 does not include glance (mburns, 14:13:21)
  * respin to happen to include glance changes (mburns, 14:13:30)
  * ACTION: fsimonce to rebuild vdsm rc2 to include glance (mburns, 14:13:47)
  * ovirt scheduler (engine impact only) to merge today/tomorrow
    (mburns, 14:14:17)
  * scheduling api needs 1.5-2 weeks (not a beta blocker) (mburns, 14:14:34)
  * cloud-init waiting on maintainer acceptance (mpastern) (mburns, 14:16:27)
  * neutron is merged (mburns, 14:18:34)
  * guest-agent packages are pending, not blockers for beta (mburns, 14:22:28)
  * ovirt-engine build to happen after scheduler code gets in (mburns, 14:26:05)
  * reports were delivered to mgoldboi; mburns to get them and upload to
    ovirt.org (mburns, 14:26:24)
  * plan going forward (mburns, 14:47:18)
  * merge scheduling stuff today/tomorrow (mburns, 14:47:30)
  * build new vdsm and engine (mburns, 14:47:37)
  * post everything up after build tomorrow (mburns, 14:47:45)
  * send announcement that the beta is available, with a disclaimer that
    many features are not yet available in the REST API (mburns, 14:48:23)

* conferences and workshops (mburns, 14:49:05)
  * KVM Forum/LinuxCon EU/CloudOpen EU CFP closes on 21 July (mburns, 14:49:39)
  * KVM Forum/LinuxCon EU/CloudOpen EU takes place on 21-23 October
    (mburns, 14:50:03)
  * planning an oVirt Developers meet-up during this timeframe
    (mburns, 14:51:36)

* Infra update (mburns, 14:51:57)
  * there was no infra meeting this week due to a lack of people
    (ewoud, 14:53:29)
  * fedora 19 slaves on rackspace servers were added, and more and more
    of their configuration is being puppetized (ewoud, 14:54:23)
  * infra team looking for a new project coordinator; more details next
    week (mburns, 14:56:01)

* 3.3 readiness (revisited) (mburns, 14:56:22)
  * new bug found in engine -- "New Host" does not work (mburns, 14:56:37)
  * trivial fix (mburns, 14:56:40)
  * oschreib to wait on engine build until the fix is merged (mburns, 14:56:49)

* other topics (mburns, 14:56:55)

Meeting ended at 14:59:17 UTC.

Action Items
------------
* fsimonce to rebuild vdsm rc2 to include glance

Action Items, by person
-----------------------
* fsimonce
  * fsimonce to rebuild vdsm rc2 to include glance
* **UNASSIGNED**
  * (none)

People Present (lines said)
---------------------------
* mburns (116)
* danken (26)
* mskrivanek (21)
* abonas (9)
* oschreib (9)
* doron (8)
* ofri (8)
* lvernia (6)
* ewoud (4)
* fsimonce (4)
* hetz (4)
* mpastern (4)
* ovirtbot (3)
* sgotliv (2)
* dustins_ntap (1)
* ydary (1)
* ecohen (1)
* sahina (1)
* ofrenkel (1)

Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot

----- Original Message -----
| From: "Mike Burns" <mburns@redhat.com>
| To: board@ovirt.org, "users" <users@ovirt.org>
| Sent: Wednesday, July 17, 2013 6:00:01 PM
| Subject: [Users] oVirt Weekly Meeting Minutes -- 2013-07-17
|
| [full minutes quoted above -- snipped]
|
| * ovirt-node base images available, will produce vdsm images once vdsm
|   is available (mburns, 14:04:55)

Please be sure to pick up the latest mom (0.3.2.1 or later, which should
be built today). This mom version includes vital ballooning support that
VDSM now needs.

| [remainder of quoted minutes snipped]
| _______________________________________________
| Users mailing list
| Users@ovirt.org
| http://lists.ovirt.org/mailman/listinfo/users

Hey Guys,

I am running oVirt 3.1, and I am getting the following in dmesg; I have
pasted my VG info below. We are running iSCSI LUNs and this is a node
complaining in the log. Our EqualLogic storage was close to full, and this
started happening. I have extended it, and it is thin provisioned, but I am
not sure if a pvresize is needed. If so, can it be done while online? I am
trying not to take down this cluster; there are several heavily used VMs on
it. The VMs seem to be running OK for now, but this is filling up the logs
pretty quickly, as it appears to keep retrying the connections and
releasing the resources. All help is appreciated.

Thanks for all your help guys; feel free to email me directly if I can
provide any more info.

Log Snippet:

Thread-550892::DEBUG::2013-07-17 19:06:14,335::resourceManager::565::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.69d66cf5-9426-4385-be6c-c55e2bd40617', Clearing records.
Thread-550892::DEBUG::2013-07-17 19:06:14,335::task::978::TaskManager.Task::(_decref) Task=`03810e67-5dab-4193-8cbd-c633c8c25fa4`::ref 0 aborting False
Thread-550892::DEBUG::2013-07-17 19:06:14,336::task::588::TaskManager.Task::(_updateState) Task=`7379fc72-5137-4432-861c-85b1f33ba843`::moving from state init -> state preparing
Thread-550892::INFO::2013-07-17 19:06:14,336::logUtils::37::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='7cf8a585-34a8-470c-9b40-9d486456fcea', spUUID='2ecfa6dd-a1fa-428c-9d04-db17c6c4491d', imgUUID='8cbc183a-0afe-497a-a54b-ec5f03b7fe16', volUUID='bcb8d500-5aa4-404a-9be2-9fb84bc87300', options=None)
Thread-550892::DEBUG::2013-07-17 19:06:14,336::resourceManager::175::ResourceManager.Request::(__init__) ResName=`Storage.7cf8a585-34a8-470c-9b40-9d486456fcea`ReqID=`7411edba-20b8-4855-8271-b6aa52a9203a`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at 'registerResource'
Thread-550892::DEBUG::2013-07-17 19:06:14,336::resourceManager::486::ResourceManager::(registerResource) Trying to register resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea' for lock type 'shared'
Thread-550892::DEBUG::2013-07-17 19:06:14,336::resourceManager::528::ResourceManager::(registerResource) Resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea' is free. Now locking as 'shared' (1 active user)
Thread-550892::DEBUG::2013-07-17 19:06:14,336::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.7cf8a585-34a8-470c-9b40-9d486456fcea`ReqID=`7411edba-20b8-4855-8271-b6aa52a9203a`::Granted request
Thread-550892::DEBUG::2013-07-17 19:06:14,337::task::817::TaskManager.Task::(resourceAcquired) Task=`7379fc72-5137-4432-861c-85b1f33ba843`::_resourcesAcquired: Storage.7cf8a585-34a8-470c-9b40-9d486456fcea (shared)
Thread-550892::DEBUG::2013-07-17 19:06:14,337::task::978::TaskManager.Task::(_decref) Task=`7379fc72-5137-4432-861c-85b1f33ba843`::ref 1 aborting False
Thread-550892::INFO::2013-07-17 19:06:14,338::logUtils::39::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '31138512896', 'apparentsize': '31138512896'}
Thread-550892::DEBUG::2013-07-17 19:06:14,338::task::1172::TaskManager.Task::(prepare) Task=`7379fc72-5137-4432-861c-85b1f33ba843`::finished: {'truesize': '31138512896', 'apparentsize': '31138512896'}
Thread-550892::DEBUG::2013-07-17 19:06:14,338::task::588::TaskManager.Task::(_updateState) Task=`7379fc72-5137-4432-861c-85b1f33ba843`::moving from state preparing -> state finished
Thread-550892::DEBUG::2013-07-17 19:06:14,338::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea': < ResourceRef 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea', isValid: 'True' obj: 'None'>}
Thread-550892::DEBUG::2013-07-17 19:06:14,338::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-550892::DEBUG::2013-07-17 19:06:14,338::resourceManager::538::ResourceManager::(releaseResource) Trying to release resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea'
Thread-550892::DEBUG::2013-07-17 19:06:14,338::resourceManager::553::ResourceManager::(releaseResource) Released resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea' (0 active users)
Thread-550892::DEBUG::2013-07-17 19:06:14,338::resourceManager::558::ResourceManager::(releaseResource) Resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea' is free, finding out if anyone is waiting for it.
Thread-550892::DEBUG::2013-07-17 19:06:14,339::resourceManager::565::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea', Clearing records.
Thread-550892::DEBUG::2013-07-17 19:06:14,339::task::978::TaskManager.Task::(_decref) Task=`7379fc72-5137-4432-861c-85b1f33ba843`::ref 0 aborting False

Dmesg Snippet:

__ratelimit: 28 callbacks suppressed
end_request: I/O error, dev dm-64, sector 5769088
end_request: I/O error, dev dm-64, sector 5769200
end_request: I/O error, dev dm-64, sector 5507072
end_request: I/O error, dev dm-64, sector 5507080
end_request: I/O error, dev dm-64, sector 5507072
end_request: I/O error, dev dm-64, sector 5506944
end_request: I/O error, dev dm-64, sector 5507056
end_request: I/O error, dev dm-64, sector 1312768
end_request: I/O error, dev dm-64, sector 1312776
end_request: I/O error, dev dm-64, sector 1312768
end_request: I/O error, dev dm-64, sector 6031232
end_request: I/O error, dev dm-64, sector 6031344
end_request: I/O error, dev dm-64, sector 5769216
end_request: I/O error, dev dm-64, sector 5769224
end_request: I/O error, dev dm-64, sector 5769216
end_request: I/O error, dev dm-64, sector 6293376
end_request: I/O error, dev dm-64, sector 6293488
end_request: I/O error, dev dm-64, sector 6031360
end_request: I/O error, dev dm-64, sector 6031368
end_request: I/O error, dev dm-64, sector 6031360
end_request: I/O error, dev dm-64, sector 8390528
end_request: I/O error, dev dm-64, sector 8390640
end_request: I/O error, dev dm-64, sector 6293504
end_request: I/O error, dev dm-64, sector 6293512
end_request: I/O error, dev dm-64, sector 6293504
__ratelimit: 25 callbacks suppressed

Thanks,
Matt Curry
Matt@MattCurry.com
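The online-resize question above can be sketched as the following generic LVM-on-iSCSI procedure. This is a sketch under assumptions, not something confirmed for this cluster: it assumes the LUN sits under device-mapper multipath, and `mpatha` is a placeholder map name, not taken from this environment.

```shell
# 1. After extending the volume on the array, rescan the iSCSI sessions so
#    the kernel sees the new LUN size (online; does not interrupt I/O):
iscsiadm -m session --rescan

# 2. If the LUN is under device-mapper multipath, resize the map as well
#    ("mpatha" is a placeholder map name):
multipathd -k"resize map mpatha"

# 3. pvresize works online: it grows the PV to the new device size, making
#    the extra space available to the volume group:
pvresize /dev/mapper/mpatha

# 4. Confirm the VG now shows the additional free space:
vgs

# To map the failing "dm-64" from the dmesg snippet back to a named device:
dmsetup info /dev/dm-64
lsblk /dev/dm-64
```

If the I/O errors persist after the rescan, they may predate the extension; checking `multipath -ll` for failed paths would be a reasonable next step.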
________________________________
This is a PRIVATE message. If you are not the intended recipient, please
delete without copying and kindly advise us by e-mail of the mistake in
delivery. NOTE: Regardless of content, this e-mail shall not operate to
bind SKOPOS to any order or other contract unless pursuant to explicit
written agreement or government initiative expressly permitting the use of
e-mail for such purpose.

Forgot one thing, the paste bin link!

http://pastebin.com/3t7E41JK

Date: Wednesday, July 17, 2013 7:08 PM
To: users <users@ovirt.org>
Subject: HELP, ISCSI Freakin!

[original message quoted above -- snipped]
<div>Thread-550892::DEBUG::2013-07-17 19:06:14,337::task::978::TaskManager.= Task::(_decref) Task=3D`7379fc72-5137-4432-861c-85b1f33ba843`::ref 1 aborti= ng False</div> <div>Thread-550892::INFO::2013-07-17 19:06:14,338::logUtils::39::dispatcher= ::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '= 31138512896', 'apparentsize': '31138512896'}</div> <div>Thread-550892::DEBUG::2013-07-17 19:06:14,338::task::1172::TaskManager= .Task::(prepare) Task=3D`7379fc72-5137-4432-861c-85b1f33ba843`::finished: {= 'truesize': '31138512896', 'apparentsize': '31138512896'}</div> <div>Thread-550892::DEBUG::2013-07-17 19:06:14,338::task::588::TaskManager.= Task::(_updateState) Task=3D`7379fc72-5137-4432-861c-85b1f33ba843`::moving = from state preparing -> state finished</div> <div>Thread-550892::DEBUG::2013-07-17 19:06:14,338::resourceManager::809::R= esourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {= 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea': < ResourceRef 'Storage.7= cf8a585-34a8-470c-9b40-9d486456fcea', isValid: 'True' obj: 'None'>}</div> <div>Thread-550892::DEBUG::2013-07-17 19:06:14,338::resourceManager::844::R= esourceManager.Owner::(cancelAll) Owner.cancelAll requests {}</div> <div>Thread-550892::DEBUG::2013-07-17 19:06:14,338::resourceManager::538::R= esourceManager::(releaseResource) Trying to release resource 'Storage.7cf8a= 585-34a8-470c-9b40-9d486456fcea'</div> <div>Thread-550892::DEBUG::2013-07-17 19:06:14,338::resourceManager::553::R= esourceManager::(releaseResource) Released resource 'Storage.7cf8a585-34a8-= 470c-9b40-9d486456fcea' (0 active users)</div> <div>Thread-550892::DEBUG::2013-07-17 19:06:14,338::resourceManager::558::R= esourceManager::(releaseResource) Resource 'Storage.7cf8a585-34a8-470c-9b40= -9d486456fcea' is free, finding out if anyone is waiting for it.</div> <div>Thread-550892::DEBUG::2013-07-17 19:06:14,339::resourceManager::565::R= esourceManager::(releaseResource) No one is waiting for resource 
'Storage.7= cf8a585-34a8-470c-9b40-9d486456fcea', Clearing records.</div> <div>Thread-550892::DEBUG::2013-07-17 19:06:14,339::task::978::TaskManager.= Task::(_decref) Task=3D`7379fc72-5137-4432-861c-85b1f33ba843`::ref 0 aborti= ng False</div> </div> <div><br> </div> <div><br> </div> <div><b><br> </b></div> <div><b>Dmesg Snippet:</b></div> <div> <div>__ratelimit: 28 callbacks suppressed</div> <div>end_request: I/O error, dev dm-64, sector 5769088</div> <div>end_request: I/O error, dev dm-64, sector 5769200</div> <div>end_request: I/O error, dev dm-64, sector 5507072</div> <div>end_request: I/O error, dev dm-64, sector 5507080</div> <div>end_request: I/O error, dev dm-64, sector 5507072</div> <div>end_request: I/O error, dev dm-64, sector 5506944</div> <div>end_request: I/O error, dev dm-64, sector 5507056</div> <div>end_request: I/O error, dev dm-64, sector 1312768</div> <div>end_request: I/O error, dev dm-64, sector 1312776</div> <div>end_request: I/O error, dev dm-64, sector 1312768</div> <div>end_request: I/O error, dev dm-64, sector 6031232</div> <div>end_request: I/O error, dev dm-64, sector 6031344</div> <div>end_request: I/O error, dev dm-64, sector 5769216</div> <div>end_request: I/O error, dev dm-64, sector 5769224</div> <div>end_request: I/O error, dev dm-64, sector 5769216</div> <div>end_request: I/O error, dev dm-64, sector 6293376</div> <div>end_request: I/O error, dev dm-64, sector 6293488</div> <div>end_request: I/O error, dev dm-64, sector 6031360</div> <div>end_request: I/O error, dev dm-64, sector 6031368</div> <div>end_request: I/O error, dev dm-64, sector 6031360</div> <div>end_request: I/O error, dev dm-64, sector 8390528</div> <div>end_request: I/O error, dev dm-64, sector 8390640</div> <div>end_request: I/O error, dev dm-64, sector 6293504</div> <div>end_request: I/O error, dev dm-64, sector 6293512</div> <div>end_request: I/O error, dev dm-64, sector 6293504</div> <div>__ratelimit: 25 callbacks suppressed</div> </div> <div><br> </div> 
<div><br> </div> <div>Thanks,</div> <div>Matt Curry</div> <div><a href=3D"mailto:Matt@MattCurry.com">Matt@MattCurry.com</a></div> <div><br> </div> <br> </div> </div> </span><br> <hr> <font color=3D"Gray" face=3D"Arial" size=3D"1">This is a PRIVATE message. I= f you are not the intended recipient, please delete without copying and kin= dly advise us by e-mail of the mistake in delivery. NOTE: Regardless of con= tent, this e-mail shall not operate to bind SKOPOS to any order or other contract unless pursuant to explicit wri= tten agreement or government initiative expressly permitting the use of e-m= ail for such purpose.</font> </body> </html> --_000_9272B85912E97F4486906561F9257C570167600AAUSP01DAG0307co_--

----- Original Message -----
Forgot one thing, the paste bin link!
Date: Wednesday, July 17, 2013 7:08 PM
To: users <users@ovirt.org>
Subject: HELP, ISCSI Freakin!
Hey Guys,
I am running oVirt 3.1, and I am getting the following in dmesg; I have pasted my VG info below. We are running iSCSI LUNs, and this is a node complaining in the log. Our EqualLogic storage was close to full when this started happening. I have since extended it (it is thin provisioned), but I am not sure whether a pvresize is needed. If so, can it be done while online? I am trying not to take down this cluster; there are several heavily used VMs on it. The VMs seem to be running OK for now, but this is filling up the logs pretty quickly, as it appears to keep retrying the connections and releasing the resources. All help is appreciated.

Looking at your VG info you have at least 450GB free on every storage domain, so free space doesn't seem like the issue. You have real I/O errors there. The problematic device is:

/dev/mapper/36090a028108c2e4a2214151d000000c9

and it probably belongs to VG 2b93dc46-0db7-4a2a-a494-a294452c9d2f. If this device/VG is no longer relevant, then run "dmsetup remove /dev/mapper/36090a028108c2e4a2214151d000000c9". If that doesn't work, you can try running it with '-f' (force). This will remove the device from the system. If it is relevant, then you need to fix the problem and make sure the system has access to it.
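On the pvresize question: a dry-run sketch, under the assumption that the WWID above is the device that was extended, of the usual sequence for growing an iSCSI-backed PV online after the LUN has been enlarged on the array (pvresize itself is safe to run on an active VG). The run() helper only prints each step so the sequence can be reviewed before executing anything for real.

```shell
# Dry-run sketch: grow an iSCSI-backed PV online after extending the LUN.
# run() only echoes each step; replace 'echo' with 'eval' (or run the
# commands by hand) once the sequence has been checked for this cluster.
run() { echo "+ $1"; }

# 1. Make the kernel re-read the LUN size on the iSCSI sessions:
run 'iscsiadm -m session --rescan'

# 2. Let the multipath map pick up the new size:
run 'multipathd -k"resize map 36090a028108c2e4a2214151d000000c9"'

# 3. Grow the PV to fill the device; works while the VG is in use:
run 'pvresize /dev/mapper/36090a028108c2e4a2214151d000000c9'
```

The device path and WWID are taken from the reply above; substitute your own map name as reported by `multipath -ll`.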
Thanks for all your help guys; and feel free to email me directly if I can provide any more info.
Log Snippet:

Thread-550892::DEBUG::2013-07-17 19:06:14,335::resourceManager::565::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.69d66cf5-9426-4385-be6c-c55e2bd40617', Clearing records.
Thread-550892::DEBUG::2013-07-17 19:06:14,335::task::978::TaskManager.Task::(_decref) Task=`03810e67-5dab-4193-8cbd-c633c8c25fa4`::ref 0 aborting False
Thread-550892::DEBUG::2013-07-17 19:06:14,336::task::588::TaskManager.Task::(_updateState) Task=`7379fc72-5137-4432-861c-85b1f33ba843`::moving from state init -> state preparing
Thread-550892::INFO::2013-07-17 19:06:14,336::logUtils::37::dispatcher::(wrapper) Run and protect: getVolumeSize(sdUUID='7cf8a585-34a8-470c-9b40-9d486456fcea', spUUID='2ecfa6dd-a1fa-428c-9d04-db17c6c4491d', imgUUID='8cbc183a-0afe-497a-a54b-ec5f03b7fe16', volUUID='bcb8d500-5aa4-404a-9be2-9fb84bc87300', options=None)
Thread-550892::DEBUG::2013-07-17 19:06:14,336::resourceManager::175::ResourceManager.Request::(__init__) ResName=`Storage.7cf8a585-34a8-470c-9b40-9d486456fcea`ReqID=`7411edba-20b8-4855-8271-b6aa52a9203a`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at 'registerResource'
Thread-550892::DEBUG::2013-07-17 19:06:14,336::resourceManager::486::ResourceManager::(registerResource) Trying to register resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea' for lock type 'shared'
Thread-550892::DEBUG::2013-07-17 19:06:14,336::resourceManager::528::ResourceManager::(registerResource) Resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea' is free. Now locking as 'shared' (1 active user)
Thread-550892::DEBUG::2013-07-17 19:06:14,336::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.7cf8a585-34a8-470c-9b40-9d486456fcea`ReqID=`7411edba-20b8-4855-8271-b6aa52a9203a`::Granted request
Thread-550892::DEBUG::2013-07-17 19:06:14,337::task::817::TaskManager.Task::(resourceAcquired) Task=`7379fc72-5137-4432-861c-85b1f33ba843`::_resourcesAcquired: Storage.7cf8a585-34a8-470c-9b40-9d486456fcea (shared)
Thread-550892::DEBUG::2013-07-17 19:06:14,337::task::978::TaskManager.Task::(_decref) Task=`7379fc72-5137-4432-861c-85b1f33ba843`::ref 1 aborting False
Thread-550892::INFO::2013-07-17 19:06:14,338::logUtils::39::dispatcher::(wrapper) Run and protect: getVolumeSize, Return response: {'truesize': '31138512896', 'apparentsize': '31138512896'}
Thread-550892::DEBUG::2013-07-17 19:06:14,338::task::1172::TaskManager.Task::(prepare) Task=`7379fc72-5137-4432-861c-85b1f33ba843`::finished: {'truesize': '31138512896', 'apparentsize': '31138512896'}
Thread-550892::DEBUG::2013-07-17 19:06:14,338::task::588::TaskManager.Task::(_updateState) Task=`7379fc72-5137-4432-861c-85b1f33ba843`::moving from state preparing -> state finished
Thread-550892::DEBUG::2013-07-17 19:06:14,338::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea': < ResourceRef 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea', isValid: 'True' obj: 'None'>}
Thread-550892::DEBUG::2013-07-17 19:06:14,338::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-550892::DEBUG::2013-07-17 19:06:14,338::resourceManager::538::ResourceManager::(releaseResource) Trying to release resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea'
Thread-550892::DEBUG::2013-07-17 19:06:14,338::resourceManager::553::ResourceManager::(releaseResource) Released resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea' (0 active users)
Thread-550892::DEBUG::2013-07-17 19:06:14,338::resourceManager::558::ResourceManager::(releaseResource) Resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea' is free, finding out if anyone is waiting for it.
Thread-550892::DEBUG::2013-07-17 19:06:14,339::resourceManager::565::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea', Clearing records.
Thread-550892::DEBUG::2013-07-17 19:06:14,339::task::978::TaskManager.Task::(_decref) Task=`7379fc72-5137-4432-861c-85b1f33ba843`::ref 0 aborting False
Dmesg Snippet:

__ratelimit: 28 callbacks suppressed
end_request: I/O error, dev dm-64, sector 5769088
end_request: I/O error, dev dm-64, sector 5769200
end_request: I/O error, dev dm-64, sector 5507072
end_request: I/O error, dev dm-64, sector 5507080
end_request: I/O error, dev dm-64, sector 5507072
end_request: I/O error, dev dm-64, sector 5506944
end_request: I/O error, dev dm-64, sector 5507056
end_request: I/O error, dev dm-64, sector 1312768
end_request: I/O error, dev dm-64, sector 1312776
end_request: I/O error, dev dm-64, sector 1312768
end_request: I/O error, dev dm-64, sector 6031232
end_request: I/O error, dev dm-64, sector 6031344
end_request: I/O error, dev dm-64, sector 5769216
end_request: I/O error, dev dm-64, sector 5769224
end_request: I/O error, dev dm-64, sector 5769216
end_request: I/O error, dev dm-64, sector 6293376
end_request: I/O error, dev dm-64, sector 6293488
end_request: I/O error, dev dm-64, sector 6031360
end_request: I/O error, dev dm-64, sector 6031368
end_request: I/O error, dev dm-64, sector 6031360
end_request: I/O error, dev dm-64, sector 8390528
end_request: I/O error, dev dm-64, sector 8390640
end_request: I/O error, dev dm-64, sector 6293504
end_request: I/O error, dev dm-64, sector 6293512
end_request: I/O error, dev dm-64, sector 6293504
__ratelimit: 25 callbacks suppressed
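For snippets like the one above, a small hedged helper (the function name and inline sample are illustrative, not from the thread) can condense the repeated end_request lines into a per-device error count, which quickly shows whether dm-64 is the only affected device. In practice you would pipe `dmesg` into it.

```shell
# Count I/O errors per device from kernel log lines of the form
# "end_request: I/O error, dev dm-64, sector N" (the EL6-era format above).
count_io_errors() {
  grep 'end_request: I/O error' \
    | awk '{ gsub(/,/, "", $5); print $5 }' \
    | sort | uniq -c | sort -rn   # device name is field 5; strip the comma
}

# Inlined sample for illustration; normally: dmesg | count_io_errors
count_io_errors <<'EOF'
end_request: I/O error, dev dm-64, sector 5769088
end_request: I/O error, dev dm-64, sector 5769200
end_request: I/O error, dev dm-7, sector 1312768
EOF
```

Each output line is a count followed by a device name, most errors first; `dmsetup info -c` can then map the dm-N name back to its multipath map or LV.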
Thanks,
Matt Curry
Matt@MattCurry.com
This is a PRIVATE message. If you are not the intended recipient, please delete without copying and kindly advise us by e-mail of the mistake in delivery. NOTE: Regardless of content, this e-mail shall not operate to bind SKOPOS to any order or other contract unless pursuant to explicit written agreement or government initiative expressly permitting the use of e-mail for such purpose.
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

On 07/17/2013 06:27 PM, Doron Fediuck wrote:
----- Original Message -----
| From: "Mike Burns" <mburns@redhat.com>
| To: board@ovirt.org, "users" <users@ovirt.org>
| Sent: Wednesday, July 17, 2013 6:00:01 PM
| Subject: [Users] oVirt Weekly Meeting Minutes -- 2013-06-17
|
| * ovirt-node base images available, will produce vdsm images once vdsm
|   is available (mburns, 14:04:55)
Please be sure to pick up the latest mom (0.3.2.1 or later, which should be built today). This mom version includes vital ballooning support that VDSM now needs.
Is this in Fedora/EL6? or do we need to carry it in ovirt.org repos?

On 07/18/2013 13:29, Mike Burns wrote:
On 07/17/2013 06:27 PM, Doron Fediuck wrote:
----- Original Message -----
| From: "Mike Burns" <mburns@redhat.com>
| Subject: [Users] oVirt Weekly Meeting Minutes -- 2013-06-17
Please be sure to pick up the latest mom (0.3.2.1 or later, which should be built today). This mom version includes vital ballooning support that VDSM now needs.
Is this in Fedora/EL6? or do we need to carry it in ovirt.org repos?
Fedora 18 can't install the nightly anymore due to a missing dependency on this latest mom. Please add it to the repository.
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com

On 07/18/2013 08:13 AM, Sandro Bonazzola wrote:
On 07/18/2013 13:29, Mike Burns wrote:
On 07/17/2013 06:27 PM, Doron Fediuck wrote:
----- Original Message -----
| From: "Mike Burns" <mburns@redhat.com>
| Subject: [Users] oVirt Weekly Meeting Minutes -- 2013-06-17
Please be sure to pick up the latest mom (0.3.2.1 or later, which should be built today). This mom version includes vital ballooning support that VDSM now needs.
Is this in Fedora/EL6? or do we need to carry it in ovirt.org repos?
Fedora 18 can't install the nightly anymore due to a missing dependency on this latest mom. Please add it to the repository.
Ok, that's f18. What about EL6 and F19?

Where do I get the builds to post?

Mike

----- Original Message -----
| From: "Mike Burns" <mburns@redhat.com>
| To: "Sandro Bonazzola" <sbonazzo@redhat.com>
| Cc: "users" <users@ovirt.org>, board@ovirt.org
| Sent: Thursday, July 18, 2013 3:17:41 PM
| Subject: Re: [Users] oVirt Weekly Meeting Minutes -- 2013-06-17
|
| Ok, that's f18. What about EL6 and F19?
|
| Where do I get the builds to post?
|
| Mike

Mike, it's all available in koji:
https://koji.fedoraproject.org/koji/packageinfo?packageID=12742

We're now waiting for a re-spin Adam needs to do, which will make mom and vdsm support ballooning. So we should have a newer vdsm taking the newer mom version.

On Wed, Jul 17, 2013 at 11:00:01AM -0400, Mike Burns wrote:
> Minutes: http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-17-14.00.html
> Minutes (text): http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-17-14.00.txt
> Log: http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-17-14.00.log.html
>
> [meeting summary snipped]
>
> Action Items
> ------------
> * fsimonce to rebuild vdsm rc2 to include glance
I've tagged vdsm with rc2; however, minutes later it came to my attention (thanks, Meni) that Vdsm ties itself into a knot when requested to create a bridgeless (non-VM) network. A fix has been posted at http://gerrit.ovirt.org/17085/ , but the master branch of vdsm is NOT of beta quality at the moment.

----- Original Message -----
| From: "Dan Kenigsberg" <danken@redhat.com>
| To: "Mike Burns" <mburns@redhat.com>, vdsm-devel@fedorahosted.org
| Cc: board@ovirt.org, "users" <users@ovirt.org>
| Sent: Thursday, July 18, 2013 6:31:52 PM
| Subject: Re: [Users] oVirt Weekly Meeting Minutes -- 2013-06-17
|
| [quoted minutes snipped]
|
| I've tagged vdsm with rc2; however, minutes later it came to my
| attention (thanks, Meni) that Vdsm ties itself into a knot when
| requested to create a bridgeless (non-VM) network.
|
| A fix has been posted at
|
| http://gerrit.ovirt.org/17085/
|
| but the master branch of vdsm is NOT of beta quality at the moment.

Plus we have http://gerrit.ovirt.org/#/c/17044/ to use the latest mom.

On 07/18/2013 11:59 AM, Doron Fediuck wrote:
> [earlier quotes snipped]
>
> Plus we have http://gerrit.ovirt.org/#/c/17044/ to use the latest mom.
Is there an ETA for a more stable vdsm?
_______________________________________________
Board mailing list
Board@ovirt.org
http://lists.ovirt.org/mailman/listinfo/board

----- Original Message -----
| From: "Mike Burns" <mburns@redhat.com>
| To: "Doron Fediuck" <dfediuck@redhat.com>
| Cc: "Dan Kenigsberg" <danken@redhat.com>, vdsm-devel@fedorahosted.org, board@ovirt.org, "users" <users@ovirt.org>
| Sent: Thursday, July 18, 2013 7:19:13 PM
| Subject: Re: [Users] oVirt Weekly Meeting Minutes -- 2013-06-17
|
| [earlier quotes snipped]
|
| Is there an ETA for a more stable vdsm?

The mom part should be fine now. It's the network issue Dan reported that is still pending.

On Thu, Jul 18, 2013 at 12:29:57PM -0400, Doron Fediuck wrote:
> [earlier quotes snipped]
>
> The mom part should be fine now. It's the network issue Dan reported
> that is still pending.
Actually, the mom part has been pushed immediately after the non-VM-network bugfix...
participants (6)

- Ayal Baron
- Dan Kenigsberg
- Doron Fediuck
- Matt Curry
- Mike Burns
- Sandro Bonazzola