Re: [ovirt-users] ovirt and glusterfs setup


On Feb 18, 2015, at 9:51 PM, Darrell Budic wrote:

For a somewhat dissenting opinion: I have a (currently) 2-node gluster/compute
system with a self-hosted engine running, a second cluster of 3 hosts sharing
the gluster servers from the first just fine in production, and a 3-node dev
system. On top of ZFS, even :)

That said, I've broken it several times and had interesting experiences fixing
it. There are also two bugs out there that I'd consider blockers for production
use of gluster servers/hosts with oVirt. In particular, Bug 1172905
<https://bugzilla.redhat.com/show_bug.cgi?id=1172905> (gluster VMs pause when
vdsmd is restarted) in combination with Bug 1158108
<https://bugzilla.redhat.com/show_bug.cgi?id=1158108> (vdsm leaks small amounts
of memory) means you have to be careful not to run out of RAM on a gluster
server/host node, or you pause some VMs and have to restart them to recover. I
was already running this config when the problem surfaced in 3.5, so I'm limping
along, but I wouldn't set up this config or use gluster in a new deployment
right now. Side note: as far as I can tell this doesn't freeze VMs mounted via
NFS from a gluster server, so you can do that (and I'm working on migrating to
it). And I currently have fencing disabled because it can be ugly on a gluster
system. The new arguments to prevent fencing when part of the cluster is down
should work around this; I'm just waiting until my third gluster node is online
before going back to it.

oVirt won't help you understand the best way to deploy your gluster bricks
either. For instance, your hosted engine should be on a brick by itself, with
n-way replication across all of your hosted-engine servers. Your VMs should
probably be on a distributed-replicated volume, or the newly available
dispersed-mode volume, to get the benefit of multiple servers without having to
write to every brick all the time (unless your use case demands 4 copies of
your data for redundancy). Allocate extra RAM for caching, too; it helps a lot.
Proper setup of the server name, and use of localhost mounts, ctdb, or
keepalived (and an understanding of why you want that), is important too.

Bottom line: Gluster is complex, no matter how easy the oVirt interface makes
it look. If you aren't prepared to get down and dirty with your network file
system, I wouldn't recommend this. If you are, it's good stuff, and I'm really
looking forward to libgfapi integration in oVirt beyond the dev builds.

  -Darrell

> On Feb 18, 2015, at 3:06 PM, Donny D <donny@cloudspin.me> wrote:
>
> What he said
>
> Happy Connecting. Sent from my Sprint Samsung Galaxy S® 5
>
> -------- Original message --------
> From: Scott Worthington <scott.c.worthington@gmail.com>
> Date: 02/18/2015 2:03 PM (GMT-07:00)
> To: Donny D <donny@cloudspin.me>
> Subject: Re: [ovirt-users] ovirt and glusterfs setup
>
> > I did not have a good experience putting both gluster and virt on the same
> > node. I was doing hosted engine with replicate across two nodes, and one day
> > it went into split-brain hell... I was never able to track down why. However,
> > I do have a gluster with a distribute-and-replica setup on its own with a
> > couple of nodes, and it has given me zero problems in the last 60 days. It
> > seems to me that gluster and virt need to stay separate for now. Both are
> > great products and both work as described, just not on the same node at the
> > same time.
>
> The issue, as I perceive it, is that newbies find Jason Brooks' blog:
>   http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-part-two/
>
> And then newbies think this Red Hat blog is production quality. In my opinion,
> the blog how-to is okay (not really, IMHO) for a lab, but not for production.
>
> Since fencing is important in oVirt, having gluster on the hosts is a no-no,
> since a non-responsive host could be fenced at any time -- and the engine
> could fence multiple hosts (and bork a locally hosted gluster file system,
> and then screw up the entire gluster cluster).
>
> --ScottW
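
To make Darrell's brick-layout advice concrete, here is a minimal sketch using
the stock gluster CLI. The hostnames (gs1, gs2, gs3), brick paths, and volume
names are illustrative only, and the exact options you want depend on your
gluster release and workload:

    # Hosted engine on its own small volume, replicated across every
    # hosted-engine-capable server (3-way here).
    gluster volume create engine replica 3 \
        gs1:/bricks/engine gs2:/bricks/engine gs3:/bricks/engine
    gluster volume start engine

    # VM images on a distributed-replicated volume: three 2-way replica
    # pairs spread over the same three servers, so a write only touches
    # one pair rather than every brick. (A "disperse" volume is the other
    # option Darrell mentions.)
    gluster volume create vmstore replica 2 \
        gs1:/bricks/vm-a gs2:/bricks/vm-a \
        gs2:/bricks/vm-b gs3:/bricks/vm-b \
        gs3:/bricks/vm-c gs1:/bricks/vm-c
    gluster volume start vmstore

    # "Allocate extra RAM for caching": enlarge the read cache on the
    # VM volume.
    gluster volume set vmstore performance.cache-size 1GB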
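The "localhost mounts, ctdb, or keepalived" point, the NFS workaround, and
Scott's split-brain worry can likewise be sketched on the client and volume
side. Again the names are illustrative and the option spellings vary a little
between gluster releases, so treat this as a starting point rather than a
recipe:

    # Native FUSE mount that can still fetch the volume layout if the
    # first server is down (in oVirt this kind of option can be supplied
    # as a custom mount option on the storage domain).
    mount -t glusterfs -o backup-volfile-servers=gs2:gs3 gs1:/vmstore /mnt/vmstore

    # Darrell's interim workaround for the VM-pause bug: expose the same
    # volume to oVirt over plain NFSv3 from a gluster server.
    mount -t nfs -o vers=3 gs1:/vmstore /mnt/vmstore-nfs

    # Quorum settings that blunt the split-brain scenario Scott hit:
    # bricks shut down when a server loses sight of most of its peers,
    # and clients refuse writes when a replica set loses its majority.
    gluster volume set vmstore cluster.server-quorum-type server
    gluster volume set vmstore cluster.quorum-type auto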

Bill Dossett wrote:

thanks for all the replies on this… very enlightening.

I've got a few servers that I am going to deploy with – some Dell 2950s, and
luckily they came out of production recently and all have dual 10Gb cards in
them. I'm not sure what they have for local storage yet, so I may need to beef
that up, but I think it's going to be a good base lab.

I am not planning on anything production, just want to get ahead of the game,
and this is pretty much just proof of concept at this stage – maybe in a year's
time or so, once Red Hat have certified it, if they do…

Thanks
Bill
participants (3)

- Bill Dossett
- Darrell Budic
- Donny D