
Hello,

We're planning a move from our old building to a new one a few meters away. In a similar way to Martijn (https://www.mail-archive.com/users@ovirt.org/msg33182.html), I have maintenance planned on our storage side.

Say an oVirt DC is using a SAN's LUN via iSCSI (Equallogic). This SAN allows me to set up block replication between two SANs, seen by oVirt as one (Dell calls it SyncRep), and then switch all the iSCSI accesses to the replicated LUN.

When doing this, the iSCSI stack of each oVirt host notices the disconnection, tries to reconnect, and succeeds. Amongst our hosts, this takes between 4 and 15 seconds. When this happens fast enough, the oVirt engine and the VMs don't even notice, and they keep running happily. When it takes more than 4 seconds, there are two cases:

1 - The hosts and/or oVirt and/or the SPM (I actually don't know which) notices that there is a storage failure and pauses the VMs. When the iSCSI stack reconnects, the VMs are automatically recovered from pause, and this all takes less than 30 seconds. That is very acceptable for us, as this action is extremely rare.

2 - Same storage failure, VMs paused, but some VMs stay in pause mode forever. A manual "run" action is required; when done, everything recovers correctly. This is also quite acceptable, but here come my questions:

My questions:
- *WHAT* process or piece of code or oVirt part is responsible for deciding when to UN-pause a VM, and under what conditions? That would help me understand why some cases work even more smoothly than others.
- Are there related timeouts I could play with in the engine-config options?
- [a bit off-topic] Is it safe to increase some iSCSI timeouts or buffer sizes in the hope that this kind of disconnection would go unnoticed?

--
Nicolas ECARNOT
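For reference, a minimal sketch of how such a reconnection can be observed from a host, assuming the standard open-iscsi tooling (iscsiadm and the iscsid service, which are assumptions about the host setup, not something stated above):

  # show current iSCSI sessions, their state and attached devices
  iscsiadm -m session -P 3
  # follow reconnection attempts during the SyncRep switch
  journalctl -f -u iscsid        # or: tail -f /var/log/messages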

Hi,

Are you aware of the fact that EQL sync replication is just about replication, with no high availability at all? I am not even sure it does IP failover itself, so better think in terms of minutes of interruption rather than seconds.

Anyway, don't count on oVirt's pause/unpause; there is a real chance that it will go horribly wrong. A scheduled maintenance window where everything gets shut down would be best practice.

Juergen

On 5/30/2016 3:07 PM, Nicolas Ecarnot wrote:
Hello,
We're planning a move from our old building to a new one a few meters away.
In a similar way to Martijn (https://www.mail-archive.com/users@ovirt.org/msg33182.html), I have maintenance planned on our storage side.
Say an oVirt DC is using a SAN's LUN via iSCSI (Equallogic). This SAN allows me to set up block replication between two SANs, seen by oVirt as one (Dell calls it SyncRep), and then switch all the iSCSI accesses to the replicated LUN.
When doing this, the iSCSI stack of each oVirt host notices the disconnection, tries to reconnect, and succeeds. Amongst our hosts, this takes between 4 and 15 seconds.
When this happens fast enough, the oVirt engine and the VMs don't even notice, and they keep running happily.
When it takes more than 4 seconds, there are two cases:
1 - The hosts and/or oVirt and/or the SPM (I actually don't know which) notices that there is a storage failure and pauses the VMs. When the iSCSI stack reconnects, the VMs are automatically recovered from pause, and this all takes less than 30 seconds. That is very acceptable for us, as this action is extremely rare.
2 - Same storage failure, VMs paused, but some VMs stay in pause mode forever. A manual "run" action is required; when done, everything recovers correctly. This is also quite acceptable, but here come my questions:
My questions:
- *WHAT* process or piece of code or oVirt part is responsible for deciding when to UN-pause a VM, and under what conditions? That would help me understand why some cases work even more smoothly than others.
- Are there related timeouts I could play with in the engine-config options?
- [a bit off-topic] Is it safe to increase some iSCSI timeouts or buffer sizes in the hope that this kind of disconnection would go unnoticed?

On 30/05/2016 15:30, InterNetX - Juergen Gotteswinter wrote:
Hi,
Are you aware of the fact that EQL sync replication is just about replication, with no high availability at all? I am not even sure it does IP failover itself, so better think in terms of minutes of interruption rather than seconds.
Hi Juergen,

I'm absolutely aware that there is no HA discussed here, at least in my mind. It does IP fail-over, but I don't trust it blindly, which is why I'm doing numerous tests and measurements. I'm pleasantly surprised by how the iSCSI stack is reacting, and its log files are readable enough for me to decide.

Actually, I was more worried about an iSCSI reconnection storm, but googling about it does not seem to turn up any warnings.
Anyway, don't count on oVirt's pause/unpause; there is a real chance that it will go horribly wrong. A scheduled maintenance window where everything gets shut down would be best practice.
Indeed, this would be the best choice, if I had it.

--
Nicolas ECARNOT

On 5/30/2016 3:59 PM, Nicolas Ecarnot wrote:
On 30/05/2016 15:30, InterNetX - Juergen Gotteswinter wrote:
Hi,
Are you aware of the fact that EQL sync replication is just about replication, with no high availability at all? I am not even sure it does IP failover itself, so better think in terms of minutes of interruption rather than seconds.
Hi Juergen,
I'm absolutely aware that there is no HA discussed here, at least in my mind. It does IP fail-over, but I don't trust it blindly, which is why I'm doing numerous tests and measurements. I'm pleasantly surprised by how the iSCSI stack is reacting, and its log files are readable enough for me to decide.
Actually, I was more worried about an iSCSI reconnection storm, but googling about it does not seem to turn up any warnings.
This works pretty well with the EQL boxes, unless you use the EQL without the HIT Kit. With the HIT Kit installed on each client, I don't think this will cause problems.
Anyway, don't count on oVirt's pause/unpause; there is a real chance that it will go horribly wrong. A scheduled maintenance window where everything gets shut down would be best practice.
Indeed, this would be the best choice, if I had it.

On Mon, May 30, 2016 at 4:07 PM, Nicolas Ecarnot <nicolas@ecarnot.net> wrote:
Hello,
We're planning a move from our old building to a new one a few meters away.
In a similar way to Martijn (https://www.mail-archive.com/users@ovirt.org/msg33182.html), I have maintenance planned on our storage side.
Say an oVirt DC is using a SAN's LUN via iSCSI (Equallogic). This SAN allows me to set up block replication between two SANs, seen by oVirt as one (Dell calls it SyncRep), and then switch all the iSCSI accesses to the replicated LUN.
When doing this, the iSCSI stack of each oVirt host notices the disconnection, tries to reconnect, and succeeds. Amongst our hosts, this takes between 4 and 15 seconds.
When this happens fast enough, the oVirt engine and the VMs don't even notice, and they keep running happily.
When it takes more than 4 seconds, there are two cases:
1 - The hosts and/or oVirt and/or the SPM (I actually don't know which) notices that there is a storage failure and pauses the VMs. When the iSCSI stack reconnects, the VMs are automatically recovered from pause, and this all takes less than 30 seconds. That is very acceptable for us, as this action is extremely rare.
2 - Same storage failure, VMs paused, but some VMs stay in pause mode forever. A manual "run" action is required; when done, everything recovers correctly. This is also quite acceptable, but here come my questions:
My questions:
- *WHAT* process or piece of code or oVirt part is responsible for deciding when to UN-pause a VM, and under what conditions?
VMs get paused by qemu when you get ENOSPC or some other I/O error. This probably happens when a VM is writing to storage and all paths to storage are faulty - with the current configuration, the SCSI layer will fail after 5 seconds, and if no path is available, the write will fail.

If vdsm's storage monitoring system detected the issue, the storage domain becomes invalid. When the storage domain becomes valid again, we try to resume all VMs paused because of I/O errors.

Storage monitoring is done every 10 seconds in normal conditions, but in the current release there can be delays of up to a couple of minutes in extreme conditions, for example with 50 storage domains and a lot of I/O going on. So basically, the storage domain monitor may miss an error on storage, never become invalid, and therefore never become valid again, and the VM will have to be resumed manually. See https://bugzilla.redhat.com/1081962

In oVirt 4.0 monitoring should be improved and will always check storage every 10 seconds, but even this cannot guarantee that we will detect all storage errors, for example if the storage outage is shorter than 10 seconds. But I guess the chance that a storage outage was shorter than 10 seconds, yet long enough to cause a VM to pause, is very low.
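As a side note, a quick way to confirm from a host which VMs qemu has left paused is libvirt's read-only shell - a minimal sketch, assuming a standard oVirt host where virsh needs the read-only flag to bypass vdsm's SASL authentication:

  # list all domains known to libvirt on this host and their state;
  # VMs paused by qemu on I/O errors show up as "paused"
  virsh -r list --all

The resume itself is then the manual "run" action from the engine UI mentioned above.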
That would help me understand why some cases work even more smoothly than others.
- Are there related timeouts I could play with in the engine-config options?
Nothing on the engine side...
- [a bit off-topic] Is it safe to increase some iSCSI timeouts or buffer sizes in the hope that this kind of disconnection would go unnoticed?
But you may modify the multipath configuration on the host. We now use this multipath configuration (/etc/multipath.conf):

# VDSM REVISION 1.3

defaults {
    polling_interval            5
    no_path_retry               fail
    user_friendly_names         no
    flush_on_last_del           yes
    fast_io_fail_tmo            5
    dev_loss_tmo                30
    max_fds                     4096
    deferred_remove             yes
}

devices {
    device {
        all_devs                yes
        no_path_retry           fail
    }
}

This enforces failing of I/O requests on devices that would by default queue such requests for a long or unlimited time. Queuing requests is very bad for vdsm, and causes various commands to block for minutes during a storage outage, failing various flows in vdsm and the UI. See https://bugzilla.redhat.com/880738

However, in your case, using queuing may be the best way to do the switch from one storage to another in the smoothest way. You may try this setting:

devices {
    device {
        all_devs                yes
        no_path_retry           30
    }
}

This will queue I/O requests for 30 seconds before failing. Using this normally would be a bad idea with vdsm, since during a storage outage vdsm may block for 30 seconds when no path is available, and it is not designed for this behavior, but blocking from time to time for a short time should be OK.

I think that modifying the configuration and reloading the multipathd service should be enough to use the new settings, but I'm not sure whether this changes existing sessions or open devices. Adding Ben to add more info about this.

Nir

On 30/05/2016 21:09, Nir Soffer wrote... SOME VERY VALUABLE ANSWERS!

Thank you very much Nir, as your answers will give me food for thought for the weeks to come. It's late here, I'll begin checking all this tomorrow, but just a note:

This enforces failing of I/O requests on devices that would by default queue such requests for a long or unlimited time. Queuing requests is very bad for vdsm, and causes various commands to block for minutes during a storage outage, failing various flows in vdsm and the UI. See https://bugzilla.redhat.com/880738

Though we own an active Red Hat customer subscription, I logged in and yet I cannot access the BZ above. I'm sure you can help :)

--
Nicolas ECARNOT

On 30 May 2016, at 21:21, Nicolas Ecarnot <nicolas@ecarnot.net> wrote:
On 30/05/2016 21:09, Nir Soffer wrote... SOME VERY VALUABLE ANSWERS!
Thank you very much Nir, as your answers will give me food for thought for the weeks to come.
It's late here, I'll begin checking all this tomorrow, but just a note:
This enforces failing of I/O requests on devices that would by default queue such requests for a long or unlimited time. Queuing requests is very bad for vdsm, and causes various commands to block for minutes during a storage outage, failing various flows in vdsm and the UI. See https://bugzilla.redhat.com/880738
Though we own an active Red Hat customer subscription, I logged in and yet I cannot access the BZ above. I'm sure you can help :)
Hi Nicolas,

Ugh, there are procedural issues with that bug, it seems. But in a nutshell, it is shipped fixed by [1] (so the 3.6 release), and the actual fixes are [2] and [3]. Alternatively, it's described in the oVirt bug [4].

Thanks,
Michal

[1] https://rhn.redhat.com/errata/RHBA-2016-0362.html
[2] https://gerrit.ovirt.org/#/c/44855/
[3] https://gerrit.ovirt.org/#/c/42189/
[4] https://bugzilla.redhat.com/show_bug.cgi?id=1225162

On Mon, May 30, 2016 at 10:09:25PM +0300, Nir Soffer wrote:
On Mon, May 30, 2016 at 4:07 PM, Nicolas Ecarnot <nicolas@ecarnot.net> wrote:
Hello,
We're planning a move from our old building to a new one a few meters away.
In a similar way to Martijn (https://www.mail-archive.com/users@ovirt.org/msg33182.html), I have maintenance planned on our storage side.
Say an oVirt DC is using a SAN's LUN via iSCSI (Equallogic). This SAN allows me to set up block replication between two SANs, seen by oVirt as one (Dell calls it SyncRep), and then switch all the iSCSI accesses to the replicated LUN.
When doing this, the iSCSI stack of each oVirt host notices the disconnection, tries to reconnect, and succeeds. Amongst our hosts, this takes between 4 and 15 seconds.
When this happens fast enough, the oVirt engine and the VMs don't even notice, and they keep running happily.
When it takes more than 4 seconds, there are two cases:
1 - The hosts and/or oVirt and/or the SPM (I actually don't know which) notices that there is a storage failure and pauses the VMs. When the iSCSI stack reconnects, the VMs are automatically recovered from pause, and this all takes less than 30 seconds. That is very acceptable for us, as this action is extremely rare.
2 - Same storage failure, VMs paused, but some VMs stay in pause mode forever. A manual "run" action is required; when done, everything recovers correctly. This is also quite acceptable, but here come my questions:
My questions:
- *WHAT* process or piece of code or oVirt part is responsible for deciding when to UN-pause a VM, and under what conditions?
VMs get paused by qemu when you get ENOSPC or some other I/O error. This probably happens when a VM is writing to storage and all paths to storage are faulty - with the current configuration, the SCSI layer will fail after 5 seconds, and if no path is available, the write will fail.
If vdsm's storage monitoring system detected the issue, the storage domain becomes invalid. When the storage domain becomes valid again, we try to resume all VMs paused because of I/O errors.
Storage monitoring is done every 10 seconds in normal conditions, but in the current release there can be delays of up to a couple of minutes in extreme conditions, for example with 50 storage domains and a lot of I/O going on. So basically, the storage domain monitor may miss an error on storage, never become invalid, and therefore never become valid again, and the VM will have to be resumed manually. See https://bugzilla.redhat.com/1081962
In oVirt 4.0 monitoring should be improved and will always check storage every 10 seconds, but even this cannot guarantee that we will detect all storage errors, for example if the storage outage is shorter than 10 seconds. But I guess the chance that a storage outage was shorter than 10 seconds, yet long enough to cause a VM to pause, is very low.
That would help me understand why some cases work even more smoothly than others.
- Are there related timeouts I could play with in the engine-config options?
Nothing on the engine side...
- [a bit off-topic] Is it safe to increase some iSCSI timeouts or buffer sizes in the hope that this kind of disconnection would go unnoticed?
But you may modify the multipath configuration on the host.
We now use this multipath configuration (/etc/multipath.conf):
# VDSM REVISION 1.3
defaults {
    polling_interval            5
    no_path_retry               fail
    user_friendly_names         no
    flush_on_last_del           yes
    fast_io_fail_tmo            5
    dev_loss_tmo                30
    max_fds                     4096
    deferred_remove             yes
}
devices {
    device {
        all_devs                yes
        no_path_retry           fail
    }
}
This enforces failing of I/O requests on devices that would by default queue such requests for a long or unlimited time. Queuing requests is very bad for vdsm, and causes various commands to block for minutes during a storage outage, failing various flows in vdsm and the UI. See https://bugzilla.redhat.com/880738
However, in your case, using queuing may be the best way to do the switch from one storage to another in the smoothest way.
You may try this setting:
devices {
    device {
        all_devs                yes
        no_path_retry           30
    }
}
This will queue I/O requests for 30 seconds before failing. Using this normally would be a bad idea with vdsm, since during a storage outage vdsm may block for 30 seconds when no path is available, and it is not designed for this behavior, but blocking from time to time for a short time should be OK.
I think that modifying the configuration and reloading the multipathd service should be enough to use the new settings, but I'm not sure whether this changes existing sessions or open devices.
Adding Ben to add more info about this.
Reloading the multipathd service will update this setting on all existing devices. Outside of vdsm, multipath settings like this are fairly common. So, from multipath's point of view this is completely reasonable. -Ben
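For example, applying the change on a host might look roughly like this - a hedged sketch, since the exact service command depends on the distribution:

  # after editing /etc/multipath.conf on the host
  systemctl reload multipathd            # or: service multipathd reload
  multipathd -k"show config"             # verify the effective no_path_retry value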
Nir

Please note that it's necessary to add a magic line '# VDSM PRIVATE' as the second line in /etc/multipath.conf. Otherwise vdsm would overwrite your settings.
Thus, /etc/multipath.conf should start with the following two lines:

# VDSM REVISION 1.3
# VDSM PRIVATE

On Mon, 2016-05-30 at 22:09 +0300, Nir Soffer wrote:
But you may modify multipath configuration on the host. [...]
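Putting Nir's and Pavel's snippets together, a host's /etc/multipath.conf for the 30-second queuing variant would look something like this (a sketch assembled only from the pieces quoted in this thread; adjust to your environment):

# VDSM REVISION 1.3
# VDSM PRIVATE

defaults {
    polling_interval            5
    no_path_retry               fail
    user_friendly_names         no
    flush_on_last_del           yes
    fast_io_fail_tmo            5
    dev_loss_tmo                30
    max_fds                     4096
    deferred_remove             yes
}

devices {
    device {
        all_devs                yes
        no_path_retry           30
    }
}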
participants (6)
- Benjamin Marzinski
- InterNetX - Juergen Gotteswinter
- Michal Skrivanek
- Nicolas Ecarnot
- Nir Soffer
- Pavel Gashev