Re: Poor gluster performances over 10Gbps network

Hi Paul, thank you for your answer. Indeed, I did a `dd` inside a VM to measure the Gluster disk performance. I've also tried `dd` on a hypervisor, writing into one of the replicated gluster bricks, which gives good performance (similar to the logical volume's).

On 08/09/2021 at 12:51, Staniforth, Paul wrote:
Hi Mathieu,
How are you measuring the Gluster disk performance? Also, when using dd you should use oflag=dsync
to avoid buffer caching.
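For illustration, a minimal sketch of the kind of synced write test being suggested here (the target path and sizes are only placeholders, not from the original setup):

# dd if=/dev/zero of=/path/to/testfile bs=4M count=256 oflag=dsync

With oflag=dsync each block is flushed to storage before dd continues, so the reported rate reflects the backend rather than the page cache; without it, the figure is dominated by buffered writes until memory fills up.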
Regards,
Paul S
------------------------------------------------------------------------
From: Mathieu Valois <mvalois@teicee.com>
Sent: 08 September 2021 10:12
To: users <users@ovirt.org>
Subject: [ovirt-users] Poor gluster performances over 10Gbps network
Sorry for the double post, but I don't know whether this mail was received.
Hello everyone,
I know this issue has already been discussed on this mailing list; however, none of the proposed solutions satisfies me.
Here is my situation: I've got 3 hyperconverged gluster oVirt nodes, each with 6 network interfaces bonded in pairs (management, VMs and gluster). The gluster network is on a dedicated bond whose 2 interfaces are directly connected to the 2 other oVirt nodes. Gluster is apparently using it:
# gluster volume status vmstore
Status of volume: vmstore
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster-ov1:/gluster_bricks
/vmstore/vmstore                            49152     0          Y       3019
Brick gluster-ov2:/gluster_bricks
/vmstore/vmstore                            49152     0          Y       3009
Brick gluster-ov3:/gluster_bricks
/vmstore/vmstore
where 'gluster-ov{1,2,3}' are domain names referencing the nodes on the gluster network. This network has 10Gbps capability:
# iperf3 -c gluster-ov3
Connecting to host gluster-ov3, port 5201
[  5] local 10.20.0.50 port 46220 connected to 10.20.0.51 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.16 GBytes  9.92 Gbits/sec   17    900 KBytes
[  5]   1.00-2.00   sec  1.15 GBytes  9.90 Gbits/sec    0    900 KBytes
[  5]   2.00-3.00   sec  1.15 GBytes  9.90 Gbits/sec    4    996 KBytes
[  5]   3.00-4.00   sec  1.15 GBytes  9.90 Gbits/sec    1    996 KBytes
[  5]   4.00-5.00   sec  1.15 GBytes  9.89 Gbits/sec    0    996 KBytes
[  5]   5.00-6.00   sec  1.15 GBytes  9.90 Gbits/sec    0    996 KBytes
[  5]   6.00-7.00   sec  1.15 GBytes  9.90 Gbits/sec    0    996 KBytes
[  5]   7.00-8.00   sec  1.15 GBytes  9.91 Gbits/sec    0    996 KBytes
[  5]   8.00-9.00   sec  1.15 GBytes  9.90 Gbits/sec    0    996 KBytes
[  5]   9.00-10.00  sec  1.15 GBytes  9.90 Gbits/sec    0    996 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.5 GBytes  9.90 Gbits/sec   22          sender
[  5]   0.00-10.04  sec  11.5 GBytes  9.86 Gbits/sec               receiver

iperf Done.
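Since the gluster traffic rides on a dedicated bond, it may also be worth confirming the bond mode and the negotiated link speed on each node; a minimal check, assuming a hypothetical bond name of bond2 with member interface ens1f0:

# cat /proc/net/bonding/bond2
# ethtool ens1f0 | grep -i speed

The iperf3 run above already shows ~10 Gbits/sec for a single stream, so this is mostly a sanity check that both bond members negotiated at 10GbE.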
However, VMs stored on the vmstore gluster volume have poor write performance, oscillating between 100KBps and 30MBps. I almost always observe a write spike (180Mbps) at the beginning, until around 500MB has been written, then it drastically falls to 10MBps, sometimes even less (100KBps). The hypervisors have 32 threads (2 sockets, 8 cores per socket, 2 threads per core).
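To separate guest page-cache effects from the storage path itself, a sustained direct-I/O write test inside a VM could be useful; a minimal sketch using fio (the test file path is only an example):

# fio --name=seqwrite --filename=/var/tmp/fio.test --rw=write --bs=4M --size=2G --direct=1 --ioengine=libaio --iodepth=4

With --direct=1 the initial buffered spike should disappear, and the steady-state figure is what the Gluster volume actually sustains.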
Here are the volume settings:
Volume Name: vmstore
Type: Replicate
Volume ID: XXX
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster-ov1:/gluster_bricks/vmstore/vmstore
Brick2: gluster-ov2:/gluster_bricks/vmstore/vmstore
Brick3: gluster-ov3:/gluster_bricks/vmstore/vmstore
Options Reconfigured:
performance.io-thread-count: 32    # was 16 by default
cluster.granular-entry-heal: enable
storage.owner-gid: 36
storage.owner-uid: 36
cluster.lookup-optimize: off
server.keepalive-count: 5
server.keepalive-interval: 2
server.keepalive-time: 10
server.tcp-user-timeout: 20
network.ping-timeout: 30
server.event-threads: 4
client.event-threads: 8            # was 4 by default
cluster.choose-local: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
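For reference, individual options from the list above can be inspected per volume with `gluster volume get`, e.g.:

# gluster volume get vmstore performance.strict-o-direct
# gluster volume get vmstore network.remote-dio

These two relate to how O_DIRECT writes from the VMs are handled, so they are worth double-checking when write latency is the symptom.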
When I naively write directly to the logical volume, which sits on a hardware RAID5 array of 3 disks, I get decent performance:
# dd if=/dev/zero of=a bs=4M count=2048
2048+0 records in
2048+0 records out
8589934592 bytes (8.6 GB, 8.0 GiB) copied, 17.2485 s, 498 MB/s
# using /dev/urandom as input gives around 200MBps
Moreover, the hypervisors have SSDs which have been configured as lvcache, but I'm unsure how to test it effectively.
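One possible way to see whether the lvcache is actually being used is to read the cache counters that LVM exposes; a sketch, assuming hypothetical volume group/LV names of gluster_vg/gluster_lv:

# lvs -o +cache_total_blocks,cache_used_blocks,cache_dirty_blocks,cache_read_hits,cache_read_misses,cache_write_hits,cache_write_misses gluster_vg/gluster_lv
# dmsetup status gluster_vg-gluster_lv

Rising hit/miss counters during a write test would at least confirm that I/O is passing through the cache layer.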
I can't find where the problem is, as every piece of the chain is apparently doing well...
Thanks to anyone who can help me :)

Hi Mathieu,
With a Linux VM, using dd without oflag=sync means it's using disk buffers, hence the faster throughput at the beginning until the buffers are full.
Regards,
Paul S.
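A quick way to observe the effect Paul describes is to watch the transfer rate over time rather than only the final average; a sketch inside a VM (the file path is only an example):

# dd if=/dev/zero of=/var/tmp/ddtest bs=4M count=1024 status=progress

The rate typically starts high while the guest page cache fills, then drops to what the backend can sustain; adding oflag=dsync (or conv=fsync, which flushes once at the end) gives the sustained figure directly.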