Storage Hardware for Hyperconverged oVirt: Gluster best practice
by ovirt@fateknollogee.com
oVirt + Gluster (hyperconverged) RAID question:
I have 3 nodes of SuperMicro hardware; each node has 1x SATADOM (boot
drive for the OS install) and 6x 1TB SSDs (to be used for Gluster).
For the SSDs, is hardware or software RAID preferred, or should I use an HBA?
The Red Hat docs seem to suggest hardware RAID, while others on the forum say
HBA or software RAID.
What are other folks using?
6 years, 11 months
live migrating vm from 4.1.7 to 4.1.9 failed because of vmx flag
by Gianluca Cecchi
Hello,
I'm trying to update an environment from 4.1.7 to 4.1.9.
I have already upgraded the engine (which is separate) and one host.
This host is now running a pair of VMs that were powered off and then
powered on on it via the "Run Once" feature.
Now I'm trying to evacuate VMs from the other hosts and get them all to
4.1.9, but with the couple of VMs I have tried so far I always get error
events of this type:
Jan 26, 2018 12:40:00 PM Migration failed (VM: dbatest3, Source: ov301,
Destination: ov200).
In engine.log
2018-01-26 12:39:48,267+01 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-8)
[72ccfbb1-735a-4411-a990-cdb2c081e391] Lock Acquired to object
'EngineLock:{exclusiveLocks='[4f6ecae2-7d71-47c9-af23-2b3e49bc08fc=VM]',
sharedLocks=''}'
2018-01-26 12:39:48,350+01 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(org.ovirt.thread.pool-6-thread-8) [72ccfbb1-735a-4411-a990-cdb2c081e391]
Running command: MigrateVmToServerCommand internal: false. Entities
affected : ID: 4f6ecae2-7d71-47c9-af23-2b3e49bc08fc Type: VMAction group
MIGRATE_VM with role type USER
2018-01-26 12:39:48,424+01 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-6-thread-8) [72ccfbb1-735a-4411-a990-cdb2c081e391]
START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true',
hostId='8ef1ce6f-4e38-486c-b3a4-58235f1f1d06',
vmId='4f6ecae2-7d71-47c9-af23-2b3e49bc08fc', srcHost='ov301.mydomain',
dstVdsId='d16e723c-b44c-4c1c-be76-c67911e47ccd',
dstHost='ov200.mydomain:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', consoleAddress='null', maxBandwidth='500',
enableGuestEvents='true', maxIncomingMigrations='2',
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime,
params=[100]}], stalling=[{limit=1, action={name=setDowntime,
params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}},
{limit=3, action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
params=[]}}]]'}), log id: 7013a234
2018-01-26 12:39:48,425+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-6-thread-8) [72ccfbb1-735a-4411-a990-cdb2c081e391]
START, MigrateBrokerVDSCommand(HostName = ov301,
MigrateVDSCommandParameters:{runAsync='true',
hostId='8ef1ce6f-4e38-486c-b3a4-58235f1f1d06',
vmId='4f6ecae2-7d71-47c9-af23-2b3e49bc08fc', srcHost='ov301.mydomain',
dstVdsId='d16e723c-b44c-4c1c-be76-c67911e47ccd',
dstHost='ov200.mydomain:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', consoleAddress='null', maxBandwidth='500',
enableGuestEvents='true', maxIncomingMigrations='2',
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime,
params=[100]}], stalling=[{limit=1, action={name=setDowntime,
params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}},
{limit=3, action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
params=[]}}]]'}), log id: 25d9c017
2018-01-26 12:39:49,620+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(org.ovirt.thread.pool-6-thread-8) [72ccfbb1-735a-4411-a990-cdb2c081e391]
FINISH, MigrateBrokerVDSCommand, log id: 25d9c017
2018-01-26 12:39:49,622+01 INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
(org.ovirt.thread.pool-6-thread-8) [72ccfbb1-735a-4411-a990-cdb2c081e391]
FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 7013a234
2018-01-26 12:39:49,627+01 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-8) [72ccfbb1-735a-4411-a990-cdb2c081e391]
EVENT_ID: VM_MIGRATION_START(62), Correlation ID:
72ccfbb1-735a-4411-a990-cdb2c081e391, Job ID:
b2f39d2c-87f1-480c-b4c7-b8ab09d09318, Call Stack: null, Custom ID: null,
Custom Event ID: -1, Message: Migration started (VM: dbatest3, Source:
ov301, Destination: ov200, User: admin@internal-authz).
2018-01-26 12:39:50,782+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
(DefaultQuartzScheduler1) [f2fc61e] Fetched 3 VMs from VDS
'd16e723c-b44c-4c1c-be76-c67911e47ccd'
2018-01-26 12:39:50,783+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler1) [f2fc61e] VM
'4f6ecae2-7d71-47c9-af23-2b3e49bc08fc'(dbatest3) was unexpectedly detected
as 'MigratingTo' on VDS 'd16e723c-b44c-4c1c-be76-c67911e47ccd'(ov200)
(expected on '8ef1ce6f-4e38-486c-b3a4-58235f1f1d06')
2018-01-26 12:39:50,784+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler1) [f2fc61e] VM
'4f6ecae2-7d71-47c9-af23-2b3e49bc08fc' is migrating to VDS
'd16e723c-b44c-4c1c-be76-c67911e47ccd'(ov200) ignoring it in the refresh
until migration is done
2018-01-26 12:39:51,968+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-15) [] VM '4f6ecae2-7d71-47c9-af23-2b3e49bc08fc' was
reported as Down on VDS 'd16e723c-b44c-4c1c-be76-c67911e47ccd'(ov200)
2018-01-26 12:39:51,969+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-15) [] START, DestroyVDSCommand(HostName = ov200,
DestroyVmVDSCommandParameters:{runAsync='true',
hostId='d16e723c-b44c-4c1c-be76-c67911e47ccd',
vmId='4f6ecae2-7d71-47c9-af23-2b3e49bc08fc', force='false',
secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log
id: 3b49afe3
2018-01-26 12:39:52,236+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-15) [] Failed to destroy VM
'4f6ecae2-7d71-47c9-af23-2b3e49bc08fc' because VM does not exist, ignoring
2018-01-26 12:39:52,237+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-15) [] FINISH, DestroyVDSCommand, log id: 3b49afe3
2018-01-26 12:39:52,237+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-15) [] VM
'4f6ecae2-7d71-47c9-af23-2b3e49bc08fc'(dbatest3) was unexpectedly detected
as 'Down' on VDS 'd16e723c-b44c-4c1c-be76-c67911e47ccd'(ov200) (expected on
'8ef1ce6f-4e38-486c-b3a4-58235f1f1d06')
2018-01-26 12:40:00,237+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler3) [5c4b079b] VM
'4f6ecae2-7d71-47c9-af23-2b3e49bc08fc'(dbatest3) moved from 'MigratingFrom'
--> 'Up'
2018-01-26 12:40:00,237+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler3) [5c4b079b] Adding VM
'4f6ecae2-7d71-47c9-af23-2b3e49bc08fc'(dbatest3) to re-run list
2018-01-26 12:40:00,245+01 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
(DefaultQuartzScheduler3) [5c4b079b] Rerun VM
'4f6ecae2-7d71-47c9-af23-2b3e49bc08fc'. Called from VDS 'ov301'
2018-01-26 12:40:00,287+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(org.ovirt.thread.pool-6-thread-15) [5c4b079b] START,
MigrateStatusVDSCommand(HostName = ov301,
MigrateStatusVDSCommandParameters:{runAsync='true',
hostId='8ef1ce6f-4e38-486c-b3a4-58235f1f1d06',
vmId='4f6ecae2-7d71-47c9-af23-2b3e49bc08fc'}), log id: 65677081
2018-01-26 12:40:00,818+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(org.ovirt.thread.pool-6-thread-15) [5c4b079b] FINISH,
MigrateStatusVDSCommand, log id: 65677081
2018-01-26 12:40:00,823+01 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-15) [5c4b079b] EVENT_ID:
VM_MIGRATION_TO_SERVER_FAILED(120), Correlation ID:
72ccfbb1-735a-4411-a990-cdb2c081e391, Job ID:
b2f39d2c-87f1-480c-b4c7-b8ab09d09318, Call Stack: null, Custom ID: null,
Custom Event ID: -1, Message: Migration failed (VM: dbatest3, Source:
ov301, Destination: ov200).
2018-01-26 12:40:00,825+01 INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(org.ovirt.thread.pool-6-thread-15) [5c4b079b] Lock freed to object
'EngineLock:{exclusiveLocks='[4f6ecae2-7d71-47c9-af23-2b3e49bc08fc=VM]',
sharedLocks=''}'
2018-01-26 12:40:05,796+01 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
(DefaultQuartzScheduler1) [27a52eca] Fetched 2 VMs from VDS
'd16e723c-b44c-4c1c-be76-c67911e47ccd'
I don't see errors in the vdsm logs on the target host, but I do see this
in /var/log/messages on the target host:
Jan 26 12:39:51 ov200 libvirtd: 2018-01-26 11:39:51.179+0000: 2588: error :
virCPUx86UpdateLive:2726 : operation failed: guest CPU doesn't match
specification: missing features: vmx
Indeed, the dbatest3 VM has the "vmx=on" flag in its qemu-kvm command line
on the source host, even though I never configured it explicitly:
[root@ov301 ~]# ps -ef|grep dbatest3
qemu 1455 1 1 2017 ? 08:41:29 /usr/libexec/qemu-kvm -name
guest=dbatest3,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-27-dbatest3/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Westmere,vmx=on -m 32768 ...
A quite similar VM named dbatest4, running on the already upgraded host,
does not have the flag:
[root@ov200 vdsm]# ps -ef|grep dbatest4
qemu 15827 1 3 10:47 ? 00:04:37 /usr/libexec/qemu-kvm -name
guest=dbatest4,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-dbatest4/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu
Westmere -m 32768
The two VMs were created from the same template and with the same options,
if I remember correctly, and across the previous versions I could live
migrate them without problems.
What could have changed?
This would be a real show stopper, because it would mean powering off all
the VMs in order to upgrade.
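For what it's worth, the vmx flag is normally only exposed to a guest when
nested virtualization is enabled on the host the VM was started on, so one
thing worth comparing (purely a guess from the libvirt error above) is the
nested setting on ov301 versus ov200. A minimal sketch to run on each host;
nothing here is oVirt-specific, it just reads the standard kvm module
parameter:

#!/usr/bin/env python
# Minimal sketch: run on each host (e.g. ov301 and ov200) to compare whether
# nested virtualization is enabled. A mismatch would explain why the source
# guest was started with vmx=on while the destination cannot honor it.
import os

def nested_status():
    for module in ("kvm_intel", "kvm_amd"):
        path = "/sys/module/%s/parameters/nested" % module
        if os.path.exists(path):
            with open(path) as f:
                return module, f.read().strip()
    return None, "kvm module not loaded?"

if __name__ == "__main__":
    module, value = nested_status()
    # "Y"/"1" means nested virt is on, "N"/"0" means off.
    print("nested virtualization (%s): %s" % (module, value))

If the values differ between the two hosts, that would match the
"missing features: vmx" error libvirt reports on the destination.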
Thanks in advance for any help,
Gianluca
6 years, 11 months
bonding mode-alb
by Demeter Tibor
Dear members,
I would like to use two switches to make a high-availability network
connection for my NFS storage.
Unfortunately, these switches do not support 802.3ad LACP (really, I can't
stack them), but I've read about the mode-alb and mode-tlb bonding modes.
I know these modes are available in oVirt, but how do they work? And how
safe are they? Are they meant for HA or for load balancing?
I've read some forums where these modes are not recommended for use in
oVirt. What is the truth?
I would like to use the bond only for storage traffic, which will be
separated from other network traffic. I have two 10GbE switches and two
10GbE ports in my nodes.
Thanks in advance,
R
Tibor
6 years, 11 months
Re: [ovirt-users] oVirt 4.1.9 and Spectre-Meltdown checks
by Arman Khalatyan
You should download the microcode from the Intel web page and overwrite
/lib/firmware/intel-ucode or so... please check the readme.
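Roughly, the Intel bundle extracts to an intel-ucode/ directory whose files
replace /lib/firmware/intel-ucode, after which the kernel can late-load the
new microcode. A rough sketch of those steps, assuming the bundle was already
downloaded and extracted to ./intel-ucode (the readme shipped with the
download is authoritative, this only illustrates the idea):

#!/usr/bin/env python
# Rough sketch of the microcode refresh described above; assumes the Intel
# bundle was already extracted so that ./intel-ucode/ exists. Needs root.
import glob
import os
import shutil

SRC = "./intel-ucode"               # extracted from the Intel download (assumption)
DST = "/lib/firmware/intel-ucode"

for path in glob.glob(os.path.join(SRC, "*")):
    shutil.copy(path, DST)          # overwrite the per-CPU-signature files

# Ask the kernel to late-load the new microcode, if the interface exists on
# this kernel; a reboot achieves the same thing.
with open("/sys/devices/system/cpu/microcode/reload", "w") as f:
    f.write("1")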
On Jan 26, 2018, 10:50 AM, "Gianluca Cecchi" <
gianluca.cecchi(a)gmail.com> wrote:
Hello,
nice to see the integration of Spectre/Meltdown info in 4.1.9, both for
guests and hosts, as detailed in the release notes:
I have upgraded my CentOS 7.4 engine VM (outside of the oVirt cluster) and
one oVirt host to 4.1.9.
Now in the General -> Software subtab of the host I see:
OS Version: RHEL - 7 - 4.1708.el7.centos
OS Description: CentOS Linux 7 (Core)
Kernel Version: 3.10.0 - 693.17.1.el7.x86_64
Kernel Features: IBRS: 0, PTI: 1, IBPB: 0
Am I supposed to manually set any particular value?
If I run version 0.32 (updated yesterday) of spectre-meltdown-checker.sh,
I get this on my Dell M610 blade (BIOS Version: 6.4.0, Release Date:
07/18/2013):
[root@ov200 ~]# /home/g.cecchi/spectre-meltdown-checker.sh
Spectre and Meltdown mitigation detection tool v0.32
Checking for vulnerabilities on current system
Kernel is Linux 3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC
2018 x86_64
CPU is Intel(R) Xeon(R) CPU X5690 @ 3.47GHz
Hardware check
* Hardware support (CPU microcode) for mitigation techniques
* Indirect Branch Restricted Speculation (IBRS)
* SPEC_CTRL MSR is available: NO
* CPU indicates IBRS capability: NO
* Indirect Branch Prediction Barrier (IBPB)
* PRED_CMD MSR is available: NO
* CPU indicates IBPB capability: NO
* Single Thread Indirect Branch Predictors (STIBP)
* SPEC_CTRL MSR is available: NO
* CPU indicates STIBP capability: NO
* Enhanced IBRS (IBRS_ALL)
* CPU indicates ARCH_CAPABILITIES MSR availability: NO
* ARCH_CAPABILITIES MSR advertises IBRS_ALL capability: NO
* CPU explicitly indicates not being vulnerable to Meltdown (RDCL_NO):
NO
* CPU vulnerability to the three speculative execution attacks variants
* Vulnerable to Variant 1: YES
* Vulnerable to Variant 2: YES
* Vulnerable to Variant 3: YES
CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
* Checking count of LFENCE opcodes in kernel: YES
> STATUS: NOT VULNERABLE (107 opcodes found, which is >= 70, heuristic to
be improved when official patches become available)
CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
* Mitigation 1
* Kernel is compiled with IBRS/IBPB support: YES
* Currently enabled features
* IBRS enabled for Kernel space: NO (echo 1 >
/sys/kernel/debug/x86/ibrs_enabled)
* IBRS enabled for User space: NO (echo 2 >
/sys/kernel/debug/x86/ibrs_enabled)
* IBPB enabled: NO (echo 1 > /sys/kernel/debug/x86/ibpb_enabled)
* Mitigation 2
* Kernel compiled with retpoline option: NO
* Kernel compiled with a retpoline-aware compiler: NO
* Retpoline enabled: NO
> STATUS: VULNERABLE (IBRS hardware + kernel support OR kernel with
retpoline are needed to mitigate the vulnerability)
CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
* Kernel supports Page Table Isolation (PTI): YES
* PTI enabled and active: YES
* Running as a Xen PV DomU: NO
> STATUS: NOT VULNERABLE (PTI mitigates the vulnerability)
A false sense of security is worse than no security at all, see --disclaimer
[root@ov200 ~]#
So it seems I'm still vulnerable only to Variant 2, but the kernel seems OK
("Kernel is compiled with IBRS/IBPB support: YES") while the BIOS is not,
correct?
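If it helps, the "Kernel Features: IBRS: 0, PTI: 1, IBPB: 0" line shown by
the engine appears to reflect the same knobs the checker script prints; a
small sketch to read them directly on the host. The ibrs_enabled and
ibpb_enabled paths come straight from the checker output above; pti_enabled
is assumed to sit next to them on the RHEL/CentOS 7.4 kernel, and debugfs
requires root:

#!/usr/bin/env python
# Read the mitigation knobs on the host. ibrs_enabled/ibpb_enabled are the
# paths the checker script itself prints; pti_enabled is an assumption for
# this kernel. Values stay at 0 when the CPU microcode lacks the new MSRs.
FEATURES = {
    "IBRS": "/sys/kernel/debug/x86/ibrs_enabled",
    "IBPB": "/sys/kernel/debug/x86/ibpb_enabled",
    "PTI":  "/sys/kernel/debug/x86/pti_enabled",
}

for name, path in sorted(FEATURES.items()):
    try:
        with open(path) as f:
            print("%s: %s" % (name, f.read().strip()))
    except IOError as err:
        print("%s: unreadable (%s)" % (name, err))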
Is RHEL / CentOS expected to follow the retpoline approach too, to mitigate
Variant 2, as Fedora has done for example?
E.g. on my just-updated Fedora 27 laptop I now get:
[g.cecchi@ope46 spectre_meltdown]$ sudo ./spectre-meltdown-checker.sh
[sudo] password for g.cecchi:
Spectre and Meltdown mitigation detection tool v0.32
Checking for vulnerabilities on current system
Kernel is Linux 4.14.14-300.fc27.x86_64 #1 SMP Fri Jan 19 13:19:54 UTC 2018
x86_64
CPU is Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz
Hardware check
* Hardware support (CPU microcode) for mitigation techniques
* Indirect Branch Restricted Speculation (IBRS)
* SPEC_CTRL MSR is available: NO
* CPU indicates IBRS capability: NO
* Indirect Branch Prediction Barrier (IBPB)
* PRED_CMD MSR is available: NO
* CPU indicates IBPB capability: NO
* Single Thread Indirect Branch Predictors (STIBP)
* SPEC_CTRL MSR is available: NO
* CPU indicates STIBP capability: NO
* Enhanced IBRS (IBRS_ALL)
* CPU indicates ARCH_CAPABILITIES MSR availability: NO
* ARCH_CAPABILITIES MSR advertises IBRS_ALL capability: NO
* CPU explicitly indicates not being vulnerable to Meltdown (RDCL_NO):
NO
* CPU vulnerability to the three speculative execution attacks variants
* Vulnerable to Variant 1: YES
* Vulnerable to Variant 2: YES
* Vulnerable to Variant 3: YES
CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
* Mitigated according to the /sys interface: NO (kernel confirms your
system is vulnerable)
> STATUS: VULNERABLE (Vulnerable)
CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
* Mitigated according to the /sys interface: YES (kernel confirms that
the mitigation is active)
* Mitigation 1
* Kernel is compiled with IBRS/IBPB support: NO
* Currently enabled features
* IBRS enabled for Kernel space: NO
* IBRS enabled for User space: NO
* IBPB enabled: NO
* Mitigation 2
* Kernel compiled with retpoline option: YES
* Kernel compiled with a retpoline-aware compiler: YES (kernel reports
full retpoline compilation)
* Retpoline enabled: YES
> STATUS: NOT VULNERABLE (Mitigation: Full generic retpoline)
CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
* Mitigated according to the /sys interface: YES (kernel confirms that
the mitigation is active)
* Kernel supports Page Table Isolation (PTI): YES
* PTI enabled and active: YES
* Running as a Xen PV DomU: NO
> STATUS: NOT VULNERABLE (Mitigation: PTI)
A false sense of security is worse than no security at all, see --disclaimer
[g.cecchi@ope46 spectre_meltdown]$
BTW: I updated this laptop from F26 to F27 some days ago, and I remember
Variant 1 was fixed in F26, while now I see it as vulnerable... I'm going
to check with the Fedora mailing list about this.
Another question: what should I see for a VM regarding Meltdown/Spectre?
Currently in "Guest CPU Type" in the General subtab of the VM I only see
"Westmere".
Should I also see anything about IBRS, etc.?
Thanks,
Gianluca
6 years, 11 months
Ovirt 4.2, failing to connect to VDSM.
by CRiMSON
I did a quick upgrade this afternoon on a dev machine.
Jan 25 11:57:07 Updated: glusterfs-libs-3.12.5-2.el7.x86_64
Jan 25 11:57:08 Updated: glusterfs-client-xlators-3.12.5-2.el7.x86_64
Jan 25 11:57:08 Updated: glusterfs-3.12.5-2.el7.x86_64
Jan 25 11:57:09 Updated: kernel-ml-tools-libs-4.14.15-1.el7.elrepo.x86_64
Jan 25 11:57:09 Updated: kernel-ml-tools-4.14.15-1.el7.elrepo.x86_64
Jan 25 11:57:10 Updated: glusterfs-api-3.12.5-2.el7.x86_64
Jan 25 11:57:10 Updated: glusterfs-fuse-3.12.5-2.el7.x86_64
Jan 25 11:57:10 Updated: glusterfs-cli-3.12.5-2.el7.x86_64
Jan 25 11:57:11 Updated: python-perf-4.14.15-1.el7.elrepo.x86_64
Jan 25 11:57:37 Installed: kernel-ml-devel-4.14.15-1.el7.elrepo.x86_64
Jan 25 11:57:39 Updated: kernel-ml-headers-4.14.15-1.el7.elrepo.x86_64
Jan 25 11:57:52 Installed: kernel-ml-4.14.15-1.el7.elrepo.x86_64
Jan 25 11:57:52 Updated:
rubygem-fluent-plugin-viaq_data_model-0.0.13-1.el7.noarch
This is all that was upgraded.
But now my storage domains are failing to come up, and the host keeps
reporting a connection refused. It's all on one host.
In mom.log I see:
2018-01-25 17:10:49,929 - mom - INFO - MOM starting
2018-01-25 17:10:49,955 - mom.HostMonitor - INFO - Host Monitor starting
2018-01-25 17:10:49,955 - mom - INFO - hypervisor interface vdsmjsonrpcbulk
2018-01-25 17:10:50,013 - mom.vdsmInterface - ERROR - Cannot connect to
VDSM! [Errno 111] Connection refused
2018-01-25 17:10:50,013 - mom - ERROR - Failed to initialize MOM threads
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/mom/__init__.py", line 29, in run
hypervisor_iface = self.get_hypervisor_interface()
File "/usr/lib/python2.7/site-packages/mom/__init__.py", line 217, in
get_hypervisor_interface
return module.instance(self.config)
File
"/usr/lib/python2.7/site-packages/mom/HypervisorInterfaces/vdsmjsonrpcbulkInterface.py",
line 47, in instance
return JsonRpcVdsmBulkInterface()
File
"/usr/lib/python2.7/site-packages/mom/HypervisorInterfaces/vdsmjsonrpcbulkInterface.py",
line 29, in __init__
super(JsonRpcVdsmBulkInterface, self).__init__()
File
"/usr/lib/python2.7/site-packages/mom/HypervisorInterfaces/vdsmjsonrpcInterface.py",
line 43, in __init__
.orRaise(RuntimeError, 'No connection to VDSM.')
File "/usr/lib/python2.7/site-packages/mom/optional.py", line 28, in
orRaise
raise exception(*args, **kwargs)
RuntimeError: No connection to VDSM.
[root@lv426 vdsm]#
My vdsm.log is 0 bytes (nothing is being logged?).
In my engine.log I'm seeing:
2018-01-25 17:12:11,027-05 INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
[] Connecting to lv426.dasgeekhaus.org/127.0.0.1
2018-01-25 17:12:11,028-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-99) [] Command
'GetCapabilitiesVDSCommand(HostName = lv426,
VdsIdAndVdsVDSCommandParametersBase:{hostId='a645af84-3da1-45ed-bab5-2af66b5924dd',
vds='Host[lv426,a645af84-3da1-45ed-bab5-2af66b5924dd]'})' execution failed:
java.net.ConnectException: Connection refused
2018-01-25 17:12:11,028-05 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(EE-ManagedThreadFactory-engineScheduled-Thread-99) [] Failure to refresh
host 'lv426' runtime info: java.net.ConnectException: Connection refused
2018-01-25 17:12:13,517-05 INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)
[] Connecting to lv426.dasgeekhaus.org/127.0.0.1
2018-01-25 17:12:13,517-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-25) [] Command
'GetAllVmStatsVDSCommand(HostName = lv426,
VdsIdVDSCommandParametersBase:{hostId='a645af84-3da1-45ed-bab5-2af66b5924dd'})'
execution failed: java.net.ConnectException: Connection refused
2018-01-25 17:12:13,518-05 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher]
(EE-ManagedThreadFactory-engineScheduled-Thread-25) [] Failed to fetch vms
info for host 'lv426' - skipping VMs monitoring.
All of my network interfaces do come up OK, and iptables is turned off, so
it's not getting in the way.
I'm at a complete loss right now as to what to look at.
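Since vdsm.log is empty and both mom and the engine get "Connection refused",
the first thing worth confirming is whether vdsmd is actually running and
listening on its usual port 54321 (the same port visible in the engine logs
elsewhere in this digest). A quick, hypothetical sanity-check sketch, not an
official oVirt tool:

#!/usr/bin/env python
# Check the services the engine and mom depend on, then try the VDSM port.
import socket
import subprocess

for unit in ("vdsmd", "supervdsmd"):
    rc = subprocess.call(["systemctl", "is-active", "--quiet", unit])
    print("%-12s %s" % (unit, "active" if rc == 0 else "NOT active"))

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(3)
try:
    s.connect(("127.0.0.1", 54321))
    print("something is listening on 54321")
except socket.error as err:
    print("nothing listening on 54321: %s" % err)
finally:
    s.close()

If vdsmd is not active, "journalctl -u vdsmd" and /var/log/vdsm/supervdsm.log
are the next places to look, given that vdsm.log itself is empty.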
6 years, 11 months
Ovirt 4.2 - Help adding VM to numa node via python SDK
by Don Dupuis
I am able to create a VM with a NIC and disks using the Python SDK, but I'm
having trouble understanding how to pin a virtual NUMA node of the VM to a
physical NUMA node via the Python SDK. Any help in this area would be
greatly appreciated.
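For reference, an untested sketch of what this might look like with
ovirt-engine-sdk-python 4.2; the URL, credentials, VM name, memory size and
node indexes are all placeholders, and the VM normally has to be down (and
pinned to a host) before NUMA pinning applies:

#!/usr/bin/env python
# Untested sketch: add a virtual NUMA node to a VM and pin it to host NUMA
# node 0. All names and sizes below are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]              # placeholder VM
vm_service = vms_service.vm_service(vm.id)

# Each virtual NUMA node gets an index, memory (MiB), a set of virtual
# cores, and a pin to a host NUMA node index.
numa_nodes_service = vm_service.numa_nodes_service()
numa_nodes_service.add(
    types.VirtualNumaNode(
        index=0,
        memory=4096,
        cpu=types.Cpu(cores=[types.Core(index=0), types.Core(index=1)]),
        numa_node_pins=[types.NumaNodePin(index=0, pinned=True)],
    )
)

connection.close()

You may also need to set the VM's NUMA tune mode and its host placement
policy for the pinning to take effect; the SDK's VirtualNumaNode and
NumaNodePin types document the available fields.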
Thanks
Don
6 years, 11 months
4.2 Cloud-init & VM Portal question
by Vrgotic, Marko
Dear oVirt,
I have created a template which includes cloud-init with user / timezone /
ssh key / network defined. The intention is to allow regular users and the
VM Portal to create VMs using this template.
The question I have is: if possible, how can I arrange that the VM name I
fill in is passed to cloud-init as the VM hostname (as is the default when
creating a VM from the Admin Portal)? Is it even possible, and if so, could
you provide some guidance?
--
Met vriendelijke groet / Best regards,
Marko Vrgotic
System Engineer/Customer Care
ActiveVideo
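For reference (this does not answer whether the VM Portal can do it by
itself), this is roughly how the hostname reaches cloud-init when a VM is
started through the Python SDK, reusing the VM's own name; the URL,
credentials and VM name below are placeholders:

#!/usr/bin/env python
# Sketch: start a VM once with cloud-init, passing the VM's own name as the
# cloud-init hostname. Placeholders used for URL, credentials and VM name.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]              # placeholder VM

vms_service.vm_service(vm.id).start(
    use_cloud_init=True,
    vm=types.Vm(
        initialization=types.Initialization(
            host_name=vm.name,                            # reuse the VM name
        ),
    ),
)

connection.close()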
6 years, 11 months
what do the metadata params "LEGALITY", "VOLTYPE" and "GEN" mean?
by pengyixiang
hello, everyone!
There's a storage metadata format listed in [1]. What do the "FAKE",
"INTERNAL|SHARED|LEAF" and "GEN" values mean in oVirt? How are they used?
[1]
# Here is the worst case metadata format:
#
# CTIME=1440935038                            # int(time.time())
# DESCRIPTION=                                # text|JSON
# DISKTYPE=2                                  # enum
# DOMAIN=75f8a1bb-4504-4314-91ca-d9365a30692b # uuid
# FORMAT=COW                                  # RAW|COW
# IMAGE=75f8a1bb-4504-4314-91ca-d9365a30692b  # uuid
# LEGALITY=ILLEGAL                            # ILLEGAL|LEGAL|FAKE
# MTIME=0                                     # always 0
# POOL_UUID=                                  # always empty
# PUUID=75f8a1bb-4504-4314-91ca-d9365a30692b  # uuid
# SIZE=2147483648                             # size in blocks
# TYPE=PREALLOCATED                           # PREALLOCATED|UNKNOWN|SPARSE
# VOLTYPE=INTERNAL                            # INTERNAL|SHARED|LEAF
# GEN=999                                     # int
# EOF
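Not an answer to the semantics question, but for anyone reading along, the
block in [1] is a flat KEY=VALUE listing terminated by EOF, so it parses
into a plain dict with a few lines; the sample values below are copied from
[1]:

#!/usr/bin/env python
# Tiny sketch: parse the KEY=VALUE metadata block shown in [1] into a dict.
SAMPLE = """\
CTIME=1440935038
DOMAIN=75f8a1bb-4504-4314-91ca-d9365a30692b
FORMAT=COW
LEGALITY=ILLEGAL
VOLTYPE=INTERNAL
GEN=999
EOF
"""

def parse_volume_metadata(text):
    md = {}
    for line in text.splitlines():
        if line.strip() == "EOF":      # EOF terminates the block
            break
        if "=" in line:
            key, _, value = line.partition("=")
            md[key.strip()] = value.strip()
    return md

print(parse_volume_metadata(SAMPLE))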
6 years, 11 months