oVirt and Gluster Hyperconverged - VM's Bad volume specification
by thomas.rockey@datasphere.com
I am new to this forum, so I apologize beforehand if I don’t present the right content you are looking for or miss content you need.
Background:
By no means am I an expert with oVirt and GlusterFS. That said, I have been using, managing, and building out oVirt (single-host) and oVirt with Gluster Hyperconverged environments for 5 years or more.
I started building out oVirt environments with oVirt Engine version 3.6.7.5-1.el6 and earlier, and now I’m using the latest oVirt with Gluster Hyperconverged.
Current hardware and software layout:
For the last 8 months I have been using an oVirt with Gluster Hyperconverged setup to host about 100 VMs in total.
My hardware layout in one environment is 5 Dell R410s: 3 of them are configured with Gluster Hyperconverged and the other 2 are just added hosts. Below is a detailed list.
Manufacturer: Dell Inc. PowerEdge R410
CPU Model Name: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
CPU Cores per Socket: 4
CPU Type: Intel Westmere IBRS SSBD Family
Dell PERC H700
4 SAS Seagate 4 TB drives 7.2k
2 one gig links – NIC 1 for frontend and NIC 2 for gluster backend
My software layout is:
OS Version: RHEL - 7 - 7.1908.0.el7.centos
OS Description: CentOS Linux 7 (Core)
Kernel Version: 3.10.0 - 1062.9.1.el7.x86_64
KVM Version: 2.12.0 - 33.1.el7_7.4
LIBVIRT Version: libvirt-4.5.0-23.el7_7.3
VDSM Version: vdsm-4.30.38-1.el7
SPICE Version: 0.14.0 - 7.el7
GlusterFS Version: glusterfs-6.6-1.el7
CEPH Version: librbd1-10.2.5-4.el7
Open vSwitch Version: openvswitch-2.11.0-4.el7
Kernel Features: PTI: 1, IBRS: 0, RETP: 1, SSBD: 3
VNC Encryption: Disabled
My network layout is:
3 HP 3800-48G-4SFP+ Switch (J9576A) running FULL MESH
Issue/timeline:
• All 3 of the HP 3800 switches were rebooted at the same time and were down for 5 to 10 seconds before they came back up (meaning pingable and responsive).
• A little more than 85% (36 or so) of the VMs I had running went into a paused state due to an unknown storage error.
• The gluster volume heal count went all the way up to 2300 on vmstore (the OS data location).
• After the heal completed on vmstore (it took about an hour), 85% of the VMs failed to launch with an error (see below).
VM broadsort is down with error. Exit message: Bad volume specification {'protocol': 'gluster', 'address': {'function': '0x0', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x06'}, 'serial': 'b1bf3f56-a453-4383-a350-288bee06445b', 'index': 0, 'iface': 'virtio', 'apparentsize': '274877906944', 'specParams': {}, 'cache': 'none', 'imageID': 'b1bf3f56-a453-4383-a350-288bee06445b', 'truesize': '106767498240', 'type': 'disk', 'domainID': 'a7119613-a5ba-4a97-802b-0a985c647381', 'reqsize': '0', 'format': 'raw', 'poolID': '699fd2d6-c461-11e9-8b83-00163e18a045', 'device': 'disk', 'path': 'vmstore/a7119613-a5ba-4a97-802b-0a985c647381/images/b1bf3f56-a453-4383-a350-288bee06445b/25b0ab77-8f4c-42a1-9416-27db4cd25b39', 'propagateErrors': 'off', 'name': 'vda', 'bootOrder': '1', 'volumeID': '25b0ab77-8f4c-42a1-9416-27db4cd25b39', 'diskType': 'network', 'alias': 'ua-b1bf3f56-a453-4383-a350-288bee06445b', 'hosts': [{'name': 'glust01.mydomain.local', 'port': '0'}], 'discard': False}.
Every one of the VMs had this same error, and I had to find backups and old images to bring them back online. I deleted some of the corrupted VMs that I had current images of in order to get them back up.
You shouldn’t have to be afraid to reboot 1, 2, 3, or even all of your switches at once because of human error, a power outage, or a simple update. Having to worry about VMs getting corrupted when that happens concerns me greatly, and it makes me think I didn’t set up oVirt with Gluster Hyperconverged correctly. Have I missed something in the documentation or network layout/setup that would prevent this from happening again? I have searched the web for a few days now trying to find threads related to my situation, with no luck.
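For what it’s worth, heal progress like that described above can be watched with the standard Gluster CLI (volume name `vmstore` taken from the post; these are stock gluster commands, nothing oVirt-specific):

```shell
# Summary of pending heal entries per brick on the vmstore volume
gluster volume heal vmstore info summary

# List any entries in split-brain that would need manual resolution
gluster volume heal vmstore info split-brain
```

Both commands require a running glusterd and must be run on one of the Gluster nodes.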
I want to thank you for your time; it is greatly appreciated!
4 years, 11 months
Failed to add host using ansible runner
by Eyal Shenitzky
Hi,
I am failing to add a new host to my environment; it fails with the following message:
2019-12-25 10:57:17,587+02 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-63)
[41ec72c1-88e2-402b-8bb9-f38c678d0bf0] EVENT_ID: VDS_INSTALL_FAILED(505),
Host 10.35.0.158 installation failed. Failed to execute Ansible
host-deploy role:
null. Please check logs for more details:
/home/engine/ovirt-engine/var/log/ovirt-engine/host-deploy/ovirt-host-deploy-ansible-20191225105714-10.35.0.158-41ec72c1-88e2-402b-8bb9-f38c678d0bf0.log.
The host is Fedora-30
Ansible version is - 2.9.1
Ansible runner version - 1.3.4
/etc/ansible-runner-service/config.yaml:
---
version: 1
target_user: root
playbooks_root_dir:
'/home/engine/ovirt-engine/share/ovirt-engine/ansible-runner-service-project'
ssh_private_key:
'/home/engine/ovirt-engine/etc/pki/ovirt-engine/keys/engine_id_rsa'
port: 50001
target_user: root
There are no logs at all at the specified location in the error message.
Did someone encounter that issue?
Thanks,
--
Regards,
Eyal Shenitzky
4 years, 11 months
New dependency for development environment
by Ondra Machacek
Hello,
we are going to merge a series of patches to the master branch, which
integrates ansible-runner with oVirt engine. Once the patches are
merged, you will need to install a new package called
ansible-runner-service-dev and follow the instructions below so your
dev-env keeps working smoothly (all relevant info will also be in
README.adoc):
1) sudo dnf update ovirt-release-master
2) sudo dnf install -y ansible-runner-service-dev
3) Edit `/etc/ansible-runner-service/config.yaml` file as follows:
---
playbooks_root_dir:
'$PREFIX/share/ovirt-engine/ansible-runner-service-project'
ssh_private_key: '$PREFIX/etc/pki/ovirt-engine/keys/engine_id_rsa'
port: 50001
target_user: root
Where `$PREFIX` is the prefix of your development environment,
which you specified during the compilation of the engine.
4) Restart and enable ansible-runner-service:
# systemctl restart ansible-runner-service
# systemctl enable ansible-runner-service
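Put together, and assuming a dev-env prefix of `$HOME/ovirt-engine` (substitute whatever `--prefix` you built the engine with), the whole sequence might look like:

```shell
PREFIX="$HOME/ovirt-engine"   # assumed prefix; use your own build prefix

sudo dnf update ovirt-release-master
sudo dnf install -y ansible-runner-service-dev

# Write the config; the unquoted heredoc expands $PREFIX into the file
sudo tee /etc/ansible-runner-service/config.yaml > /dev/null <<EOF
---
playbooks_root_dir: '$PREFIX/share/ovirt-engine/ansible-runner-service-project'
ssh_private_key: '$PREFIX/etc/pki/ovirt-engine/keys/engine_id_rsa'
port: 50001
target_user: root
EOF

sudo systemctl restart ansible-runner-service
sudo systemctl enable ansible-runner-service
```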
That's it, your dev-env should start using the ansible-runner-service
for host-deployment etc.
Please note that only Fedora 30/31 and CentOS 7 are packaged and
natively supported!
Thanks,
Ondra
4 years, 11 months
OST is failing - Last successful run was Dec-13-2019
by Amit Bawer
It seems we have an NFS permissions issue for el8 vdsm in some of the runs.
Example from
https://jenkins.ovirt.org/view/Amit/job/ovirt-system-tests_manual/6302/ar...
:
2020-01-03 12:07:34,169-0500 INFO (MainThread) [vds] (PID: 1264) I am the
actual vdsm 4.40.0.1458.git1fca84350 lago-basic-suite-master-host-1
(4.18.0-80.11.2.el8_0.x86_64) (vdsmd:152)...
2020-01-03 12:50:29,662-0500 ERROR (check/loop) [storage.Monitor] Error
checking path /rhev/data-center/mnt/192.168.200.4:_exports_nfs_exported/b92b26cf-fac4-4ccf-ba31-f6fb4184e302/dom_md/metadata
(monitor:501)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/monitor.py", line
499, in _pathChecked
delay = result.delay()
File "/usr/lib/python3.6/site-packages/vdsm/storage/check.py", line 391,
in delay
raise exception.MiscFileReadException(self.path, self.rc, self.err)
vdsm.storage.exception.MiscFileReadException: Internal file read failure:
('/rhev/data-center/mnt/192.168.200.4:_exports_nfs_exported/b92b26cf-fac4-4ccf-ba31-f6fb4184e302/dom_md/metadata',
1, bytearray(b"/usr/bin/dd: failed to open
\'/rhev/data-center/mnt/192.168.200.4:_exports_nfs_exported/b92b26cf-fac4-4ccf-ba31-f6fb4184e302/dom_md/metadata\':
Operation not permitted\n"))
2020-01-03 12:50:30,112-0500 DEBUG (jsonrpc/7) [jsonrpc.JsonRpcServer]
Calling 'StoragePool.disconnect' in bridge with {'storagepoolID':
'c90b137f-6e1f-4b9a-9612-da58910a2439', 'hostID': 2, 'scsiKey':
'c90b137f-6e1f-4b9a-9612-da58910a2439'} (__init__:329)
2020-01-03 12:50:30,114-0500 INFO (jsonrpc/7) [vdsm.api] START
disconnectStoragePool(spUUID='c90b137f-6e1f-4b9a-9612-da58910a2439',
hostID=2, remove=False, options=None) from=::ffff:192.168.201.4,38786,
flow_id=8d05a1, task_id=95573498-d1c7-41ad-ad33-28f2192b2b60 (api:48)
We probably need to set the NFS server export options as in
https://bugzilla.redhat.com/show_bug.cgi?id=1776843#c7
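The usual requirement is that the export be accessible to vdsm’s uid/gid (36, i.e. vdsm:kvm). The options below are a sketch along those lines, not a copy of the bug comment; the export path is taken from the mount point in the log:

```shell
# Hypothetical /etc/exports entry for the OST NFS server; squashes all
# access to uid/gid 36 (vdsm:kvm) so vdsm can read dom_md/metadata.
echo '/exports/nfs/exported *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)' | sudo tee -a /etc/exports
sudo exportfs -ra   # re-export with the new options
```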
4 years, 11 months
Unexpected exception when trying to add a storage domain
by Dana Elfassy
Hi,
When trying to add a storage domain to a 4.4 host I'm getting this error
message:
Error while executing action New NFS Storage Domain: Unexpected exception
The errors from vdsm.log:
2020-01-02 09:38:33,578-0500 ERROR (jsonrpc/0) [storage.initSANLock] Cannot
initialize SANLock for domain 6ca1e203-5595-47e5-94b8-82a7e69d99a9
(clusterlock:259)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line
250, in initSANLock
lockspace_name, idsPath, align=alignment, sector=block_size)
sanlock.SanlockException: (-202, 'Sanlock lockspace write failure', 'IO
timeout')
2020-01-02 09:38:33,579-0500 INFO (jsonrpc/0) [vdsm.api] FINISH
createStorageDomain error=Could not initialize cluster lock: ()
from=::ffff:192.168.100.1,36452, flow_id=17dc614
e, task_id=05c2107a-4d59-48d0-a2f7-0938f051c9ab (api:52)
2020-01-02 09:38:33,582-0500 ERROR (jsonrpc/0) [storage.TaskManager.Task]
(Task='05c2107a-4d59-48d0-a2f7-0938f051c9ab') Unexpected error (task:874)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line
250, in initSANLock
lockspace_name, idsPath, align=alignment, sector=block_size)
sanlock.SanlockException: (-202, 'Sanlock lockspace write failure', 'IO
timeout')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 881,
in _run
return fn(*args, **kargs)
File "<decorator-gen-121>", line 2, in createStorageDomain
File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 50, in
method
ret = func(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 2622,
in createStorageDomain
max_hosts=max_hosts)
File "/usr/lib/python3.6/site-packages/vdsm/storage/nfsSD.py", line 120,
in create
fsd.initSPMlease()
File "/usr/lib/python3.6/site-packages/vdsm/storage/sd.py", line 974, in
initSPMlease
return self._manifest.initDomainLock()
File "/usr/lib/python3.6/site-packages/vdsm/storage/sd.py", line 620, in
initDomainLock
self._domainLock.initLock(self.getDomainLease())
File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line
308, in initLock
block_size=self._block_size)
File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line
260, in initSANLock
raise se.ClusterLockInitError()
vdsm.storage.exception.ClusterLockInitError: Could not initialize cluster
lock: ()
2020-01-02 09:38:33,583-0500 INFO (jsonrpc/0) [storage.TaskManager.Task]
(Task='05c2107a-4d59-48d0-a2f7-0938f051c9ab') aborting: Task is aborted:
'value=Could not initialize cluster lock: () abortedcode=701' (task:1184)
2020-01-02 09:38:33,584-0500 ERROR (jsonrpc/0) [storage.Dispatcher] FINISH
createStorageDomain error=Could not initialize cluster lock: ()
(dispatcher:83)
2020-01-02 09:38:33,584-0500 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call StorageDomain.create failed (error 701) in 18.70 seconds (__init__:312)
2020-01-02 09:38:33,730-0500 INFO (jsonrpc/2) [vdsm.api] START
disconnectStorageServer(domType=1,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'password':
'********', 'protocol_version': 'auto', 'port': '', 'iqn': '',
'connection': 'vserver-spider.eng.lab.tlv.redhat.com:/pub/delfassy/nfs1',
'ipv6_enabled': 'false', 'id': 'bf075967-060b-4e60-9b0b-bb170fe073f9',
'user': '', 'tpgt': '1'}], options=None) from=::ffff:192.168.100.1,36452,
flow_id=f4f79738-b361-4688-95c5-3454f52b505d,
task_id=dd7266ed-69ec-4e5b-9216-850d9db8ea3b (api:48)
Can someone help me with it?
Thanks,
Dana
4 years, 11 months
Who's the owner of "vdsm_hooks/checkimages/before_vm_start.py"?
by Pavel Bar
Hi,
Can someone please point me to a relevant person?
It looks like there is a potential issue with "*getImageSize()*" function
there:
*'image_bytes' might be referenced before assignment*.
There is also a suspicion that this code is not used at all, so instead of
fixing it, it might be a good idea to just delete it.
See the code below:
def getImageSize(disk_image, driver_type):
    '''
    Obtain qcow2 image size in GiBs
    '''
    if driver_type == 'block':
        dev_buffer = ' ' * 8
        with open(disk_image) as device:
            dev_buffer = fcntl.ioctl(device.fileno(), BLKGETSIZE64, dev_buffer)
        image_bytes = struct.unpack(FORMAT, dev_buffer)[0]
    elif driver_type == 'file':
        image_bytes = os.stat(disk_image).st_size
    return float(image_bytes / GIB)
Thank you in advance!
Pavel
4 years, 11 months
OST Fails for missing glusterfs mirrors at host-deploy
by Amit Bawer
Snippet From:
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6293/console
23:31:25 + cd
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/deployment-basic-suite-master
23:31:25 + lago ovirt deploy
23:31:26 @ Deploy oVirt environment:
23:31:26 # Deploy environment:
23:31:26 * [Thread-2] Deploy VM lago-basic-suite-master-host-0:
23:31:26 * [Thread-3] Deploy VM lago-basic-suite-master-host-1:
23:31:26 * [Thread-4] Deploy VM lago-basic-suite-master-engine:
23:32:15 * [Thread-3] Deploy VM lago-basic-suite-master-host-1: Success
(in 0:00:49)
23:32:39 STDERR
23:32:39 + yum -y install ovirt-host
23:32:39 Error: Error downloading packages:
23:32:39 Cannot download glusterfs-6.6-1.el8.x86_64.rpm: All mirrors were
tried
23:32:39
23:32:39 - STDERR
23:32:39 + yum -y install ovirt-host
23:32:39 Error: Error downloading packages:
23:32:39 Cannot download glusterfs-6.6-1.el8.x86_64.rpm: All mirrors were
tried
23:32:39
23:32:39 * [Thread-2] Deploy VM lago-basic-suite-master-host-0: ERROR
(in 0:01:13)
23:38:05 * [Thread-4] Deploy VM lago-basic-suite-master-engine: ERROR
(in 0:06:39)
23:38:05 # Deploy environment: ERROR (in 0:06:39)
23:38:06 @ Deploy oVirt environment: ERROR (in 0:06:39)
23:38:06 Error occured, aborting
23:38:06 Traceback (most recent call last):
23:38:06 File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line
383, in do_run
23:38:06 self.cli_plugins[args.ovirtverb].do_run(args)
23:38:06 File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py",
line 184, in do_run
23:38:06 self._do_run(**vars(args))
23:38:06 File "/usr/lib/python2.7/site-packages/lago/utils.py", line 573,
in wrapper
23:38:06 return func(*args, **kwargs)
23:38:06 File "/usr/lib/python2.7/site-packages/lago/utils.py", line 584,
in wrapper
23:38:06 return func(*args, prefix=prefix, **kwargs)
23:38:06 File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line
181, in do_deploy
23:38:06 prefix.deploy()
23:38:06 File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line
636, in wrapper
23:38:06 return func(*args, **kwargs)
23:38:06 File "/usr/lib/python2.7/site-packages/ovirtlago/reposetup.py",
line 127, in wrapper
23:38:06 return func(*args, **kwargs)
23:38:06 File "/usr/lib/python2.7/site-packages/ovirtlago/prefix.py",
line 284, in deploy
23:38:06 return super(OvirtPrefix, self).deploy()
23:38:06 File "/usr/lib/python2.7/site-packages/lago/sdk_utils.py", line
50, in wrapped
23:38:06 return func(*args, **kwargs)
23:38:06 File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line
636, in wrapper
23:38:06 return func(*args, **kwargs)
23:38:06 File "/usr/lib/python2.7/site-packages/lago/prefix.py", line
1671, in deploy
23:38:06 self.virt_env.get_vms().values()
23:38:06 File "/usr/lib/python2.7/site-packages/lago/utils.py", line 104,
in invoke_in_parallel
23:38:06 return vt.join_all()
23:38:06 File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58,
in _ret_via_queue
23:38:06 queue.put({'return': func()})
23:38:06 File "/usr/lib/python2.7/site-packages/lago/prefix.py", line
1662, in _deploy_host
23:38:06 host.name(),
23:38:06 LagoDeployError:
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/deployment-basic-suite-master/default/scripts/_home_jenkins_agent_workspace_ovirt-system-tests_manual_ovirt-system-tests_basic-suite-master_deploy-scripts_setup_1st_host_el7.sh
failed with status 1 on lago-basic-suite-master-host-0
23:38:06 + res=1
23:38:06 + cd -
23:38:06
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests
23:38:06 + return 1
23:38:06 + env_collect
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/test_logs/basic-suite-master/post-000_deploy
23:38:06 + local
tests_out_dir=/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/test_logs/basic-suite-master/post-000_deploy
23:38:06 + [[ -e
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/test_logs/basic-suite-master
]]
23:38:06 + mkdir -p
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/test_logs/basic-suite-master
23:38:06 + cd
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/deployment-basic-suite-master/current
23:38:06 + lago collect --output
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/test_logs/basic-suite-master/post-000_deploy
23:38:08 @ Collect artifacts:
23:38:08 # [Thread-1] lago-basic-suite-master-host-0:
23:38:08 # [Thread-2] lago-basic-suite-master-host-1:
23:38:08 # [Thread-3] lago-basic-suite-master-engine:
23:38:10 # [Thread-1] lago-basic-suite-master-host-0: Success (in 0:00:02)
23:38:10 # [Thread-2] lago-basic-suite-master-host-1: Success (in 0:00:02)
23:38:16 # [Thread-3] lago-basic-suite-master-engine: Success (in 0:00:07)
23:38:16 @ Collect artifacts: Success (in 0:00:07)
23:38:16 + cp -a logs
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/test_logs/basic-suite-master/post-000_deploy/lago_logs
23:38:16 + cd -
23:38:16
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests
23:38:16 + echo '@@@ ERROR: Failed in deploy stage'
23:38:16 @@@ ERROR: Failed in deploy stage
4 years, 11 months