[ANN] oVirt 4.3.0 Second Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the Second
Release Candidate of oVirt 4.3.0, as of January 16th, 2019.
This is pre-release software. It should not be used in production.
Please take a look at our community page[1] to learn how to ask questions
and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This update is the second release candidate of the 4.3.0 version.
This release brings more than 130 enhancements and more than 440 bug fixes
on top of the oVirt 4.2 series.
What's new in oVirt 4.3.0?
* Q35 chipset, with support for booting via UEFI and Secure Boot
* Skylake-server and AMD EPYC support
* New SMBus driver in the Windows guest tools
* Improved support for v2v
* OVA export / import of Templates
* Full support for live migration of High Performance VMs
* Microsoft Failover clustering support (SCSI Persistent Reservation) for
Direct LUN disks
* Hundreds of bug fixes on top of the oVirt 4.2 series
* New VM portal details page (see a preview here:
https://imgur.com/a/ExINpci)
* New Cluster upgrade UI
* OVN security groups
* IPv6 (static host addresses)
* Support for Neutron from RDO OpenStack 13 as an external network provider
* Support for using Skydive from RDO OpenStack 14 as a tech preview
* Support for 3.6 and 4.0 data centers, clusters and hosts has been removed
* Now using PostgreSQL 10
* New metrics support using rsyslog instead of fluentd
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available for both CentOS 7 and Fedora 28
(tech preview).
- oVirt Node NG is already available for both CentOS 7 and Fedora 28 (tech
preview).
Additional Resources:
* Read more about the oVirt 4.3.0 release highlights:
http://www.ovirt.org/release/4.3.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.3.0/
[4] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com
<https://red.ht/sig>
Disk full
by suporte@logicworks.pt
Hi,
I have an all-in-one installation with two Gluster volumes.
The disk of one VM filled up the brick, which is a partition; that partition now has 0% free disk space.
I moved the disk of that VM to the other Gluster volume, and the VM is working with its disk on the other volume.
But moving the disk didn't delete it from the original brick, and the engine keeps complaining that there is no more disk space on that volume.
What can I do?
Is there a way to prevent this in the future?
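As one preventive measure, here is a minimal cron-friendly sketch (Python 3.3+ for shutil.disk_usage; the brick paths and the 90% threshold are placeholders, not oVirt defaults) that warns before a brick's partition fills up:

#!/usr/bin/env python3
# Minimal sketch: warn when a Gluster brick's partition runs low on space.
# Brick paths and the threshold below are placeholders for your layout.
import sys
from shutil import disk_usage

BRICKS = ["/gluster_bricks/data1", "/gluster_bricks/data2"]
THRESHOLD = 0.90  # warn above 90% used

def main():
    exit_code = 0
    for brick in BRICKS:
        usage = disk_usage(brick)
        used_fraction = usage.used / usage.total
        if used_fraction > THRESHOLD:
            print("WARNING: %s is %.0f%% full" % (brick, used_fraction * 100))
            exit_code = 1
    return exit_code

if __name__ == "__main__":
    sys.exit(main())

Run from cron and wire the exit code or output into whatever alerting you already have; it only reads the filesystem, so it is safe to run on a live brick.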
Many thanks
José
--
Jose Ferradeira
http://www.logicworks.pt
[Call for Testing] oVirt 4.3.0
by Sandro Bonazzola
Hi,
we are planning to release a 4.3.0 RC2 tomorrow morning, January 16th 2019.
We have a scheduled final release for oVirt 4.3.0 on January 29th: this is
the time when testing is most effective for ensuring the release will be as
stable as possible. Please join us in testing the RC2 release this week
and reporting issues to
https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
Please remember this is still pre-release material; we recommend not
installing it in production environments yet.
Thanks,
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com
<https://red.ht/sig>
Ansible and SHE deployment
by Vrgotic, Marko
Dear oVirt team,
I would like to ask for your help with some general guidelines, and the do's & don'ts, for deploying a complete oVirt environment using Ansible.
The first production deployment I made was done manually:
12 hypervisors, all the exact same HW brand and specs
3 of the 12 used as the HA environment for the SHE
oVirt version 4.2.1 (now we are at 4.2.7)
4 Gluster nodes, managed externally from oVirt
This is the environment I would like to convert into one deployable by Ansible.
At the moment I am working on a second production environment, for the Eng/Dev department, and I want to go all the way with Ansible.
I am aware of your playbooks, and their location on GitHub, but what I want to ask for is advice on how to approach using them.
The second environment will have:
7 hypervisors with different specs, all provisioned using Foreman
oVirt version: the latest 4.2.x at that point
3 of the 7 providing HA for the SHE engine
The storage used is to be NetApp.
Please let me know how to proceed with modifying the Ansible playbooks, what the recommended execution order is, and what to look out for. If you need additional info, I will be happy to provide it.
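For orientation, the host-deploy work those playbooks wrap boils down to REST API calls that the Python SDK (ovirtsdk4) also exposes. A minimal sketch, following the SDK's documented add-host pattern, of registering one Foreman-provisioned hypervisor; all names, URLs, and credentials are placeholders:

import time
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details for the engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

hosts_service = connection.system_service().hosts_service()

# Register a freshly provisioned hypervisor; the engine then runs
# host-deploy against it over SSH.
host = hosts_service.add(
    types.Host(
        name='hyp01',
        description='Provisioned by Foreman',
        address='hyp01.example.com',
        root_password='password',
        cluster=types.Cluster(name='Default'),
    ),
)

# Poll until the engine finishes installing and activating the host.
host_service = hosts_service.host_service(host.id)
while True:
    time.sleep(5)
    host = host_service.get()
    if host.status == types.HostStatus.UP:
        break

connection.close()

Knowing which API step each playbook task corresponds to makes it easier to decide what to keep, template, or reorder when adapting the upstream playbooks.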
Kind regards,
Marko Vrgotic
Import existing storage domain into new server
by michael@wanderingmad.com
I had an original ovirt-engine installation that had stuck tasks and storage issues. I decided to deploy a new engine appliance and import the existing servers and VMs into the new engine. This actually worked perfectly until I went to import the storage domain into the new engine, which failed. I was unable to move the domain to maintenance mode, or do anything else with it, from the old engine. Is there any way to force the import?
Error messages / logs:
Failed to attach Storage Domains to Data Center Default. (User: admin@internal-authz) (in GUI log)
engine.log:
2019-01-13 16:31:27,973-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [176267b0] START, GlusterVolumesListVDSCommand(HostName = prometheus.redforest.wanderingmad.com, GlusterVolumesListVDSParameters:{hostId='b5b9e763-704c-47ee-817b-19d081ad9aab'}), log id: 2e7162ed
2019-01-13 16:31:28,204-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [176267b0] Could not associate brick 'Daedalus.redforest.wanderingmad.com:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,205-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [176267b0] Could not associate brick '10.100.50.14:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,206-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [176267b0] Could not associate brick '10.100.50.12:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,208-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [176267b0] Could not associate brick 'Daedalus.redforest.wanderingmad.com:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,209-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [176267b0] Could not associate brick '10.100.50.14:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,221-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler8) [176267b0] Could not associate brick '10.100.50.12:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:28,232-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler8) [176267b0] FINISH, GlusterVolumesListVDSCommand, return: {7add3797-9b03-433c-aa1a-90ad7c5ac280=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4205765f, fe0c3620-6a33-48fa-98b7-847ad76edaba=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@40fccb}, log id: 2e7162ed
2019-01-13 16:31:37,612-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand] (DefaultQuartzScheduler10) [2876ae97] START, GlusterTasksListVDSCommand(HostName = daedalus.redforest.wanderingmad.com, VdsIdVDSCommandParametersBase:{hostId='b79fe49c-761f-4710-adf4-8fa6d1143dae'}), log id: 30c3bd7c
2019-01-13 16:31:37,863-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand] (DefaultQuartzScheduler10) [2876ae97] FINISH, GlusterTasksListVDSCommand, return: [], log id: 30c3bd7c
2019-01-13 16:31:43,248-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler7) [70a256c0] START, GlusterServersListVDSCommand(HostName = prometheus.redforest.wanderingmad.com, VdsIdVDSCommandParametersBase:{hostId='b5b9e763-704c-47ee-817b-19d081ad9aab'}), log id: 7f890a8
2019-01-13 16:31:43,705-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler7) [70a256c0] FINISH, GlusterServersListVDSCommand, return: [10.100.50.12/24:CONNECTED, icarus.redforest.wanderingmad.com:CONNECTED, daedalus.redforest.wanderingmad.com:CONNECTED], log id: 7f890a8
2019-01-13 16:31:43,708-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler7) [70a256c0] START, GlusterVolumesListVDSCommand(HostName = prometheus.redforest.wanderingmad.com, GlusterVolumesListVDSParameters:{hostId='b5b9e763-704c-47ee-817b-19d081ad9aab'}), log id: 50f59aca
2019-01-13 16:31:43,943-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler7) [70a256c0] Could not associate brick 'Daedalus.redforest.wanderingmad.com:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:43,944-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler7) [70a256c0] Could not associate brick '10.100.50.14:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:43,945-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler7) [70a256c0] Could not associate brick '10.100.50.12:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:43,947-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler7) [70a256c0] Could not associate brick 'Daedalus.redforest.wanderingmad.com:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:43,948-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler7) [70a256c0] Could not associate brick '10.100.50.14:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:43,949-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler7) [70a256c0] Could not associate brick '10.100.50.12:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:43,949-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler7) [70a256c0] FINISH, GlusterVolumesListVDSCommand, return: {7add3797-9b03-433c-aa1a-90ad7c5ac280=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4205765f, fe0c3620-6a33-48fa-98b7-847ad76edaba=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@40fccb}, log id: 50f59aca
2019-01-13 16:31:58,987-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler4) [59b635f4] START, GlusterServersListVDSCommand(HostName = prometheus.redforest.wanderingmad.com, VdsIdVDSCommandParametersBase:{hostId='b5b9e763-704c-47ee-817b-19d081ad9aab'}), log id: 512ef5c0
2019-01-13 16:31:59,518-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler4) [59b635f4] FINISH, GlusterServersListVDSCommand, return: [10.100.50.12/24:CONNECTED, icarus.redforest.wanderingmad.com:CONNECTED, daedalus.redforest.wanderingmad.com:CONNECTED], log id: 512ef5c0
2019-01-13 16:31:59,521-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler4) [59b635f4] START, GlusterVolumesListVDSCommand(HostName = prometheus.redforest.wanderingmad.com, GlusterVolumesListVDSParameters:{hostId='b5b9e763-704c-47ee-817b-19d081ad9aab'}), log id: 2de38416
2019-01-13 16:31:59,773-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler4) [59b635f4] Could not associate brick 'Daedalus.redforest.wanderingmad.com:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:59,774-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler4) [59b635f4] Could not associate brick '10.100.50.14:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:59,775-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler4) [59b635f4] Could not associate brick '10.100.50.12:/gluster_bricks/nvme-storagetwo/nvme-storagetwo' of volume 'fe0c3620-6a33-48fa-98b7-847ad76edaba' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:59,776-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler4) [59b635f4] Could not associate brick 'Daedalus.redforest.wanderingmad.com:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:59,777-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler4) [59b635f4] Could not associate brick '10.100.50.14:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:59,778-05 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler4) [59b635f4] Could not associate brick '10.100.50.12:/gluster_bricks/ssd-storagetwo/ssd-storagetwo' of volume '7add3797-9b03-433c-aa1a-90ad7c5ac280' with correct network as no gluster network found in cluster '8e8cc8a7-9f45-4054-85e5-8014ec421b7f'
2019-01-13 16:31:59,778-05 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler4) [59b635f4] FINISH, GlusterVolumesListVDSCommand, return: {7add3797-9b03-433c-aa1a-90ad7c5ac280=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4205765f, fe0c3620-6a33-48fa-98b7-847ad76edaba=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@40fccb}, log id: 2de38416
And, of course, none of the Gluster hosts is set as the SPM.
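If the old engine cannot release the domain cleanly, one avenue is to drive the import from the new engine through the REST API. A hedged sketch with the Python SDK (ovirtsdk4) follows; the names, the Gluster address and volume path, and the exact behaviour of the import_ flag are assumptions to verify against your SDK and engine version:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details for the new engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

system = connection.system_service()
sds_service = system.storage_domains_service()

# Import the pre-existing domain instead of formatting a new one.
# import_=True maps to the REST API's <import> flag on StorageDomain;
# check your SDK version exposes it before relying on this.
sd = sds_service.add(
    types.StorageDomain(
        name='data_old',  # placeholder name for the imported domain
        type=types.StorageDomainType.DATA,
        import_=True,
        host=types.Host(name='prometheus'),  # any active host in the cluster
        storage=types.HostStorage(
            type=types.StorageType.GLUSTERFS,
            address='daedalus.redforest.wanderingmad.com',
            path='ssd-storagetwo',  # placeholder volume name
        ),
    ),
)

# Attaching the imported domain to a data center is what activates it;
# this is the step that failed in the GUI, so watch engine.log here.
dc = system.data_centers_service().list(search='name=Default')[0]
attached_sds = system.data_centers_service() \
    .data_center_service(dc.id) \
    .storage_domains_service()
attached_sds.add(types.StorageDomain(id=sd.id))

connection.close()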
unable to create templates or upload files
by michael@wanderingmad.com
oVirt stopped accepting new deployments and ISO uploads; the disks just stay locked. I checked the ovirt-engine log and see constant errors like the one below:
ERROR [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-2) [] OAuthException access_denied: Cannot authenticate user 'admin@N/A': No valid profile found in credentials..
I checked the OVN provider in the admin portal, and it is set correctly to "admin@internal".
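To double-check what the engine actually has on file versus what the admin portal shows, here is a minimal read-only sketch with the Python SDK (ovirtsdk4; connection details are placeholders) that prints each external network provider's name, URL, and username, which should expose a stale 'admin@N/A'-style credential:

import ovirtsdk4 as sdk

# Placeholder connection details for the engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# List configured external network providers and the usernames the
# engine presents to SSO when talking to them.
providers_service = connection.system_service().openstack_network_providers_service()
for provider in providers_service.list():
    print(provider.name, provider.url, provider.username)

connection.close()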
External private ova / image repository
by Leo David
Hello Everyone,
I am not sure what pieces would be needed to have an external repo
that I can manage and use at the client site for downloading customized
templates (i.e., something like how an external Docker registry works).
Any ideas on this?
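One low-effort starting point, as a minimal sketch rather than an oVirt feature: publish the customized OVA files from a plain HTTP directory that client sites can download from (Python 3.7+ for the directory argument; the path and port are placeholders, and a real deployment would put TLS and authentication in front):

#!/usr/bin/env python3
# Minimal sketch: serve a directory of OVA templates over HTTP so
# clients can download them. Not an oVirt feature; just a static repo.
import functools
from http.server import HTTPServer, SimpleHTTPRequestHandler

TEMPLATE_DIR = "/srv/templates"  # placeholder: where the .ova files live
PORT = 8080

handler = functools.partial(SimpleHTTPRequestHandler, directory=TEMPLATE_DIR)
HTTPServer(("0.0.0.0", PORT), handler).serve_forever()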
Thank you !
Have a nice day,
Leo
oVirt Node install - kickstart postintall
by jeanbaptiste@nfrance.com
Hello everybody,
For days I have been trying to install oVirt (via Foreman) in network mode (TFTP net install).
All is great, but I want to perform some actions in the post-install (%post) section.
Some actions are related to /etc/sysconfig/network-interfaces, and another action is related to root's authorized_keys.
When I try to add a pub-key to a newly created authorized_keys for root, it works (verified inside Anaconda).
But after the installation and the Anaconda reboot, I noticed that all my %post actions under /root are discarded. After the reboot there is nothing in /root/.ssh, for example.
Whereas in /etc, all my modifications are preserved.
I thought it was an SELinux-related issue, but it is not related to SELinux.
I am missing something. Can you please help me understand how the oVirt Node install / partitioning works?
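For reference, the step in question can be written as a kickstart %post using the --interpreter option so the body stays in Python; the key and paths below are placeholders. A possible explanation, though this is my assumption, is oVirt Node's imgbased layering, which persists only selected paths (such as /etc and /var) across the first boot; that would match /etc surviving while /root/.ssh does not:

%post --interpreter=/usr/bin/python
# Hypothetical sketch of the failing step; on oVirt Node, whether /root
# changes persist past the first boot is exactly the open question here.
import os

PUBKEY = "ssh-rsa AAAA... admin@example"  # placeholder public key

ssh_dir = "/root/.ssh"
if not os.path.isdir(ssh_dir):
    os.makedirs(ssh_dir)  # %post runs chrooted into the installed system
os.chmod(ssh_dir, 0o700)

auth_keys = os.path.join(ssh_dir, "authorized_keys")
with open(auth_keys, "a") as f:
    f.write(PUBKEY + "\n")
os.chmod(auth_keys, 0o600)
%end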
Thanks for all.
Ovirt 4.3 / New Install / NFS (broken)
by Devin Acosta
I installed the latest 4.3 release candidate and tried to add an NFS mount
to the Data Center, but it errors in the GUI with “Error while executing
action New NFS Storage Domain: Invalid parameter”. In the vdsm.log I can
see it is passing “block_size=None”. It does this regardless of whether
NFS v3 or v4 is used.
InvalidParameterException: Invalid parameter: 'block_size=None'
2019-01-12 20:37:58,241-0700 INFO (jsonrpc/7) [vdsm.api] START createStorageDomain(storageType=1, sdUUID=u'b30c64c4-4b1f-4ebf-828b-e54c330ae84c', domainName=u'nfsdata', typeSpecificArg=u'192.168.19.155:/data/data', domClass=1, domVersion=u'4', block_size=None, max_hosts=2000, options=None) from=::ffff:192.168.19.178,51042, flow_id=67743df7, task_id=ad82f581-9638-48f1-bcd9-669b9809b34a (api:48)
2019-01-12 20:37:58,241-0700 INFO (jsonrpc/7) [vdsm.api] FINISH createStorageDomain error=Invalid parameter: 'block_size=None' from=::ffff:192.168.19.178,51042, flow_id=67743df7, task_id=ad82f581-9638-48f1-bcd9-669b9809b34a (api:52)
2019-01-12 20:37:58,241-0700 ERROR (jsonrpc/7) [storage.TaskManager.Task] (Task='ad82f581-9638-48f1-bcd9-669b9809b34a') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in createStorageDomain
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2583, in createStorageDomain
    alignment = clusterlock.alignment(block_size, max_hosts)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line 661, in alignment
    raise se.InvalidParameterException('block_size', block_size)
InvalidParameterException: Invalid parameter: 'block_size=None'
2019-01-12 20:37:58,242-0700 INFO (jsonrpc/7) [storage.TaskManager.Task] (Task='ad82f581-9638-48f1-bcd9-669b9809b34a') aborting: Task is aborted: u"Invalid parameter: 'block_size=None'" - code 100 (task:1181)
2019-01-12 20:37:58,242-0700 ERROR (jsonrpc/7) [storage.Dispatcher] FINISH createStorageDomain error=Invalid parameter: 'block_size=None' (dispatcher:81)
2019-01-12 20:37:58,242-0700 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call StorageDomain.create failed (error 1000) in 0.00 seconds (__init__:312)
2019-01-12 20:37:58,541-0700 INFO (jsonrpc/1) [vdsm.api] START disconnectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'tpgt': u'1', u'id': u'db7d16c8-7497-42db-8a75-81cb7f9d3350', u'connection': u'192.168.19.155:/data/data', u'iqn': u'', u'user': u'', u'ipv6_enabled': u'false', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=::ffff:192.168.19.178,51042, flow_id=7e4cb4fa-1437-4d5b-acb5-958838ecd54c, task_id=1d004ea2-ae84-4c95-8c70-29e205efd4b1 (api:48)
2019-01-12 20:37:58,542-0700 INFO (jsonrpc/1) [storage.Mount] unmounting /rhev/data-center/mnt/192.168.19.155:_data_data (mount:212)
2019-01-12 20:37:59,087-0700 INFO (jsonrpc/1) [vdsm.api] FINISH disconnectStorageServer return={'statuslist': [{'status': 0, 'id': u'db7d16c8-7497-42db-8a75-81cb7f9d3350'}]} from=::ffff:192.168.19.178,51042, flow_id=7e4cb4fa-1437-4d5b-acb5-958838ecd54c, task_id=1d004ea2-ae84-4c95-8c70-29e205efd4b1 (api:54)
2019-01-12 20:37:59,089-0700 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call StoragePool.disconnectStorageServer succeeded in 0.55 seconds (__init__:312)
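For context, the failure above is a parameter validation, not a storage error: clusterlock.alignment() rejects block_size=None before vdsm ever touches the NFS export, which matches it failing identically for NFS v3 and v4. A rough, hypothetical reconstruction of that kind of guard (the constants and class here are assumptions, not vdsm's actual code):

# Hypothetical sketch of the guard seen in the traceback; vdsm's real
# implementation differs in detail.
VALID_BLOCK_SIZES = (512, 4096)  # bytes; assumed valid sector sizes

class InvalidParameterException(Exception):
    def __init__(self, name, value):
        Exception.__init__(self, "Invalid parameter: '%s=%s'" % (name, value))

def alignment(block_size, max_hosts):
    # The engine sent block_size=None for this RC, so the guard fires
    # before the storage domain is created; nothing NFS-specific runs.
    if block_size not in VALID_BLOCK_SIZES:
        raise InvalidParameterException('block_size', block_size)
    return 1024 * 1024  # placeholder lease alignment in bytes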
Devin Acosta