Re: Need to enable STP on ovirt bridges
by Strahil
What is your bandwidth threshold for the network used for VM migration?
Can you set a 90 Mbit/s threshold (yes, less than 100 Mbit/s) and try to migrate a small (1 GB RAM) VM?
Do you see disconnects?
If not, raise the threshold a little and check again.
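For reference, that cluster-wide migration bandwidth limit can also be set programmatically with the Python SDK (python-ovirt-engine-sdk4, which is already installed on the hosts). A minimal, untested sketch; the engine URL, credentials, CA file and cluster name "Default" are placeholders:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Connect to the engine API (URL, credentials and CA file are placeholders).
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )

    clusters_service = connection.system_service().clusters_service()
    cluster = clusters_service.list(search='name=Default')[0]

    # Cap migration bandwidth at a custom value of 90 Mbps, as suggested above.
    clusters_service.cluster_service(cluster.id).update(
        types.Cluster(
            migration=types.MigrationOptions(
                bandwidth=types.MigrationBandwidth(
                    assignment_method=types.MigrationBandwidthAssignmentMethod.CUSTOM,
                    custom_value=90,
                ),
            ),
        ),
    )

    connection.close()

The same limit is exposed in the Administration Portal in the cluster's migration policy settings.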
Best Regards,
Strahil Nikolov
On Aug 23, 2019 23:19, "Curtis E. Combs Jr." <ej.albany(a)gmail.com> wrote:
>
> It took a while for my servers to come back on the network this time.
> I think it's due to ovirt continuing to try to migrate the VMs around
> like I requested. The 3 servers' names are "swm-01, swm-02 and
> swm-03". Eventually (about 2-3 minutes ago) they all came back online.
>
> So I disabled and stopped the lldpad service.
>
> Nope. Started some more migrations and swm-02 and swm-03 disappeared
> again. No ping, SSH hung, same as before - almost as soon as the
> migration started.
>
> If you all have any ideas about what switch-level setting might be enabled,
> let me know, because I'm stumped. I can add it to the ticket that's
> requesting the port configurations. I've already added the port
> numbers and switch name that I got from CDP.
>
> Thanks again, I really appreciate the help!
> cecjr
>
>
>
> On Fri, Aug 23, 2019 at 3:28 PM Dominik Holler <dholler(a)redhat.com> wrote:
> >
> >
> >
> > On Fri, Aug 23, 2019 at 9:19 PM Dominik Holler <dholler(a)redhat.com> wrote:
> >>
> >>
> >>
> >> On Fri, Aug 23, 2019 at 8:03 PM Curtis E. Combs Jr. <ej.albany(a)gmail.com> wrote:
> >>>
> >>> This little cluster isn't in production or anything like that yet.
> >>>
> >>> So, I went ahead and used your ethtool commands to disable pause
> >>> frames on both interfaces of each server. I then chose a few VMs to
> >>> migrate around at random.
> >>>
> >>> swm-02 and swm-03 both went out again. Unreachable. Can't ping, can't
> >>> ssh, and the SSH session that I had open was unresponsive.
> >>>
> >>> Any other ideas?
> >>>
> >>
> >> Sorry, no. It looks like two different NICs with different drivers and firmware go down together.
> >> This is a strong indication that the root cause is related to the switch.
> >> Maybe you can get some information about the switch config by
> >> 'lldptool get-tlv -n -i em1'
> >>
> >
> > Another guess:
> > after the optional 'lldptool get-tlv -n -i em1', run
> > 'systemctl stop lldpad'
> > and try another migration.
> >
> >
> >>
> >>
> >>>
> >>> On Fri, Aug 23, 2019 at 1:50 PM Dominik Holler <dholler(a)redhat.com> wrote:
> >>> >
> >>> >
> >>> >
> >>> > On Fri, Aug 23, 2019 at 6:45 PM Curtis E. Combs Jr. <ej.albany(a)gmail.com> wrote:
> >>> >>
> >>> >> Unfortunately, I can't check on the switch. Trust me, I've tried.
> >>> >> These servers are in a Co-Lo and I've put 5 tickets in asking about
> >>> >> the port configuration. They just get ignored - but that's par for the
> >>> >> course for IT here. Only about 2 out of 10 of our tickets get any
> >>> >> response and usually the response doesn't help. Then the system they
> >>> >> use auto-closes the ticket. That was why I was suspecting STP before.
> >>> >>
> >>> >> I can do ethtool. I do have root on these servers, though. Are you
> >>> >> trying to get me to turn off link-speed auto-negotiation? Would you
> >>> >> like me to try that?
> >>> >>
> >>> >
> >>> > It is just a suspicion that the reason is pause frames.
> >>> > Let's start on a NIC which is not used for ovirtmgmt, I guess em1.
> >>> > Does 'ethtool -S em1 | grep pause' show something?
> >>> > Does 'ethtool em1 | grep pause' indicate support for pause?
> >>> > The current config is shown by 'ethtool -a em1'.
> >>> > '-A autoneg' "Specifies whether pause autonegotiation should be enabled." according to ethtool doc.
> >>> > Assuming flow control is enabled by default, I would try to disable it via
> >>> > 'ethtool -A em1 autoneg off rx off tx off'
> >>> > and check if it is applied via
> >>> > 'ethtool -a em1'
> >>> > and check if the behavior under load changes.
> >>> >
> >>> >
> >>> >
> >>> >>
> >>> >> On Fri, Aug 23, 2019 at 12:24 PM Dominik Holler <dholler(a)redhat.com> wrote:
> >>> >> >
> >>> >> >
> >>> >> >
> >>> >> > On Fri, Aug 23, 2019 at 5:49 PM Curtis E. Combs Jr. <ej.albany(a)gmail.com> wrote:
> >>> >> >>
> >>> >> >> Sure! Right now, I only have a 5
Moving ovirt engine disk to another storage volume
by Erick Perez - Quadrian Enterprises
Good morning,
I am running oVirt 4.3.5 on CentOS 7.6 with one virt node and an NFS
storage node. I did the self-hosted engine setup and I plan to add a
second virt host in a few days.
I need to do heavy maintenance on the storage node (VDO and mdadm
things) and would like to know how (or find a link to an article on how) I
can move the oVirt engine disk to another storage domain.
Currently the NFS storage has two volumes (volA, volB) and the physical
host has spare space too. Virtual machines are in VolB and the engine
is in VolA.
I would like to move the engine disk from VolA to VolB or to local storage.
BTW, I am not sure if I should say "move the engine" or "move the
hosted_storage domain".
thanks in advance.
---------------------
Erick Perez
TASK [ovirt.hosted_engine_setup : Get local VM IP] Problem
by Emre Özkan
Hi guys,
I am stuck waiting at this step when I try to install with the hosted-engine --deploy command. Do you have any knowledge about this? My environment is Hyper-V running a nested VM on Azure, with RHEV running on Hyper-V.
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Create local VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Get local VM IP]
NUMA pinning and reserved hugepages (1G): bug in scheduler calculation or decision?
by Ralf Schenk
Hello List,
I ran into problems using NUMA pinning and reserved hugepages.
- My EPYC 7281-based servers (dual socket) have 8 NUMA nodes, each with
32 GB of memory, for a total of 256 GB system memory
- I'm using 192 x 1 GB hugepages reserved on the kernel cmdline:
default_hugepagesz=1G hugepagesz=1G hugepages=192. This reserves 24
hugepages on each NUMA node.
I wanted to pin a MariaDB VM using 32 GB (custom property
hugepages=1048576) to NUMA nodes 0-3 of CPU socket 1. Pinning in the GUI
etc. was no problem.
When trying to start the VM, this can't be done, since oVirt claims that
the host can't fulfill the memory requirements - which is simply not
correct, since there were > 164 hugepages free.
It should have taken 8 hugepages from each of NUMA nodes 0-3 to fulfill
the 32 GB memory requirement.
I also freed the system completely from other VM's but that didn't work
either.
Is it possible that the scheduler only takes into account the "free
memory" (as seen in numactl -H below), *not* the memory reserved by
hugepages, for its decisions? Since the host has only < 8 GB of free mem
per NUMA node, I can understand that the VM was not able to start under
that condition.
The VM is running and using 32 hugepages without pinning, but a warning states
"VM dbserver01b does not fit to a single NUMA node on host
myhost.mydomain.de. This may negatively impact its performance. Consider
using vNUMA and NUMA pinning for this VM."
This is the NUMA hardware layout and hugepage usage now, with other VMs
running:
From cat /proc/meminfo:
HugePages_Total: 192
HugePages_Free: 160
HugePages_Rsvd: 0
HugePages_Surp: 0
I can confirm that, even with other VMs running, there are at least 8
hugepages free for each of NUMA nodes 0-3:
grep ""
/sys/devices/system/node/*/hugepages/hugepages-1048576kB/free_hugepages
/sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages:8
/sys/devices/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages:23
/sys/devices/system/node/node2/hugepages/hugepages-1048576kB/free_hugepages:20
/sys/devices/system/node/node3/hugepages/hugepages-1048576kB/free_hugepages:22
/sys/devices/system/node/node4/hugepages/hugepages-1048576kB/free_hugepages:16
/sys/devices/system/node/node5/hugepages/hugepages-1048576kB/free_hugepages:5
/sys/devices/system/node/node6/hugepages/hugepages-1048576kB/free_hugepages:19
/sys/devices/system/node/node7/hugepages/hugepages-1048576kB/free_hugepages:24
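For what it's worth, the per-node arithmetic can be reproduced with a few lines of Python reading the same sysfs files as the grep above; a rough sketch, assuming the pinned VM needs 32 x 1 GiB pages spread evenly over nodes 0-3:

    # Rough check: can NUMA nodes 0-3 together (and each individually, if the
    # pages are spread evenly) provide the 32 x 1 GiB hugepages the pinned VM needs?
    PAGES_NEEDED = 32
    PINNED_NODES = [0, 1, 2, 3]
    SYSFS = ('/sys/devices/system/node/node{}/hugepages/'
             'hugepages-1048576kB/free_hugepages')

    free = {n: int(open(SYSFS.format(n)).read()) for n in PINNED_NODES}
    per_node = PAGES_NEEDED // len(PINNED_NODES)

    print('free 1G pages per pinned node:', free)
    print('total free on pinned nodes:', sum(free.values()),
          '(needed: %d)' % PAGES_NEEDED)
    print('every pinned node can supply %d pages:' % per_node,
          all(v >= per_node for v in free.values()))

With the numbers listed above (8, 23, 20, 22) both checks pass, which is why the refusal to start looks like the scheduler counting only the free memory reported by numactl rather than the free hugepages.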
numactl -H:
available: 8 nodes (0-7)
node 0 cpus: 0 1 2 3 32 33 34 35
node 0 size: 32673 MB
node 0 free: 3779 MB
node 1 cpus: 4 5 6 7 36 37 38 39
node 1 size: 32767 MB
node 1 free: 6162 MB
node 2 cpus: 8 9 10 11 40 41 42 43
node 2 size: 32767 MB
node 2 free: 6698 MB
node 3 cpus: 12 13 14 15 44 45 46 47
node 3 size: 32767 MB
node 3 free: 1589 MB
node 4 cpus: 16 17 18 19 48 49 50 51
node 4 size: 32767 MB
node 4 free: 2630 MB
node 5 cpus: 20 21 22 23 52 53 54 55
node 5 size: 32767 MB
node 5 free: 2487 MB
node 6 cpus: 24 25 26 27 56 57 58 59
node 6 size: 32767 MB
node 6 free: 3279 MB
node 7 cpus: 28 29 30 31 60 61 62 63
node 7 size: 32767 MB
node 7 free: 5513 MB
node distances:
node 0 1 2 3 4 5 6 7
0: 10 16 16 16 32 32 32 32
1: 16 10 16 16 32 32 32 32
2: 16 16 10 16 32 32 32 32
3: 16 16 16 10 32 32 32 32
4: 32 32 32 32 10 16 16 16
5: 32 32 32 32 16 10 16 16
6: 32 32 32 32 16 16 10 16
7: 32 32 32 32 16 16 16 10
--
*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *rs(a)databay.de*
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de*
Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen
------------------------------------------------------------------------
Need to enable STP on ovirt bridges
by ej.albany@gmail.com
Hello. I have been trying to figure out an issue for a very long time.
The issue is that the Ethernet and 10 Gb FC links on my
cluster get disabled any time a migration occurs.
I believe this is because I need to have STP turned on in order to
participate with the switch. However, there does not seem to be any
way to tell oVirt to stop turning it off! Very frustrating.
After adding a cronjob that enables STP on all bridges every minute,
the migration issue disappears....
Is there any way at all to do without this cronjob and keep STP
ON, without having to resort to such a silly solution?
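The cron-based workaround mentioned above can be written as a short script; a sketch of the same effect (not an oVirt-supported mechanism), assuming the standard Linux bridge sysfs layout and root privileges:

    import os

    # Enable STP on every Linux bridge on this host; writing 1 to
    # bridge/stp_state is equivalent to 'brctl stp <bridge> on'.
    for dev in os.listdir('/sys/class/net'):
        stp_path = os.path.join('/sys/class/net', dev, 'bridge', 'stp_state')
        if os.path.exists(stp_path):
            with open(stp_path, 'w') as f:
                f.write('1')
            print('STP enabled on', dev)

Run from cron every minute, this reproduces the workaround described above until a cleaner way to keep STP on is found.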
Here are some details about my systems, if you need it.
selinux is disabled.
[root@swm-02 ~]# rpm -qa | grep ovirt
ovirt-imageio-common-1.5.1-0.el7.x86_64
ovirt-release43-4.3.5.2-1.el7.noarch
ovirt-imageio-daemon-1.5.1-0.el7.noarch
ovirt-vmconsole-host-1.0.7-2.el7.noarch
ovirt-hosted-engine-setup-2.3.11-1.el7.noarch
ovirt-ansible-hosted-engine-setup-1.0.26-1.el7.noarch
python2-ovirt-host-deploy-1.8.0-1.el7.noarch
ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
python2-ovirt-setup-lib-1.2.0-1.el7.noarch
cockpit-machines-ovirt-195.1-1.el7.noarch
ovirt-hosted-engine-ha-2.3.3-1.el7.noarch
ovirt-vmconsole-1.0.7-2.el7.noarch
cockpit-ovirt-dashboard-0.13.5-1.el7.noarch
ovirt-provider-ovn-driver-1.2.22-1.el7.noarch
ovirt-host-deploy-common-1.8.0-1.el7.noarch
ovirt-host-4.3.4-1.el7.x86_64
python-ovirt-engine-sdk4-4.3.2-2.el7.x86_64
ovirt-host-dependencies-4.3.4-1.el7.x86_64
ovirt-ansible-repositories-1.1.5-1.el7.noarch
[root@swm-02 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@swm-02 ~]# uname -r
3.10.0-957.27.2.el7.x86_64
You have new mail in /var/spool/mail/root
[root@swm-02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
test state UP group default qlen 1000
link/ether d4:ae:52:8d:50:48 brd ff:ff:ff:ff:ff:ff
3: em2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether d4:ae:52:8d:50:49 brd ff:ff:ff:ff:ff:ff
4: p1p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
ovirtmgmt state UP group default qlen 1000
link/ether 90:e2:ba:1e:14:80 brd ff:ff:ff:ff:ff:ff
5: p1p2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether 90:e2:ba:1e:14:81 brd ff:ff:ff:ff:ff:ff
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
group default qlen 1000
link/ether a2:b8:d6:e8:b3:d8 brd ff:ff:ff:ff:ff:ff
7: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether 96:a0:c1:4a:45:4b brd ff:ff:ff:ff:ff:ff
25: test: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UP group default qlen 1000
link/ether d4:ae:52:8d:50:48 brd ff:ff:ff:ff:ff:ff
inet 10.15.11.21/24 brd 10.15.11.255 scope global test
valid_lft forever preferred_lft forever
26: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state UP group default qlen 1000
link/ether 90:e2:ba:1e:14:80 brd ff:ff:ff:ff:ff:ff
inet 10.15.28.31/24 brd 10.15.28.255 scope global ovirtmgmt
valid_lft forever preferred_lft forever
27: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
group default qlen 1000
link/ether 62:e5:e5:07:99:eb brd ff:ff:ff:ff:ff:ff
29: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master
ovirtmgmt state UNKNOWN group default qlen 1000
link/ether fe:6f:9c:95:00:02 brd ff:ff:ff:ff:ff:ff
[root@swm-02 ~]# free -m
total used free shared buff/cache available
Mem: 64413 1873 61804 9 735 62062
Swap: 16383 0 16383
[root@swm-02 ~]# free -h
total used free shared buff/cache available
Mem: 62G 1.8G 60G 9.5M 735M 60G
Swap: 15G 0B 15G
[root@swm-02 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 44
Model name: Intel(R) Xeon(R) CPU X5672 @ 3.20GHz
Stepping: 2
CPU MHz: 3192.064
BogoMIPS: 6384.12
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep
mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht
tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts
rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq
dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca
sse4_1 sse4_2 popcnt aes lahf_lm ssbd ibrs ibpb stibp tpr_shadow vnmi
flexpriority ept vpid dtherm ida arat spec_ctrl intel_stibp flush_l1d
[root@swm-02 ~]#
Migration from bare metal engine to hosted doesn't seem to work at all in 4.3.2
by Clinton Goudie-Nice
Hi all,
I'm trying to migrate from an ovirt bare metal instance to the hosted
appliance.
The instructions here:
https://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_...
are
at best murky, and at worst quite wrong, and I'm looking for some help.
The instructions say "You must answer No to the following question so that
you can restore the BareMetal-Engine backup file on HostedEngine-VM before
running engine-setup. Automatically execute engine-setup on the engine
appliance on first boot (Yes, No)[Yes]? No"
Makes complete sense, except this question is not asked at all.
Some research on the internet says we should be able to do this via
append-answers:
[environment:default]
OVEHOSTED_VM/cloudinitExecuteEngineSetup=bool:False
Except this doesn't work either. When the appliance deploys, the engine is
set up and already running.
How do I take a backup file, and a backup log, and deploy them into a
hosted engine?
Clint
Creating hosts via the REST API using SSH Public Key authentication
by schill-julian@gmx.de
In the UI one can create hosts using two authentication methods: 'Password' and 'SSH Public Key'.
I have only found the Password authentication in the API Docs (/ovirt-engine/apidoc/#/services/hosts/methods/add).
My question is: how can I create hosts using SSH Public Key authentication via the REST API?
I would appreciate an example POST request!
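As far as I can tell, the REST API exposes this through the host's ssh element: set authentication_method to publickey instead of sending root_password, after putting the engine's SSH public key into the host root user's authorized_keys (the same prerequisite as the UI option). A minimal, untested sketch using the Python SDK, with placeholder names; the request it issues is a POST to /ovirt-engine/api/hosts:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Placeholder engine URL and credentials.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )

    hosts_service = connection.system_service().hosts_service()

    # Add the host using public-key SSH authentication instead of a root password.
    hosts_service.add(
        types.Host(
            name='newhost',
            address='newhost.example.com',
            cluster=types.Cluster(name='Default'),
            ssh=types.Ssh(
                authentication_method=types.SshAuthenticationMethod.PUBLICKEY,
            ),
        ),
    )

    connection.close()

In raw XML the equivalent POST body would carry <ssh><authentication_method>publickey</authentication_method></ssh> inside the <host> element, if I read the API model correctly.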
nodectl on plain CentOS hypervisors
by Gianluca Cecchi
Does it make sense to install the nodectl utility on plain CentOS 7.x nodes?
Or is there any other alternative for plain OS nodes vs ovirt-node-ng ones?
On my updated CentOS 7.6 oVirt node I don't have the command; I think it is
provided by the package ovirt-node-ng-nodectl, which is one of the available
packages if I run "yum search" on the system.
Thanks
Gianluca
oVirt 3.6: Node went into Error state while migrations happening
by Christopher Cox
On the node in question, the metadata (state) isn't coming across.
It shows VMs being in an unknown state (some are up and some are down),
some show as migrating, and there are 9 forever-hung migration tasks. We
tried to bring up some of the down VMs that had a state of Down, but
that ended up putting them in the state "Wait for Launch", though those
VMs are actually started.
Right now, my plan is to attempt a restart of vdsmd on the node in
question. I'm just trying to get the node to a working state again. There
are a total of 9 nodes in our cluster, but we can't manage any VMs on the
affected node right now.
Is there a way in 3.6 to cancel the hung tasks? I'm worried that if
vdsmd is restarted on the node, the tasks might be "attempted"... I
really need them to be forgotten if possible.
Ideally I want all "Unknown" VMs to return to either an "up" or "down" state
(depending on whether the VM is up or down), the "Wait for Launch" ones to
go to "up", and all the "Migrating" ones to go to "up" or "down" (I think
only one is actually down).
I'm concerned that any attempt to manually manipulate the state in the oVirt
mgmt head DB will be moot, because the node will be queried for state and
that state will override anything I attempt to do.
Thoughts??
When I create a new domain in storage, it reports: VDSM command ActivateStorageDomainVDS failed: Unknown pool id, pool not connected: (u'b87012a1-8f7a-4af5-8884-e0fb8002e842',)
by wangyu13476969128@126.com
The version of ovirt-engine is 4.2.8
The version of ovirt-node is 4.2.8
When I create a new domain in storage (the storage type is NFS), it reports:
VDSM command ActivateStorageDomainVDS failed: Unknown pool id, pool not connected: (u'b87012a1-8f7a-4af5-8884-e0fb8002e842',)
The error in vdsm.log is:
2019-08-23 11:02:14,740+0800 INFO (jsonrpc/4) [vdsm.api] START connectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'c6893a09-ab28-4328-b186-d2f88f2320d4', u'connection': u'172.16.10.74:/ovirt-data', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=::ffff:172.16.90.10,52962, flow_id=5d720a32, task_id=2af924ff-37a6-46a1-b79f-4251d21d5ff9 (api:46)
2019-08-23 11:02:14,743+0800 INFO (jsonrpc/4) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'c6893a09-ab28-4328-b186-d2f88f2320d4'}]} from=::ffff:172.16.90.10,52962, flow_id=5d720a32, task_id=2af924ff-37a6-46a1-b79f-4251d21d5ff9 (api:52)
2019-08-23 11:02:14,743+0800 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.00 seconds (__init__:573)
2019-08-23 11:02:14,751+0800 INFO (jsonrpc/6) [vdsm.api] START activateStorageDomain(sdUUID=u'3bcdc32c-040e-4a4c-90fb-de950f54f1b4', spUUID=u'b87012a1-8f7a-4af5-8884-e0fb8002e842', options=None) from=::ffff:172.16.90.10,53080, flow_id=709ee722, task_id=9b8f32af-5fdd-4ffa-9520-c474af03db70 (api:46)
2019-08-23 11:02:14,752+0800 INFO (jsonrpc/6) [vdsm.api] FINISH activateStorageDomain error=Unknown pool id, pool not connected: (u'b87012a1-8f7a-4af5-8884-e0fb8002e842',) from=::ffff:172.16.90.10,53080, flow_id=709ee722, task_id=9b8f32af-5fdd-4ffa-9520-c474af03db70 (api:50)
2019-08-23 11:02:14,752+0800 ERROR (jsonrpc/6) [storage.TaskManager.Task] (Task='9b8f32af-5fdd-4ffa-9520-c474af03db70') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
return fn(*args, **kargs)
File "<string>", line 2, in activateStorageDomain
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1262, in activateStorageDomain
pool = self.getPool(spUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 350, in getPool
raise se.StoragePoolUnknown(spUUID)
StoragePoolUnknown: Unknown pool id, pool not connected: (u'b87012a1-8f7a-4af5-8884-e0fb8002e842',)
2019-08-23 11:02:14,752+0800 INFO (jsonrpc/6) [storage.TaskManager.Task] (Task='9b8f32af-5fdd-4ffa-9520-c474af03db70') aborting: Task is aborted: "Unknown pool id, pool not connected: (u'b87012a1-8f7a-4af5-8884-e0fb8002e842',)" - code 309 (task:1181)
2019-08-23 11:02:14,752+0800 ERROR (jsonrpc/6) [storage.Dispatcher] FINISH activateStorageDomain error=Unknown pool id, pool not connected: (u'b87012a1-8f7a-4af5-8884-e0fb8002e842',) (dispatcher:82)
How can I solve this problem?