Ongoing "VM is not responding" and "ETL sampling errors".
by J Brian Ismay
Hello the list,
I have been slowly bringing up a 9-node cluster for the last few months.
All nodes are identical, dual 2-port 10G nics, lots of memory and CPU
Storage is a Netapp Filer accessed via NFS on a dedicated 10Gb
dual-switch environment.
Generally everything is working fine, but ever since our last rebuild of
the cluster in preperation for a move into production status we have
been getting repeated errors showing in the HostedEngine console:
VM foo is not responding.
VM bar is not responding.
VM baz is not responding.
These errors happen on a fairly regular basis, and generally are
multiple VMs all being hosted by different nodes. When errors occur I
also lose external connectivity to the VM in question, both via its
service IP address and via the ovirt console. The actual outages appear
to generally last 15-20 seconds and then things recover and go back to
normal.
We are also getting much more frequent errors:
ETL service sampling has encountered an error. Please consult the
service log for more details.
I have attached snippets from the Engine engine.log from this morning.
If any other logs are needed for to help diagnosis I can provide them.
--
Brian Ismay
SR. Systems Administrator
jismay(a)cenic.org
----
engine.log: NOTE, the system clock is in UTC, local time is PDT, so this
occurred at 07:48AM local time.
2017-08-09 14:48:37,237 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler6) [2cea1ef7] VM
'69880324-2d2e-4a70-8071-4ae0f0ae342e'(vm1) moved from 'Up' -->
'NotResponding'
2017-08-09 14:48:37,277 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler6) [2cea1ef7] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM vm1 is not responding.
2017-08-09 14:48:37,277 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler6) [2cea1ef7] VM
'4471e3ee-9f69-4903-b68f-c1293aea047f'(vm2) moved from 'Up' -->
'NotResponding'
2017-08-09 14:48:37,282 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler6) [2cea1ef7] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM vm2 is not responding.
2017-08-09 14:48:38,326 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler5) [cf129f7] VM
'35fd4afa-12a1-4326-9db5-a86939a01fa8'(vm3) moved from 'Up' -->
'NotResponding'
2017-08-09 14:48:38,360 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler5) [cf129f7] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM vm3 is not responding.
2017-08-09 14:48:38,360 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler5) [cf129f7] VM
'd83e9633-3597-4046-95ee-2a166682b85e'(vm4) moved from 'Up' -->
'NotResponding'
2017-08-09 14:48:38,365 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler5) [cf129f7] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM vm4 is not responding.
2017-08-09 14:48:49,075 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler8) [3b1149ff] VM
'd41984d0-4418-4991-9af0-25593abac976'(vm5) moved from 'Up' -->
'NotResponding'
2017-08-09 14:48:49,130 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler8) [3b1149ff] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM vm5 is not responding.
2017-08-09 14:48:49,131 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler8) [3b1149ff] VM
'ed87b37d-5b79-4105-ba89-29a59361eb4e'(vm6) moved from 'Up' -->
'NotResponding'
2017-08-09 14:48:49,136 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler8) [3b1149ff] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM vm6 is not responding.
2017-08-09 14:48:52,221 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler7) [2973c87] VM
'506980f4-6764-4cc6-bb20-c1956d8ed201'(vm7) moved from 'Up' -->
'NotResponding'
2017-08-09 14:48:52,226 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler7) [2973c87] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM vm7 is not responding.
2017-08-09 14:48:52,299 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler6) [2cea1ef7] VM
'69880324-2d2e-4a70-8071-4ae0f0ae342e'(vm1) moved from 'NotResponding'
--> 'Up'
2017-08-09 14:48:52,300 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler6) [2cea1ef7] VM
'4471e3ee-9f69-4903-b68f-c1293aea047f'(vm2) moved from 'NotResponding'
--> 'Up'
2017-08-09 14:48:53,373 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler5) [cf129f7] VM
'638b2aab-e4f7-43e0-a2a8-95c75813e669'(vm8) moved from 'Up' -->
'NotResponding'
2017-08-09 14:48:53,379 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler5) [cf129f7] Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: VM vm8 is not responding.
2017-08-09 14:48:54,380 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler6) [2cea1ef7] VM
'35fd4afa-12a1-4326-9db5-a86939a01fa8'(vm3) moved from 'NotResponding'
--> 'Up'
2017-08-09 14:48:54,381 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler6) [2cea1ef7] VM
'd83e9633-3597-4046-95ee-2a166682b85e'(vm4) moved from 'NotResponding'
--> 'Up'
2017-08-09 14:49:04,197 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler7) [2973c87] VM
'd41984d0-4418-4991-9af0-25593abac976'(vm5) moved from 'NotResponding'
--> 'Up'
2017-08-09 14:49:04,198 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler7) [2973c87] VM
'ed87b37d-5b79-4105-ba89-29a59361eb4e'(vm6) moved from 'NotResponding'
--> 'Up'
2017-08-09 14:49:07,293 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler8) [3b1149ff] VM
'506980f4-6764-4cc6-bb20-c1956d8ed201'(vm7) moved from 'NotResponding'
--> 'Up'
2017-08-09 14:49:09,388 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(DefaultQuartzScheduler7) [2973c87] VM
'638b2aab-e4f7-43e0-a2a8-95c75813e669'(vm8) moved from 'NotResponding'
--> 'Up'
7 years, 3 months
How to shutdown an oVirt cluster with Gluster and hosted engine
by Moacir Ferreira
--_000_DB6P190MB0280B3878C85268D0254D229C8B50DB6P190MB0280EURP_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
I have installed a oVirt cluster in a KVM virtualized test environment. Now=
, how do I properly shutdown the oVirt cluster, with Gluster and the hosted=
engine?
I.e.: I want to install a cluster of 3 servers and then send it to a remote=
office. How do I do it properly? I noticed that glusterd is not enabled to=
start automatically. And how do I deal with the hosted engine?
Thanks,
Moacir
--_000_DB6P190MB0280B3878C85268D0254D229C8B50DB6P190MB0280EURP_
Content-Type: text/html; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
<html>
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Diso-8859-=
1">
<style type=3D"text/css" style=3D"display:none;"><!-- P {margin-top:0;margi=
n-bottom:0;} --></style>
</head>
<body dir=3D"ltr">
<div id=3D"divtagdefaultwrapper" style=3D"font-size:12pt;color:#000000;font=
-family:Calibri,Helvetica,sans-serif;" dir=3D"ltr">
<p>I have installed a oVirt cluster in a KVM virtualized test environment. =
Now, how do I <span>properly shutdown the oVirt cluster, with Gluster =
and the hosted engine</span>?</p>
<p>I.e.: I want to install a cluster of 3 servers and then send it to a rem=
ote office. How do I do it properly? I noticed that glusterd is not enabled=
to start automatically. And how do I deal with the hosted engine?</p>
<p><br>
</p>
<p>Thanks,<br>
</p>
<p>Moacir<br>
</p>
</div>
</body>
</html>
--_000_DB6P190MB0280B3878C85268D0254D229C8B50DB6P190MB0280EURP_--
7 years, 3 months
Issues getting agent working on Ubuntu 17.04
by Wesley Stewart
I am having trouble getting the ovirt agent working on Ubuntu 17.04
(perhaps it just isnt there yet)
Currently I have two test machines a 16.04 and a 17.04 ubuntu servers.
*On the 17.04 server*:
Currently isntalled:
ovirt-guest-agent (1.0.12.2.dfsg-2), and service --status-all reveals a few
virtualization agents:
[ - ] open-vm-tools
[ - ] ovirt-guest-agent
[ + ] qemu-guest-agent
I can't seem to start ovirt-guest-agent
sudo service ovirt-guest-agent start/restart does nothing
Running *sudo systemctl status ovirt-guest-agent.service*
Aug 08 15:31:50 ubuntu-template systemd[1]: Starting oVirt Guest Agent...
Aug 08 15:31:50 ubuntu-template systemd[1]: Started oVirt Guest Agent.
Aug 08 15:31:51 ubuntu-template python[1219]: *** stack smashing detected
***: /usr/bin/python terminated
Aug 08 15:31:51 ubuntu-template systemd[1]: ovirt-guest-agent.service: Main
process exited, code=killed, status=6/ABRT
Aug 08 15:31:51 ubuntu-template systemd[1]: ovirt-guest-agent.service: Unit
entered failed state.
Aug 08 15:31:51 ubuntu-template systemd[1]: ovirt-guest-agent.service:
Failed with result 'signal'.
*sudo systemctl enable ovirt-guest-agent.service*
Also does not seem to do antyhing.
Doing more research, I found:
http://lists.ovirt.org/pipermail/users/2017-July/083071.html
So perhaps the ovirt-guest-agent is broken for Ubuntu 17.04?
*On the 16.04 Server I have:*
Took some fiddling, but I eventually got it working
7 years, 3 months
Good practices
by Moacir Ferreira
--_000_DB6P190MB0280A69BAE2A377274B72375C8B40DB6P190MB0280EURP_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
I am willing to assemble a oVirt "pod", made of 3 servers, each with 2 CPU =
sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The idea is to use Gluste=
rFS to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dua=
l 10Gb NIC. So my intention is to create a loop like a server triangle usin=
g the 40Gb NICs for virtualization files (VMs .qcow2) access and to move VM=
s around the pod (east /west traffic) while using the 10Gb interfaces for g=
iving services to the outside world (north/south traffic).
This said, my first question is: How should I deploy GlusterFS in such oVir=
t scenario? My questions are:
1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and then=
create a GlusterFS using them?
2 - Instead, should I create a JBOD array made of all server's disks?
3 - What is the best Gluster configuration to provide for HA while not cons=
uming too much disk space?
4 - Does a oVirt hypervisor pod like I am planning to build, and the virtua=
lization environment, benefits from tiering when using a SSD disk? And yes,=
will Gluster do it by default or I have to configure it to do so?
At the bottom line, what is the good practice for using GlusterFS in small =
pods for enterprises?
You opinion/feedback will be really appreciated!
Moacir
--_000_DB6P190MB0280A69BAE2A377274B72375C8B40DB6P190MB0280EURP_
Content-Type: text/html; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
<html>
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Diso-8859-=
1">
<style type=3D"text/css" style=3D"display:none;"><!-- P {margin-top:0;margi=
n-bottom:0;} --></style>
</head>
<body dir=3D"ltr">
<div id=3D"divtagdefaultwrapper" style=3D"font-size:12pt;color:#000000;font=
-family:Calibri,Helvetica,sans-serif;" dir=3D"ltr">
<p><span>I am willing to assemble a oVirt "pod", made of 3 server=
s, each with 2 CPU sockets of 12 cores, 256GB RAM, 7 HDD 10K, 1 SSD. The id=
ea is to use GlusterFS to provide HA for the VMs. The 3 servers have a dual=
40Gb NIC and a dual 10Gb NIC. So my intention
is to create a loop like a server triangle using the 40Gb NICs for virtual=
ization files (VMs .qcow2) access and to move VMs around the pod (east /wes=
t traffic) while using the 10Gb interfaces for giving services to the outsi=
de world (north/south traffic).</span></p>
<p><br>
<span></span></p>
<p>This said, my first question is: How should I deploy GlusterFS in such o=
Virt scenario? My questions are:</p>
<p><br>
</p>
<p>1 - Should I create 3 RAID (i.e.: RAID 5), one on each oVirt node, and t=
hen create a GlusterFS using them?</p>
<p>2 - Instead, should I create a JBOD array made of all server's disks?</p=
>
<p>3 - What is the best Gluster configuration to provide for HA while not c=
onsuming too much disk space?<br>
</p>
<p>4 - Does a oVirt hypervisor pod like I am planning to build, and the vir=
tualization environment, benefits from tiering when using a SSD disk? And y=
es, will Gluster do it by default or I have to configure it to do so?</p>
<p><br>
</p>
<p>At the bottom line, what is the good practice for using GlusterFS in sma=
ll pods for enterprises?<br>
</p>
<p><br>
</p>
<p>You opinion/feedback will be really appreciated!</p>
<p>Moacir<br>
</p>
</div>
</body>
</html>
--_000_DB6P190MB0280A69BAE2A377274B72375C8B40DB6P190MB0280EURP_--
7 years, 3 months
Re: [ovirt-users] Users Digest, Vol 71, Issue 37
by Moacir Ferreira
--_000_DB6P190MB02803D87B263C1D3C3672993C8B50DB6P190MB0280EURP_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Fabrice,
If you choose to have jumbo frames all over, then when the traffic goes out=
side of your "jumbo frames" enabled network it will be necessary to be frag=
mented back again to the destination MTU. Most of the datacenters will prov=
ide services to the outside world where the MTU is 1500 bytes. In this case=
, you will slow down your performance because your router will be doing the=
fragmentation. So I would always use jumbo frames in the datacenter for ea=
st/west traffic and standard (1500 bytes) for north/south traffic.
Moacir
----------------------------------------------------------------------
Message: 1
Date: Mon, 7 Aug 2017 21:50:36 +0200
From: Fabrice Bacchella <fabrice.bacchella(a)orange.fr>
To: FERNANDO FREDIANI <fernando.frediani(a)upx.com>
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] Good practices
Message-ID: <4365E3F7-4C77-4FF5-8401-1CDA2F0029EE(a)orange.fr>
Content-Type: text/plain; charset=3D"windows-1252"
>> Moacir: Yes! This is another reason to have separate networks for north/=
south and east/west. In that way I can use the standard MTU on the 10Gb NIC=
s and jumbo frames on the file/move 40Gb NICs.
Why not Jumbo frame every where ?
7 years, 3 months
Move VM from FC storage cluster to local-storage in another cluster
by Neil
Hi guys,
I need to move a VM from one cluster (cluster1) using FC storage with 4
hosts, to a separate cluster (cluster 2) with only 1 NEW host that has
local storage only.
What would be the best way to do this?
All I aim to achieve is to have a single NEW host that has local storage
that I can run a single VM on, which is manageable via oVirt, so even if it
means adding the NEW host as a separate DC, how can I copy or move (not
live) the VM to this new host?
I've tried exporting the VM to an export domain on cluster1, but I can't
seem to figure out how to "attach" the export domain to cluster2 with the
NEW host.
If I go to "Import VM" on cluster2, I get a message saying "Not available
when no export domain is active" if I try and attach the same export domain
that was used to export the VM in cluster1, it says I can't because it's
already assigned to the cluster, so I'm really confused as to how to go
about doing this.
Any help and guidance is appreciated.
Thanks.
Regards.
Neil Wilson.
7 years, 3 months
Re: [ovirt-users] Good practices
by Moacir Ferreira
--_000_DB6P190MB0280AD888032AB4A6232D576C88A0DB6P190MB0280EURP_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Ok, the 40Gb NIC that I got were for free. But anyway, if you were working =
with 6 HDD + 1 SSD per server, then you get 21 disks on your cluster. As da=
ta in a JBOD will be built all over the network, then it can be really inte=
nsive especially depending on the number of replicas you choose for your ne=
eds. Also, when moving a VM alive you must transfer the memory contents of =
a VM to another node (just think about moving a VM with 32GB RAM). All toge=
ther, it can be a quite large chunk of data moving over the network all the=
time. While 40Gb NIC is not a "must", I think it is more affordable as it =
cost much less then a good disk controller.
But my confusion is that, as said by other fellows, the best "performance m=
odel" is when you use a hardware RAIDed brick (i.e.: 5 or 6) to assemble yo=
ur GlusterFS. In this case, as I would have to buy a good controller but ha=
ve less network traffic, to lower the cost I would then use a separate netw=
ork made of 10Gb NICs plus the controller.
Moacir
>
> > Le 8 ao?t 2017 ? 04:08, FERNANDO FREDIANI <fernando.frediani(a)upx.com> a
> ?crit :
>
> > Even if you have a Hardware RAID Controller with Writeback cache you
> will have a significant performance penalty and may not fully use all the
> resources you mentioned you have.
> >
>
> Nope again,from my experience with HP Smart Array and write back cache,
> write, that goes in the cache, are even faster that read that must goes t=
o
> the disks. of course if the write are too fast and to big, they will over
> overflow the cache. But on todays controller they are multi-gigabyte cach=
e,
> you must write a lot to fill them. And if you can afford 40Gb card, you c=
an
> afford decent controller.
>
The last sentence raises an excellent point: balance your resources. Don't
spend a fortune on one component while another will end up being your
bottleneck.
Storage is usually the slowest link in the chain. I personally believe that
spending the money on NVMe drives makes more sense than 40Gb (except [1],
which is suspiciously cheap!)
Y.
[1] http://a.co/4hsCTqG
--_000_DB6P190MB0280AD888032AB4A6232D576C88A0DB6P190MB0280EURP_
Content-Type: text/html; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
<html>
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Diso-8859-=
1">
<style type=3D"text/css" style=3D"display:none;"><!-- P {margin-top:0;margi=
n-bottom:0;} --></style>
</head>
<body dir=3D"ltr">
<div id=3D"divtagdefaultwrapper" style=3D"font-size:12pt;color:#000000;font=
-family:Calibri,Helvetica,sans-serif;" dir=3D"ltr">
<p>Ok, the 40Gb NIC that I got were for free. But anyway, if you were worki=
ng with 6 HDD + 1 SSD per server, then you get 21 disks on your cluster=
. As data in a JBOD will be built all over the network, then it can be real=
ly intensive especially depending on
the number of replicas you choose for your needs. Also, when moving a VM a=
live you must transfer the memory contents of a VM to another node (just th=
ink about moving a VM with 32GB RAM). All together, it can be a quite large=
chunk of data moving over the network
all the time. While 40Gb NIC is not a "must", I think it is more=
affordable as it cost much less then a good disk controller.</p>
<p><br>
</p>
<p>But my confusion is that, as said by other fellows, the best "perfo=
rmance model" is when you use a hardware RAIDed brick (i.e.: 5 or 6) t=
o assemble your GlusterFS. In this case, as I would have to buy a good cont=
roller but have less network traffic, to lower
the cost I would then use a separate network made of 10Gb NICs plus the co=
ntroller.</p>
<p><br>
</p>
<p>Moacir<br>
</p>
<div style=3D"color: rgb(49, 55, 57);"><font size=3D"2"><span style=3D"font=
-size:10pt;">
<div class=3D"PlainText"><br>
<br>
<br>
><br>
> > Le 8 ao?t 2017 ? 04:08, FERNANDO FREDIANI <fernando.frediani@u=
px.com> a<br>
> ?crit :<br>
><br>
> > Even if you have a Hardware RAID Controller with Writeback cache =
you<br>
> will have a significant performance penalty and may not fully use all =
the<br>
> resources you mentioned you have.<br>
> ><br>
><br>
> Nope again,from my experience with HP Smart Array and write back cache=
,<br>
> write, that goes in the cache, are even faster that read that must goe=
s to<br>
> the disks. of course if the write are too fast and to big, they will o=
ver<br>
> overflow the cache. But on todays controller they are multi-gigabyte c=
ache,<br>
> you must write a lot to fill them. And if you can afford 40Gb card, yo=
u can<br>
> afford decent controller.<br>
><br>
<br>
The last sentence raises an excellent point: balance your resources. Don't<=
br>
spend a fortune on one component while another will end up being your<br>
bottleneck.<br>
Storage is usually the slowest link in the chain. I personally believe that=
<br>
spending the money on NVMe drives makes more sense than 40Gb (except [1],<b=
r>
which is suspiciously cheap!)<br>
<br>
Y.<br>
[1] <a href=3D"http://a.co/4hsCTqG" id=3D"LPlnk802834" previewremoved=3D"tr=
ue">http://a.co/4hsCTqG</a><br>
<br>
</div>
</span></font></div>
</div>
</body>
</html>
--_000_DB6P190MB0280AD888032AB4A6232D576C88A0DB6P190MB0280EURP_--
7 years, 3 months
Domain name in use? After failed domain setup?
by Schorschi .
Domain name in use? After failed domain setup?
I attempted to create a new domain, but I did not realize the master
domain was 100% initialized. The new domain creation failed. But it
appears the new domain 'name' was used. Now I cannot create the new
domain as expected. I get UI error that states, "" which can only be
true if the domain name is in the database, because it is definitely no
visible in the UI. This is quite frustrated, because it appears the new
domain 'creation' logic is broken, if the new domain fails to be
created, the database should not have junk domain name, right? I call
this an ugly bug. That said, I really need to remove this junk domain
name so I can use the correct name as expected.
Thanks.
7 years, 3 months