Upgrade 4.1 to 4.2 lost gfapi Disk Access for VMs
by Ralf Schenk
Hello,
after upgrading my whole cluster of 8 hosts to oVirt 4.2.5 and setting the
compatibility level of the cluster and data center to 4.2, my existing virtual
machines have started using FUSE-mounted disk images on my Gluster volumes.
That is the bad, slow behaviour I thought I had finally gotten rid of with 4.1.x!
Disk definition from "virsh -r dumpxml <VM>" for an old, still-running VM (not
restarted yet):
<disk type='network' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='threads'/>
<source protocol='gluster'
name='gv0/5d99af76-33b5-47d8-99da-1f32413c7bb0/images/d5f3657f-ac7a-4d89-8a83-e7c47ee0ef05/f99c6cb4-1791-4a55-a0b9-2ff0ec1a4dd7'>
<host name='glusterfs.mylocal.domain' port='24007'/>
</source>
<backingStore/>
<target dev='sda' bus='scsi'/>
<serial>d5f3657f-ac7a-4d89-8a83-e7c47ee0ef05</serial>
<boot order='1'/>
<alias name='scsi0-0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
Disk definition from "virsh -r dumpxml <VM>" for a newly started VM:
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='threads'/>
<source
file='/rhev/data-center/mnt/glusterSD/glusterfs.mylocal.domain:_gv0/5d99af76-33b5-47d8-99da-1f32413c7bb0/images/1f0db0ef-a6af-4e3e-90e6-f681d071496b/1b3c3a
<backingStore/>
<target dev='sda' bus='scsi'/>
<serial>1f0db0ef-a6af-4e3e-90e6-f681d071496b</serial>
<boot order='1'/>
<alias name='ua-1f0db0ef-a6af-4e3e-90e6-f681d071496b'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
How do I get my gfapi-backed Gluster disks back?
--
Ralf Schenk
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail rs@databay.de
Databay AG
Jens-Otto-Krag-Straße 11
D-52146 Würselen
www.databay.de
Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen
------------------------------------------------------------------------
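One possible approach (untested here, and assuming the LibgfApiSupported
engine-config key is still honoured per cluster level in 4.2) is to re-enable
libgfapi on the engine and then cold-restart the VMs so their disks are
reopened over the gluster protocol:

# On the engine host; verify the key exists first, the --cver syntax is an
# assumption carried over from the 4.1 behaviour:
engine-config -g LibgfApiSupported
engine-config -s LibgfApiSupported=true --cver=4.2
systemctl restart ovirt-engine
# Then shut down and start each VM; a reboot from inside the guest is not
# enough to regenerate the disk XML.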
info about changing engine ip mode from dhcp to static
by Gianluca Cecchi
Hello,
a colleague of mine installed oVirt Engine 4.2.5 on a test CentOS 7.5
system and, during the initial configuration, set its IP via DHCP
(using a static IP reservation configured on the DHCP server side).
Now he would like to keep the same IP/hostname for the engine, but with
a static configuration on the server itself.
Any hints on how to achieve this?
Thanks in advance,
Gianluca
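A minimal sketch of what the static configuration could look like on the
engine host, assuming the NIC is eth0 and that the address, gateway and DNS
values below are placeholders for the ones currently handed out by DHCP:

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.0.2.10
PREFIX=24
GATEWAY=192.0.2.1
DNS1=192.0.2.53

# afterwards, restart networking and check that the engine FQDN still
# resolves to the same address:
systemctl restart network
ping -c1 $(hostname -f)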
Network issues with oVirt 4.2 and cloud-init
by Berger, Sandy
We're using cloud-init to customize VMs built from a template. We're using
static IPv4 settings, so we're specifying an IP address, subnet mask, and
gateway. There is apparently a bug in the current version of cloud-init
shipping as part of CentOS 7.4
(https://bugzilla.redhat.com/show_bug.cgi?id=1492726) that fails to set the
gateway properly. In the description of the bug, it says it is fixed in RHEL
7.5 but also says one can use
https://people.redhat.com/rmccabe/cloud-init/cloud-init-0.7.9-20.el7.x86_64.rpm
which is what we're doing.
When the new VM first boots, the 3 IPv4 settings are all set correctly.
Reboots of the VM maintain the settings properly. But, if the VM is shut down
and started again via the oVirt GUI, all of the IPv4 settings on the eth0
virtual NIC are lost and /etc/sysconfig/network-scripts/ifcfg-eth0 shows
Are we doing something incorrectly?
Sandy Berger
IT - Infrastructure Engineer II
Quad/Graphics
Performance through Innovation
Sussex, Wisconsin
414.566.2123 phone
414.566.4010/2123 pager/PIN
sandy.berger@qg.com
www.QG.com
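One workaround that is sometimes suggested for this pattern (treat it as a
sketch, not something verified against this exact template) is to let
cloud-init configure the network on the first boot only and then stop it from
rewriting ifcfg-eth0 on later starts:

# Inside the guest, after the first successful boot:
cat > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg <<'EOF'
network: {config: disabled}
EOF
# or, more bluntly, switch cloud-init off entirely once the VM is configured:
systemctl disable cloud-init cloud-init-local cloud-config cloud-final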
Poor performance of oVirt Export Domain
by Aleksey.I.Maksimov@yandex.ru
Hello
I deployed a dedicated server (fs05.holding.com) on CentOS 7.5 and created a VDO volume on it.
A local write test on the VDO volume on this server gives an acceptable result:
# dd if=/dev/zero of=/mnt/vdo-vd1/nfs/testfile count=1000000
1000000+0 records in
1000000+0 records out
512000000 bytes (512 MB) copied, 6.26545 s, 81.7 MB/s
The VDO volume is attached to the oVirt 4.2.5 cluster as the Export Domain via NFS.
I'm seeing a problem with low performance of this Export Domain.
Snapshots of virtual machines are copied very slowly to the Export Domain, at approximately 6-8 MB/s.
At the same time, if I run a write test in the mounted NFS directory on any of the oVirt cluster hosts, I get about 50-70 MB/s.
# dd if=/dev/zero of=/rhev/data-center/mnt/fs05.holding.com:_mnt_vdo-vd1_nfs_ovirt-vm-backup/testfile count=10000000
10000000+0 records in
10000000+0 records out
5120000000 bytes (5.1 GB) copied, 69.5506 s, 73.6 MB/s
Please help me understand the reason for the slow copying to the Export Domain in oVirt.
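For what it's worth, both dd tests above use dd's default 512-byte block size
and go through the page cache, so they may not reflect what the export copy
actually does. A test closer to that workload (only a sketch, reusing the
path from the post) would be something like:

dd if=/dev/zero of=/rhev/data-center/mnt/fs05.holding.com:_mnt_vdo-vd1_nfs_ovirt-vm-backup/testfile bs=1M count=1000 oflag=direct conv=fsync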
oVirt 4.2.5 : VM snapshot creation does not work : command HSMGetAllTasksStatusesVDS failed: Could not acquire resource
by Aleksey Maksimov
Hello
We use oVirt 4.2.5.2-1.el7 (hosted engine / 4 hosts in the cluster / about twenty virtual machines).
Virtual machine disks are located on a Data Domain backed by an FC SAN.
Snapshots of all virtual machines are created normally, but for one virtual machine we cannot create a snapshot.
When we try to create a snapshot in the oVirt web console, we see these errors:
Aug 13, 2018, 1:05:06 PM Failed to complete snapshot 'KOM-APP14_BACKUP01' creation for VM 'KOM-APP14'.
Aug 13, 2018, 1:05:01 PM VDSM KOM-VM14 command HSMGetAllTasksStatusesVDS failed: Could not acquire resource. Probably resource factory threw an exception.: ()
Aug 13, 2018, 1:05:00 PM Snapshot 'KOM-APP14_BACKUP01' creation for VM 'KOM-APP14' was initiated by petya@sub.holding.com(a)sub.holding.com-authz.
At the same time, on the server holding the SPM role, we see this in vdsm.log:
...
2018-08-13 05:05:06,471-0500 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call VM.getStats succeeded in 0.00 seconds (__init__:573)
2018-08-13 05:05:06,478-0500 INFO (jsonrpc/5) [jsonrpc.JsonRpcServer] RPC call Image.deleteVolumes succeeded in 0.05 seconds (__init__:573)
2018-08-13 05:05:06,478-0500 INFO (tasks/3) [storage.ThreadPool.WorkerThread] START task bb45ae7e-77e9-4fec-9ee2-8e1f0ad3d589 (cmd=<bound method Task.commit of <vdsm.storage.task.Task instance at 0x7f06b85a2128>>, args=None) (threadPool:208)
2018-08-13 05:05:07,009-0500 WARN (tasks/3) [storage.ResourceManager] Resource factory failed to create resource '01_img_6db73566-0f7f-4438-a9ef-6815075f45ea.cdf1751b-64d3-42bc-b9ef-b0174c7ea068'. Canceling request. (resourceManager:543)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 539, in registerResource
obj = namespaceObj.factory.createResource(name, lockType)
File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", line 193, in createResource
lockType)
File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceFactories.py", line 122, in __getResourceCandidatesList
imgUUID=resourceName)
File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 213, in getChain
if srcVol.isLeaf():
File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 1430, in isLeaf
return self._manifest.isLeaf()
File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 139, in isLeaf
return self.getVolType() == sc.type2name(sc.LEAF_VOL)
File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 135, in getVolType
self.voltype = self.getMetaParam(sc.VOLTYPE)
File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 119, in getMetaParam
meta = self.getMetadata()
File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", line 112, in getMetadata
md = VolumeMetadata.from_lines(lines)
File "/usr/lib/python2.7/site-packages/vdsm/storage/volumemetadata.py", line 103, in from_lines
"Missing metadata key: %s: found: %s" % (e, md))
MetaDataKeyNotFoundError: Meta Data key not found error: ("Missing metadata key: 'DOMAIN': found: {'NONE': '######################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################'}",)
2018-08-13 05:05:07,010-0500 WARN (tasks/3) [storage.ResourceManager.Request] (ResName='01_img_6db73566-0f7f-4438-a9ef-6815075f45ea.cdf1751b-64d3-42bc-b9ef-b0174c7ea068', ReqID='3d924e5e-60d1-47b0-86a7-c63585b56f09') Tried to cancel a processed request (resourceManager:187)
2018-08-13 05:05:07,010-0500 ERROR (tasks/3) [storage.TaskManager.Task] (Task='bb45ae7e-77e9-4fec-9ee2-8e1f0ad3d589') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
return fn(*args, **kargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336, in run
return self.cmd(*self.argslist, **self.argsdict)
File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
return method(self, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1966, in deleteVolume
with rm.acquireResource(img_ns, imgUUID, rm.EXCLUSIVE):
File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 1025, in acquireResource
return _manager.acquireResource(namespace, name, lockType, timeout=timeout)
File "/usr/lib/python2.7/site-packages/vdsm/storage/resourceManager.py", line 475, in acquireResource
raise se.ResourceAcqusitionFailed()
ResourceAcqusitionFailed: Could not acquire resource. Probably resource factory threw an exception.: ()
2018-08-13 05:05:07,059-0500 INFO (tasks/3) [storage.ThreadPool.WorkerThread] FINISH task bb45ae7e-77e9-4fec-9ee2-8e1f0ad3d589 (threadPool:210)
2018-08-13 05:05:07,246-0500 INFO (jsonrpc/1) [root] /usr/libexec/vdsm/hooks/after_get_caps/50_openstacknet: rc=0 err= (hooks:110)
2018-08-13 05:05:07,660-0500 INFO (jsonrpc/1) [root] /usr/libexec/vdsm/hooks/after_get_caps/openstacknet_utils.py: rc=0 err= (hooks:110)
2018-08-13 05:05:08,152-0500 INFO (jsonrpc/1) [root] /usr/libexec/vdsm/hooks/after_get_caps/ovirt_provider_ovn_hook: rc=0 err= (hooks:110)
Please help us to solve this problem.
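The traceback suggests that the metadata slot of one volume under image
cdf1751b-64d3-42bc-b9ef-b0174c7ea068 no longer contains a valid 'DOMAIN' key.
As a first diagnostic step (a sketch only; the vdsm-client argument names are
from memory, so check "vdsm-client Volume getInfo -h" on the host, and the
pool UUID is not shown in the post), one could inspect the volumes of that
image on the SPM:

# Block storage domains are LVM VGs named after the domain UUID; the MD_ tag
# of each LV points at its metadata slot:
lvs -o lv_name,lv_tags 6db73566-0f7f-4438-a9ef-6815075f45ea | grep cdf1751b
# Then dump what vdsm can read for each volume of the image:
vdsm-client Volume getInfo storagepoolID=<pool-uuid> storagedomainID=6db73566-0f7f-4438-a9ef-6815075f45ea imageID=cdf1751b-64d3-42bc-b9ef-b0174c7ea068 volumeID=<volume-uuid>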
Clarifications regarding ovirt-provider-ovn network
by Anurag Porripireddi
Hi,
I have some questions about an ovirt-provider-ovn based external network which is built on top of a physical network. I notice that VLAN tagging gets greyed out when I try to create an external provider network. Is it not supported by oVirt right now? I am on version 4.2.5.3. Also, if I create NICs on this external provider network (built atop a physical network) for VMs that are on different hosts but in the same cluster, I see they are unable to ping each other, although it works fine if they are on the same host. Is that expected behaviour? Could you please help me understand why?
Thanks,
Anurag
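Two hedged checks for the cross-host case (standard OVN/OVS tooling; where the
first command runs depends on where ovn-central / ovirt-provider-ovn is
installed, usually the engine host):

# On the OVN central: every hypervisor should appear as a chassis with a
# geneve encap pointing at its management IP.
ovn-sbctl show
# On each hypervisor: br-int should have geneve tunnel ports towards the other
# hosts; if they are missing, routing or a firewall blocking UDP 6081 between
# the hosts is a likely suspect.
ovs-vsctl show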
Hosted Engine Deployment Fails - Ansible Playbook
by jeremy_tourville@hotmail.com
Can someone help me troubleshoot what my issue is? I have tried to set up the hosted engine using both static and DHCP addresses, and it fails both times. Many thanks in advance!
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[root@vmh /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp96s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovirtmgmt state UP group default qlen 1000
link/ether 0c:c4:7a:f9:b9:88 brd ff:ff:ff:ff:ff:ff
3: enp96s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 0c:c4:7a:f9:b9:89 brd ff:ff:ff:ff:ff:ff
20: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 0c:c4:7a:f9:b9:88 brd ff:ff:ff:ff:ff:ff
inet 172.30.50.2/24 brd 172.30.50.255 scope global dynamic ovirtmgmt
valid_lft 84266sec preferred_lft 84266sec
inet6 fe80::ec4:7aff:fef9:b988/64 scope link
valid_lft forever preferred_lft forever
21: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:e2:1c:89 brd ff:ff:ff:ff:ff:ff
inet 192.168.124.1/24 brd 192.168.124.255 scope global virbr0
valid_lft forever preferred_lft forever
22: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:e2:1c:89 brd ff:ff:ff:ff:ff:ff
23: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UNKNOWN group default qlen 1000
link/ether fe:16:3e:52:19:f4 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe52:19f4/64 scope link
valid_lft forever preferred_lft forever
24: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 82:43:87:7f:eb:62 brd ff:ff:ff:ff:ff:ff
[root@vmh /]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/onn_vmh-ovirt--node--ng--4.2.5.1--0.20180731.0+1 145G 3.1G 134G 3% /
devtmpfs 63G 0 63G 0% /dev
tmpfs 63G 4.0K 63G 1% /dev/shm
tmpfs 63G 19M 63G 1% /run
tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/mapper/onn_vmh-var 15G 4.5G 9.4G 33% /var
/dev/mapper/onn_vmh-tmp 976M 4.1M 905M 1% /tmp
/dev/mapper/onn_vmh-home 976M 2.6M 907M 1% /home
/dev/mapper/PNY_CS900_240GB_SSD_PNY14182241350103ED2p1 976M 365M 545M 41% /boot
/dev/mapper/onn_vmh-var_log 7.8G 70M 7.3G 1% /var/log
/dev/mapper/onn_vmh-var_log_audit 2.0G 9.2M 1.8G 1% /var/log/audit
/dev/mapper/3600605b00a2faca222fb4da81ac9bdb1p1 7.4T 93M 7.1T 1% /srv
tmpfs 13G 0 13G 0% /run/user/0
I have pasted my log file- https://pastebin.com/dyYksxaC
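The generic "check the logs" error usually hides the real failure; hedged
places to look (standard log locations for a 4.2 hosted-engine deployment,
exact file names vary per run):

# On the host where the deployment ran:
ls -t /var/log/ovirt-hosted-engine-setup/
grep -iE 'fatal|failed' /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-*.log | less
# While the bootstrap engine VM is still up on the libvirt default network
# (virbr0 / 192.168.124.1 above), its console can also be reached for a look
# at the engine-side logs.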
Issue with NFS and Storage domain setup
by Inquirer Guy
Hi Ovirt,
I successfully installed both ovirt-engine (ENGINE01) and ovirt-node (NODE01)
on separate machines. I also created a FreeNAS server (NAS01) with an NFS
share and connected it to NODE01. Although I haven't set up a DNS server, I
manually added the hostnames on every machine, and I can look them up and
ping them without a problem. I was also able to add NODE01 to ENGINE01.
My issue is with creating a storage domain from ENGINE01. I did the steps
below before running engine-setup, following the guide at:
https://www.ovirt.org/documentation/how-to/troubleshooting/troubleshootin...
#touch /etc/exports
#systemctl start rpcbind nfs-server
#systemctl enable rpcbind nfs-server
#engine-setup
#mkdir /var/lib/exports/data
#chown vdsm:kvm /var/lib/exports/data
I added both of the export lines below just in case, but I have also tried each one alone and it fails either way.
#vi /etc/exports
/var/lib/exports/data *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
/var/lib/exports/data 0.0.0.0/0.0.0.0(rw)
#systemctl restart rpc-statd nfs-server
Once I start to add my storage domain I get the error below.
Attached is the engine log for your reference.
Hope you guys can help me with this; I'm really interested in this great
product. Thanks!
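A few checks that may help narrow this down (a sketch, assuming the export
really lives at /var/lib/exports/data on ENGINE01 and that NODE01 performs the
mount; adjust hostnames and paths accordingly):

# On ENGINE01: re-export and confirm what is actually published
exportfs -rav
showmount -e localhost
# oVirt expects the export to be owned by uid/gid 36 (vdsm:kvm)
ls -ldn /var/lib/exports/data
# On NODE01: try the mount by hand and a write as the vdsm user
mkdir -p /mnt/nfstest
mount -t nfs ENGINE01:/var/lib/exports/data /mnt/nfstest
sudo -u vdsm touch /mnt/nfstest/probe && sudo -u vdsm rm /mnt/nfstest/probe
umount /mnt/nfstest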