Error creating a storage domain (On Cisco UCS Only)
by nico.kruger@darkmatter.ae
Hi Guys,
I am trying to install a new cluster... I currently have one 9-node and two 6-node oVirt clusters (these were installed on 4.1 and upgraded to 4.2).
So I want to build a new cluster, which works fine on the HP notebook I use for testing (using a single-node Gluster deployment).
But when I try to install this on my production servers, which are Cisco UCS servers, I keep getting this error:
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Error creating a storage domain]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Error creating a storage domain]\". HTTP response code is 400."}
This happens during storage creation, after the hosted engine is built and after Gluster has been deployed (the error happens for both single-node and 3-replica deployments).
I just can't see how an install on one type of server succeeds but not on the UCS servers (which I am running my other oVirt clusters on).
BTW, I don't think the issue is related to the Gluster storage creation itself, as I tried using NFS and local storage and get the same error (on the UCS servers only).
I am using the ovirt-node-ng-installer-4.2.0-2019011406.el7.iso install ISO.
Below is a tail of the ovirt-hosted-engine-setup-ansible-create_storage_domain log file:
2019-01-27 11:09:49,754+0400 INFO ansible ok {'status': 'OK', 'ansible_task': u'Fetch Datacenter name', 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/create_storage_domain.yml', 'ansible_type': 'task'}
2019-01-27 11:09:49,754+0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f9f9b7e2d50> kwargs
2019-01-27 11:09:50,478+0400 INFO ansible task start {'status': 'OK', 'ansible_task': u'Add NFS storage domain', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/create_storage_domain.yml', 'ansible_type': 'task'}
2019-01-27 11:09:50,479+0400 DEBUG ansible on_any args TASK: Add NFS storage domain kwargs is_conditional:False
2019-01-27 11:09:51,151+0400 DEBUG var changed: host "localhost" var "otopi_storage_domain_details_nfs" type "<type 'dict'>" value: "{
"changed": false,
"skip_reason": "Conditional result was False",
"skipped": true
}"
2019-01-27 11:09:51,151+0400 INFO ansible skipped {'status': 'SKIPPED', 'ansible_task': u'Add NFS storage domain', 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/create_storage_domain.yml', 'ansible_type': 'task'}
2019-01-27 11:09:51,151+0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f9f9b7e2610> kwargs
2019-01-27 11:09:51,820+0400 INFO ansible task start {'status': 'OK', 'ansible_task': u'Add glusterfs storage domain', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/create_storage_domain.yml', 'ansible_type': 'task'}
2019-01-27 11:09:51,821+0400 DEBUG ansible on_any args TASK: Add glusterfs storage domain kwargs is_conditional:False
2019-01-27 11:10:02,045+0400 DEBUG var changed: host "localhost" var "otopi_storage_domain_details_gluster" type "<type 'dict'>" value: "{
"changed": false,
"exception": "Traceback (most recent call last):\n File \"/tmp/ansible_ovirt_storage_domain_payload_Xous24/__main__.py\", line 682, in main\n ret = storage_domains_module.create()\n File \"/tmp/ansible_ovirt_storage_domain_payload_Xous24/ansible_ovirt_storage_domain_payload.zip/ansible/module_utils/ovirt.py\", line 587, in create\n **kwargs\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 24225, in add\n return self._internal_add(storage_domain, headers, query, wait)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232, in _internal_add\n return future.wait() if wait else future\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in wait\n return self._code(response)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in callback\n self._check_fault(response)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in _check_fault\n
self._raise_error(response, body)\n File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in _raise_error\n raise error\nError: Fault reason is \"Operation Failed\". Fault detail is \"[Error creating a storage domain]\". HTTP response code is 400.\n",
"failed": true,
"msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Error creating a storage domain]\". HTTP response code is 400."
}"
2019-01-27 11:10:02,045+0400 DEBUG var changed: host "localhost" var "ansible_play_hosts" type "<type 'list'>" value: "[]"
2019-01-27 11:10:02,045+0400 DEBUG var changed: host "localhost" var "play_hosts" type "<type 'list'>" value: "[]"
2019-01-27 11:10:02,045+0400 DEBUG var changed: host "localhost" var "ansible_play_batch" type "<type 'list'>" value: "[]"
2019-01-27 11:10:02,046+0400 ERROR ansible failed {'status': 'FAILED', 'ansible_type': 'task', 'ansible_task': u'Add glusterfs storage domain', 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True, u\'exception\': u\'Traceback (most recent call last):\\n File "/tmp/ansible_ovirt_storage_domain_payload_Xous24/__main__.py", line 682, in main\\n ret = storage_domains_module.create()\\n File "/tmp/ansible_ovirt_storage_domain_payload_Xous24/ansible_ovirt_storage_domain_pay', 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/create_storage_domain.yml'}
2019-01-27 11:10:02,046+0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f9f9b859f50> kwargs ignore_errors:None
2019-01-27 11:10:02,048+0400 INFO ansible stats {'status': 'FAILED', 'ansible_playbook_duration': 39.840701, 'ansible_result': u"type: <type 'dict'>\nstr: {u'localhost': {'unreachable': 0, 'skipped': 2, 'ok': 13, 'changed': 0, 'failures': 1}}", 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/create_storage_domain.yml', 'ansible_type': 'finish'}
2019-01-27 11:10:02,048+0400 DEBUG ansible on_any args <ansible.executor.stats.AggregateStats object at 0x7f9f9da59a50> kwargs
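An HTTP 400 from the engine API is generic; the concrete failure usually shows up in the engine's audit events (or in /var/log/ovirt-engine/engine.log on the hosted engine). A minimal sketch for pulling recent error events, assuming the ovirtsdk4 package is installed and treating ENGINE_FQDN and PASSWORD as placeholders, not values from this thread:

```python
SEVERITIES_OF_INTEREST = ("error", "alert")

def error_descriptions(events):
    """Keep descriptions of error/alert events. `events` is an iterable
    of (severity_name, description) pairs, so this part can be checked
    without a live engine."""
    return [d for s, d in events if s in SEVERITIES_OF_INTEREST]

def recent_engine_errors(max_events=40):
    # Assumes ovirtsdk4 and a reachable engine; URL and credentials
    # below are placeholders.
    import ovirtsdk4 as sdk
    conn = sdk.Connection(
        url="https://ENGINE_FQDN/ovirt-engine/api",
        username="admin@internal",
        password="PASSWORD",
        insecure=True,  # lab only; prefer ca_file in production
    )
    try:
        events = conn.system_service().events_service().list(max=max_events)
        # severity enum values stringify to names like 'error'/'alert'
        return error_descriptions((str(e.severity), e.description)
                                  for e in events)
    finally:
        conn.close()
```

The filtering helper is split out so the logic is testable offline; the connection details are assumptions, not taken from the deployment above.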
This issue is driving me crazy... any assistance would be greatly appreciated.
Nico
5 years, 9 months
qemu-img info showed iscsi/FC lun size 0
by jingjie.jiang@oracle.com
Hi,
Based on oVirt 4.3.0, I have a data domain on an FC LUN, and I created a new VM with a disk on that FC data domain.
After the VM was created, qemu-img info reports the disk size as 0:
# qemu-img info /rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/8d3b455b-1da4-49f3-ba57-8cda64aa9dc9/949fa315-3934-4038-85f2-08aec52c1e2b
image: /rhev/data-center/mnt/blockSD/eaa6f641-6b36-4c1d-bf99-6ba77df3156f/images/8d3b455b-1da4-49f3-ba57-8cda64aa9dc9/949fa315-3934-4038-85f2-08aec52c1e2b
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 0
I tried with iSCSI and got the same result.
Is the behaviour expected?
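For context on why this can legitimately read as zero: qemu-img estimates "disk size" from the allocated block count (st_blocks) of the image path, and a block device node (such as the LV backing an image on an FC/iSCSI domain) reports zero allocated filesystem blocks. A small sketch of that calculation, using /dev/null as a stand-in device node:

```python
import os

def apparent_disk_size(path):
    """Allocated bytes the way qemu-img estimates 'disk size':
    512-byte blocks actually allocated, not the virtual size."""
    return os.stat(path).st_blocks * 512

# Device nodes (like the LV backing an image on a block domain)
# allocate no filesystem blocks, so the estimate comes out as 0.
print(apparent_disk_size("/dev/null"))  # -> 0
```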
Thanks,
Jingjie
5 years, 9 months
oVirt Performance (Horrific)
by drew.rash@gmail.com
Hi everybody, my coworker and I have some decent hardware that would make great single servers, and then we threw in a 10 Gb switch with 2x 10 Gbps cards in 4 boxes.
We have 2 excellent boxes: Supermicro boards (SpeedStep disabled) with 14-core Intel i9-7940Xs, 128 GB RAM (3200 MHz), a 1 TB M.2 Samsung 870 EVO, a 1 TB Samsung SSD, one 8 TB WD Gold, and one 6 TB WD Gold.
Then we have 2 more boxes (one with an 8-core i7-9700K, one with a 6-core i7-8700), 128 GB RAM in one and 64 GB in the other, all 3000 MHz, with the same 1 TB SSD, 6 TB WD Gold and 8 TB WD Gold drives as the other boxes, plus 10 Gbps cards.
Our problem is performance. We used the slower boxes for KVM (libvirt) and FreeNAS at first, which was great performance-wise. Then we bought the new Supermicro boxes, converted to oVirt + Gluster, did some basic write tests using dd (writing zeros to files from 1 GB up to 50 GB), and were happy with the numbers writing directly to the Gluster volume. But then we stuck a Windows VM on it and turned it on... I'll stop there, because turning it on stopped any performance testing. This thing blew goat cheese. It was so slow the oVirt guest agent doesn't even start, along with the MS SQL Server engine sometimes, and other errors.
So naturally, we removed Gluster from the equation. We took one of the 8 TB WD Gold drives, made it a Linux NFS share, and gave it to oVirt to put VMs on as an NFS domain. Just a single drive. We migrated the disk with the fresh Windows 10 installation to it, configured as VirtIO-SCSI, and booted the VM with 16 GB RAM and 8:1:1 CPUs. To our surprise it still blew. I ran winsat disk -drive c: for example purposes, with the SPICE viewer repeatedly freezing and Resource Monitor open showing 10,000 ms disk response times; I ended up rebooting because the results disappeared (I hadn't run it as administrator). Opening a command prompt is painful while the disk is still in use; Task Manager renders with no text on it. The disk is writing at something like 1 MB/s. The command prompt finally showed up, blank, with the cursor offset and no words anywhere.
So the reboot took... well, turning off took 2 minutes, booting took about 6 minutes 30 seconds, and logging in over a minute.
So 9-10 minutes to reboot and log back into a fresh Windows install, then 2 minutes to open a command prompt, Task Manager and Resource Monitor.
During the write test, disk I/O on the VM was less than 8 MB/s (from the graph it looks like 6 MB/s), network traffic about 20 Mbps average, CPU near zero, with a couple of spikes up to 30 MB/s on the disk. I ran the same test on my own disk and it finished in under a minute; on the VM it was still running after 30 minutes. After those 30 minutes it was still writing: I couldn't see the writes in Resource Monitor (Windows was busy doing random app updates, Candy Crush and such, on a fresh install), then I hit Enter a few times in the prompt and it moved on to a flush-seq phase and showed up in Resource Monitor again, running at almost 1 MB/s.
I think something went wrong, because each test claims only about 2 minutes passing, yet the total is 37 minutes. And at no time did the Windows resource graphs or any of the oVirt node graphs show more than about 6 MB/s, and definitely not 50 MB/s or 1 GB/s; the results below are flat-out lies.
C:\Windows\system32>winsat disk -drive c
Windows System Assessment Tool
> Running: Feature Enumeration ''
> Run Time 00:00:00.00
> Running: Storage Assessment '-drive c -ran -read'
> Run Time 00:00:12.95
> Running: Storage Assessment '-drive c -seq -read'
> Run Time 00:00:20.59
> Running: Storage Assessment '-drive c -seq -write'
> Run Time 00:02:04.56
> Run Time 00:02:04.56
> Running: Storage Assessment '-drive c -flush -seq'
> Run Time 00:01:02.75
> Running: Storage Assessment '-drive c -flush -ran'
> Run Time 00:01:50.20
> Dshow Video Encode Time 0.00000 s
> Dshow Video Decode Time 0.00000 s
> Media Foundation Decode Time 0.00000 s
> Disk Random 16.0 Read 5.25 MB/s 5.1
> Disk Sequential 64.0 Read 1220.56 MB/s 8.6
> Disk Sequential 64.0 Write 53.61 MB/s 5.5
> Average Read Time with Sequential Writes 22.994 ms 1.9
> Latency: 95th Percentile 85.867 ms 1.9
> Latency: Maximum 325.666 ms 6.5
> Average Read Time with Random Writes 29.548 ms 1.9
> Total Run Time 00:37:35.55
I even ran it again and it had exactly the same results. So I tried copying a file from a 1 Gbps network location with an SSD to this PC: a 4 GB CentOS 7 ISO to the desktop. It started at 20 MB/s, went up to 90 MB/s, then dropped after a couple of gigs; with 2.25 GB to go it was running at 2.7 MB/s, with some fluctuations up to 5 MB/s. To drive this home, at the same time I copied the same file off the same server to another server with SSD disks, and that ran at 100 MB/s, which is what I'd expect over a 1 Gbps network.
All this said, we do have an SSD Gluster 2+1 arbiter volume (which seemed fastest when we tested different variations) on the 1 TB SSDs. I was able to read from the array inside a VM at 550 MB/s, which is expected for an SSD. We did dd writes of zeros and also got about 550 MB/s from the oVirt node. But inside a VM the best we get is around ~10 MB/s writing.
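One caveat with dd-ing zeros straight to the volume: without sync flags it largely measures the page cache, while guest writes through VirtIO-SCSI include flushes that must reach stable storage. A sketch for measuring small synchronous writes instead (the path and sizes are arbitrary choices, not from this thread):

```python
import os
import tempfile
import time

def dsync_write_mbps(path, bs=4096, count=256):
    """Time small synchronous writes; each write waits for stable
    storage, which is much closer to what a guest's flushes see
    than a buffered dd of zeros."""
    buf = b"\0" * bs
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
    try:
        start = time.perf_counter()
        for _ in range(count):
            os.write(fd, buf)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return bs * count / elapsed / 1e6

# Point the path at a file on the gluster/NFS mount to compare with
# the in-VM numbers; a local SSD should come out far higher.
with tempfile.NamedTemporaryFile() as f:
    print(f"{dsync_write_mbps(f.name):.1f} MB/s")
```

Comparing the synchronous number on the mount against the same number on a local disk separates network/replication latency from the disks themselves.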
We've done the same testing using Windows Server 2016: booting is terrible, opening applications is terrible. With SQL Server running off the SSD Gluster volume I can read at 550 MB/s, but writing is horrific, somewhere around 2-10 MB/s.
Latency between the nodes with ping is around 100 us. The hardware should be able to do 200 MB/s on the HDDs and 550 MB/s on the SSDs, but it doesn't, and that's evident in every write scenario inside a VM. Migrating VMs also runs at this speed.
Gluster healing seems to run faster; we've seen it consume 7-9 Gbps. So I feel this is an oVirt issue and not Gluster, especially since all the tests above come out the same when using an NFS mount on the box running the VM in oVirt.
Please guide me. I can post pictures and such if needed, logs whatever. Just ask.
5 years, 9 months
Suggestions on adding 1000 or more networks to an engine/hosts
by Brian Wilson
So we have a use case where our engines will be hosting development sandbox clusters. We need to have upwards of 1000 networks on some of these.
I understand there is no theoretical limit to the number of networks, but does anybody have a good, reliable way of instantiating all of these networks?
We have been using Ansible for POCs; doing 100 networks has not been a problem, but when upping the number to many more, it begins to take longer and longer between each one, and eventually it timed out, having only got to 1567.
Example Task we are using for this:
tasks:
  - name: Add More Networks
    ovirt_network:
      auth: "{{ ovirt_auth }}"
      data_center: "{{ pcloud_name }}"
      name: "{{ pcloud_name }}-{{ item }}"
      state: present
      label: uplink
      vlan_tag: "{{ item }}"
      clusters:
        - name: "{{ ovirt_cluster }}"
          assigned: yes
          required: no
          display: no
          migration: no
          gluster: no
    with_sequence: start=1551 end=2500
What are some better ways of bulk-adding networks? Would the API provide a better solution, so as to sort of bulk-cache them and then initiate the save?
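One possible approach, since each ovirt_network task invocation pays module startup and authentication overhead: drive the Python SDK directly with one long-lived connection. This is only a sketch assuming the ovirtsdk4 package; the names mirror the play above:

```python
def network_names(prefix, start, end):
    """One network per VLAN tag, '<prefix>-<tag>' as in the play above."""
    return [f"{prefix}-{tag}" for tag in range(start, end + 1)]

def add_vlan_networks(conn, dc_name, prefix, start, end):
    """Reuses one authenticated SDK connection for the whole batch
    instead of one module invocation (and login) per network."""
    import ovirtsdk4.types as types
    svc = conn.system_service().networks_service()
    for tag, name in zip(range(start, end + 1),
                         network_names(prefix, start, end)):
        svc.add(types.Network(
            name=name,
            data_center=types.DataCenter(name=dc_name),
            vlan=types.Vlan(id=tag),
            required=False,
        ))
```

Attaching the networks to clusters and labelling them would still need the corresponding sub-services; whether this actually beats Ansible at 1000+ networks is worth measuring rather than assuming.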
5 years, 9 months
Thank you for your reply.
by Afzzal Ahmed
Assaalamu Alaikkum my dear friend, I am Mr Afzzal Ahmed, the chief operating officer with my bank and I want to inform you that an amount of US$37.3 million will be moved on your name as the Foreign Business Partner to our late deceased customer Mr. Berry Bryan Floyd, I need your help to receive this money as we shall share the money in the ratio of 60:40%. You will receive this amount through a bank wire transfer. Please send your full names, direct telephone numbers, and home address, more details of how to claim the form will be given upon your reply. You can reply me through my private e-mail ID: afzzallahmed(a)aol.com your quick response will be highly appreciated. Yours sincerely, Mr. Afzzal Ahmed
5 years, 9 months
[Users] Can't access RHEV-H aka ovirt-node
by Scotto Alberto
Hi all,
I can't login to the hypervisor, neither as root nor as admin, neither from another computer via ssh nor directly on the machine.
I'm sure I remember the passwords. This is not the first time it happens: last time I reinstalled the host. Everything worked OK for about 2 weeks, and then...
What's going on? Is it a known behavior, somehow?
Before rebooting the hypervisor, I would like to try something. RHEV Manager talks to RHEV-H without any problems. Can I login with RHEV-M's keys? How?
Thank you all.
Alberto Scotto
Via Cardinal Massaia, 83
10147 - Torino - ITALY
phone: +39 011 29100
al.scotto(a)reply.it
www.reply.it
--
The information transmitted is intended for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and delete the material from any computer.
5 years, 9 months
Ovirt Node 4.3 Gluster Install adding bricks
by pollard@tx.net
Sorry if this seems simple, but trial and error is how I learn. So, the basics: I installed Node 4.3 on 3 hosts and was following the setup for the self-hosted engine. The setup fails when detecting peers and indicates that they are already part of a cluster. So I restarted the install and chose to install the engine on a single node with Gluster. I now have all three nodes connected to the oVirt engine; however, my trouble is that I don't understand how to add the disks of the two other nodes to GlusterFS. I also have some extra disks on my primary node I want to add. I believe it is as follows, but I don't want to mess this up.
Under Compute >> Hosts >> $HOSTNAME
Select Storage Devices
Select Disk in my case sde
Then create a brick?
If this is the case, do I add it to an LV, and if so, which one: engine, data or vmstore?
Do I repeat this for each host?
I can duplicate the engine, data and vmstore bricks on each one, but that will still leave me with two disks on each node without assignment.
If anyone can help that would be great.
Thank you.
Pollard
5 years, 9 months
Unable to make Single Sign on working on Windows 7 Guest
by Felipe Herrera Martinez
In the case that I am able to create an installer, what does the name of the application need to be in order for oVirt to detect that the oVirt guest agent is installed?
I have created an installer adding the OvirtGuestService files and the product name to be shown, apart from the post-install command lines.
I have tried the names "ovirt-guest-agent" and "Ovirt guest agent" for the application installed on the Windows 7 guest, and even though both are presented in the oVirt VM Applications tab, in neither case does the LogonVDScommand appear.
Is there another option to make it work now?
Thanks in advance,
Felipe
5 years, 9 months
Re: [ovirt-users] Need VM run once api
by Chandrahasa S
Can anyone help with this?
Thanks & Regards
Chandrahasa S
From: Chandrahasa S/MUM/TCS
To: users(a)ovirt.org
Date: 28-07-2015 15:20
Subject: Need VM run once api
Hi Experts,
We are integrating oVirt with our internal cloud.
Here we installed cloud-init in a VM and then converted the VM to a template. We deploy the template with the initial run parameters Hostname, IP Address, Gateway and DNS.
But when we power it on, the initial run parameters are not getting pushed inside the VM. It does work when we power on the VM using the Run Once option in the oVirt portal.
I believe we need to power on the VM using a Run Once API, but we have not been able to find this API.
Can someone help with this?
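For what it's worth, run-once is not a separate endpoint: it maps to the start action on the VM resource, i.e. a POST to /api/vms/<id>/start with an <action> body. A sketch of such a body (hedged: the element name below is from the v4 API, so check the API version your engine exposes; ENGINE_FQDN, VM_ID and the credentials are placeholders):

```python
def run_once_body(use_cloud_init=True):
    """<action> payload for POST .../api/vms/<id>/start. With
    use_cloud_init set, the engine applies the cloud-init
    initialization (hostname, IP, DNS) on this boot, like the
    portal's Run Once dialog."""
    flag = "true" if use_cloud_init else "false"
    return f"<action><use_cloud_init>{flag}</use_cloud_init></action>"

# Hypothetical request (placeholders throughout):
#   curl -k -u 'admin@internal:PASSWORD' \
#        -H 'Content-Type: application/xml' \
#        -d '<action><use_cloud_init>true</use_cloud_init></action>' \
#        https://ENGINE_FQDN/api/vms/VM_ID/start
print(run_once_body())
```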
I got a reply to this query last time, but unfortunately the mail got deleted.
Thanks & Regards
Chandrahasa S
=====-----=====-----=====
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
5 years, 9 months
Re: [ovirt-users] Problem Windows guests start in pause
by Dafna Ron
Hi Lucas,
Please send mails to the list next time.
Can you please run rpm -qa | grep qemu?
Also, can you try a different Windows image?
Thanks,
Dafna
On 07/14/2014 02:03 PM, lucas castro wrote:
> On the host where I've tried to run the VM, I use CentOS 6.5,
> and checked: no updates for qemu, libvirt or related packages.
--
Dafna Ron
5 years, 9 months