assign quota to a user with the api rest services of ovirt
by racolina@softel.cu
Hi
I am running oVirt version 4.3.2 and I am trying to assign a user to a quota using the oVirt REST API services. I cannot find any service, or reference among the services, that would help me make the user-to-quota assignment.
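So far the closest I can get is listing the existing quotas of a data center with the Python SDK (ovirtsdk4). This is only a rough sketch with placeholder connection details; the user-to-quota assignment itself is exactly the part I still cannot find:

import ovirtsdk4 as sdk

# Placeholder connection details -- adjust for your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Each data center exposes its quotas as a sub-service.
dcs_service = connection.system_service().data_centers_service()
for dc in dcs_service.list():
    quotas_service = dcs_service.data_center_service(dc.id).quotas_service()
    for quota in quotas_service.list():
        print(dc.name, quota.name, quota.id)

connection.close()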
Please help
Thanks
6 years
cinderlib: VM migration fails
by Matthias Leopold
Hi,
after I successfully started my first VM with a cinderlib attached disk
in oVirt 4.3.2 I now want to test basic operations. I immediately
learned that migrating this VM (OS disk: iSCSI, 2nd disk: Managed Block)
fails with a java.lang.NullPointerException (see below) in engine.log.
This even happens when the cinderlib disk is deactivated.
Shall I report things like this here, shall I open a bug report or shall
I just wait because the feature is under development?
thx
Matthias
2019-04-08 12:57:40,250+02 INFO
[org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor]
(default task-66) [4ef05101] cinderlib output: {"driver_volume_type":
"rbd", "data": {"secret_type": "ceph", "name":
"ovirt-test/volume-2f053070-f5b7-4f04-856c-87a56d70cd75",
"auth_enabled": true, "keyring": "[client.ovirt-test_user_rbd]\n\tkey =
xxx\n", "cluster_name": "ceph", "secret_uuid": null, "hosts":
["xxx.xxx.216.45", "xxx.xxx.216.54", "xxx.xxx.216.55"], "volume_id":
"2f053070-f5b7-4f04-856c-87a56d70cd75", "discard": true,
"auth_username": "ovirt-test_user_rbd", "ports": ["6789", "6789", "6789"]}}
2019-04-08 12:57:40,256+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
(default task-66) [4ef05101] START,
AttachManagedBlockStorageVolumeVDSCommand(HostName = ov-test-04-01,
AttachManagedBlockStorageVolumeVDSCommandParameters:{hostId='59efbbfe-904a-4c43-9555-b544f77bb456',
vds='Host[ov-test-04-01,59efbbfe-904a-4c43-9555-b544f77bb456]'}), log
id: 67d3a79e
2019-04-08 12:57:40,262+02 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
(default task-66) [4ef05101] Failed in
'AttachManagedBlockStorageVolumeVDS' method, for vds: 'ov-test-04-01';
host: 'ov-test-04-01.foo.bar': null
2019-04-08 12:57:40,262+02 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
(default task-66) [4ef05101] Command
'AttachManagedBlockStorageVolumeVDSCommand(HostName = ov-test-04-01,
AttachManagedBlockStorageVolumeVDSCommandParameters:{hostId='59efbbfe-904a-4c43-9555-b544f77bb456',
vds='Host[ov-test-04-01,59efbbfe-904a-4c43-9555-b544f77bb456]'})'
execution failed: null
2019-04-08 12:57:40,262+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand]
(default task-66) [4ef05101] FINISH,
AttachManagedBlockStorageVolumeVDSCommand, return: , log id: 67d3a79e
2019-04-08 12:57:40,310+02 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-66) [4ef05101] EVENT_ID: VM_MIGRATION_FAILED(65),
Migration failed (VM: ovirt-test01.srv, Source: ov-test-04-03).
2019-04-08 12:57:40,314+02 INFO
[org.ovirt.engine.core.bll.MigrateVmCommand] (default task-66)
[4ef05101] Lock freed to object
'EngineLock:{exclusiveLocks='[4a8c9902-f9ab-490f-b1dd-82d9aee63b5f=VM]',
sharedLocks=''}'
2019-04-08 12:57:40,314+02 ERROR
[org.ovirt.engine.core.bll.MigrateVmCommand] (default task-66)
[4ef05101] Command 'org.ovirt.engine.core.bll.MigrateVmCommand' failed:
org.ovirt.engine.core.common.errors.EngineException: EngineException:
java.lang.NullPointerException (Failed with error ENGINE and code 5001)
2019-04-08 12:57:40,314+02 ERROR
[org.ovirt.engine.core.bll.MigrateVmCommand] (default task-66)
[4ef05101] Exception: javax.ejb.EJBException:
org.ovirt.engine.core.common.errors.EngineException: EngineException:
java.lang.NullPointerException (Failed with error ENGINE and code 5001)
6 years
remote-viewer can not display console properly
by pxb@zj.sgcc.com.cn
I installed oVirt 4.2.8 on CentOS 7; hosts and VMs run normally. However, the virtual machine console often does not display properly. When I open the console, only "connected to graphic server" is displayed.
6 years
4.3.2 Problem with prestarted vm in pool
by kiv@intercom.pro
Hi all!
I created a pool of 10 VMs from a Windows 10 template.
Type - automatic
Prestarted VMs - 2
I added the user UserVM from AD and assigned it the User role.
When I log in to the VM Portal with the UserVM account, the VMs are in the shut-down state. If I click "Run", the VM starts and works.
In the Administration Portal, 2 VMs from this pool are in the running state.
6 years
Re: [oVirt Jenkins] ovirt-system-tests_hc-basic-suite-master - Build # 1068 - Still Failing!
by Gobinda Das
Hi Simone/Ido,
Could you please help me here? Do we need to pass any value specifically
for content_type ?
On Mon, Apr 8, 2019 at 11:45 AM Gobinda Das <godas(a)redhat.com> wrote:
> There is a new field called "content_type" introduced in the
> ovirt-ansible-hosted-engine-setup role which is causing a problem.
>
> *07:32:47* TASK [ovirt.hosted_engine_setup : Add HE disks] *********************************07:32:48* failed: [localhost] (item={u'content': u'hosted_engine', u'description': u'Hosted-Engine disk', u'sparse': True, u'format': u'raw', u'size': u'61GiB', u'name': u'he_virtio_disk'}) => {"changed": false, "item": {"content": "hosted_engine", "description": "Hosted-Engine disk", "format": "raw", "name": "he_virtio_disk", "size": "61GiB", "sparse": true}, "msg": "Unsupported parameters for (ovirt_disk) module: content_type Supported parameters include: auth, bootable, description, download_image_path, fetch_nested, force, format, id, image_provider, interface, logical_unit, name, nested_attributes, openstack_volume_type, poll_interval, profile, quota_id, shareable, size, sparse, sparsify, state, storage_domain, storage_domains, timeout, upload_image_path, vm_id, vm_name, wait"}
>
>
> Right now I have no clue why it is saying "Unsupported parameters". In the code I can see that the content type is a constant value, i.e. 'hosted_engine', 'hosted_engine_sanlock', 'hosted_engine_configuration' and 'hosted_engine_metadata'.
>
>
> Ref: https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/commit/94499ec...
>
>
> On Mon, Apr 8, 2019 at 10:19 AM Sahina Bose <sabose(a)redhat.com> wrote:
>
>> Can you review this?
>> Both 4.3 and master seem to be failing
>>
>> On Mon, Apr 8, 2019 at 7:33 AM <jenkins(a)jenkins.phx.ovirt.org> wrote:
>> >
>> > Project:
>> http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/
>> > Build:
>> http://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/1068/
>> > Build Number: 1068
>> > Build Status: Still Failing
>> > Triggered By: Started by timer
>> >
>> > -------------------------------------
>> > Changes Since Last Success:
>> > -------------------------------------
>> > Changes for Build #1066
>> > [Eitan Raviv] hostlib: report host mgmt net and its attachment data
>> >
>> >
>> > Changes for Build #1067
>> > [Eitan Raviv] hostlib: report host mgmt net and its attachment data
>> >
>> >
>> > Changes for Build #1068
>> > [Dominik Holler] basic-suite-master: Use 384MB memory for VM0
>> >
>> > [Evgheni Dereveanchin] Make standard-manual-runner run concurrently
>> >
>> >
>> >
>> >
>> > -----------------
>> > Failed Tests:
>> > -----------------
>> > No tests ran.
>>
>
>
> --
>
>
> Thanks,
> Gobinda
>
--
Thanks,
Gobinda
6 years
Trying to reset password for ovirt wiki
by noc
Hoping someone can help me out.
For some reason I keep getting the following error when I try to reset
my password:
Reset password
* Error sending mail: Failed to add recipient: jvandewege(a)nieuwland.nl
[SMTP: Invalid response code received from server (code: 554,
response: 5.7.1 <jvandewege(a)nieuwland.nl>: Relay access denied)]
Complete this form to receive an e-mail reminder of your account details.
Since I receive the mailing list on this address, it is definitely a working address.
I tried my home account too and got the same error, but then for my home
provider: Relay access denied??
A puzzled user,
Joop
6 years
ovirt-guest-agent issue on rhel5.5
by John Michael Mercado
Hi All,
I need your help. Has anyone encountered the error below and found a
solution? Can you help me fix it?
MainThread::INFO::2015-01-27
10:22:53,247::ovirt-guest-agent::57::root::Starting oVirt guest agent
MainThread::ERROR::2015-01-27
10:22:53,248::ovirt-guest-agent::138::root::Unhandled exception in oVirt
guest agent!
Traceback (most recent call last):
File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 132, in ?
agent.run(daemon, pidfile)
File "/usr/share/ovirt-guest-agent/ovirt-guest-agent.py", line 63, in run
self.agent = LinuxVdsAgent(config)
File "/usr/share/ovirt-guest-agent/GuestAgentLinux2.py", line 371, in
__init__
AgentLogicBase.__init__(self, config)
File "/usr/share/ovirt-guest-agent/OVirtAgentLogic.py", line 171, in
__init__
self.vio = VirtIoChannel(config.get("virtio", "device"))
File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 150, in
__init__
self._stream = VirtIoStream(vport_name)
File "/usr/share/ovirt-guest-agent/VirtIoChannel.py", line 131, in
__init__
self._vport = os.open(vport_name, os.O_RDWR)
OSError: [Errno 2] No such file or directory:
'/dev/virtio-ports/com.redhat.rhevm.vdsm'
Thanks
6 years
[Users] oVirt Workshop at LinuxCon Japan 2012
by Leslie Hawthorn
Hello everyone,
As part of our efforts to raise awareness of and educate more developers
about the oVirt project, we will be holding an oVirt workshop at
LinuxCon Japan, taking place on June 8, 2012. You can find full details
of the workshop agenda on the LinuxCon Japan site. [0]
Registration for the workshop is now open and is free of charge for the
first 50 participants. We will also look at adding additional
participant slots to the workshop based on demand.
Attendees who register for LinuxCon Japan via the workshop registration
link [1] will also be eligible for a discount on their LinuxCon Japan
registration.
Please spread the word to folks you think would find the workshop
useful. If they have already registered for LinuxCon Japan, they can
simply edit their existing registration to include the workshop.
[0] -
https://events.linuxfoundation.org/events/linuxcon-japan/ovirt-gluster-wo...
[1] - http://www.regonline.com/Register/Checkin.aspx?EventID=1099949
Cheers,
LH
--
Leslie Hawthorn
Community Action and Impact
Open Source and Standards @ Red Hat
identi.ca/lh
twitter.com/lhawthorn
6 years
Re: huge page in ovirt 4.2.7
by Sharon Gratch
Hi Fabrice,
The "hugepages" custom property value in oVirt should be set to *size of
the pages in KiB* (i.e. 1GiB = 1048576, 2MiB = 2048).
In addition, it is recommended to set the huge page size of the VM to the
largest size supported by the host.
In the configuration you sent, the huge page size of the VM is set to 64
KiB, and since the VM's allocated memory size is at least 32,768 MiB it
requires at least (32768 * 1024 / 64 =) 524,288 pages. Since you only have
120 pages declared on the host, it failed with the error "...there are not
enough free huge pages to run the VM".
So to solve the problem, please change the VM's huge page size to match
the largest huge page size supported by the host, which is 1 GiB, and
therefore the hugepages value should be 1048576 instead of 64:
<custom_property>
<name>hugepages</name>
<value>1048576</value>
</custom_property>
Please note that since the VM's total memory size is no more than 64 GiB,
only 64 pages will be needed by the VM, which is less than the 120 pages
reserved on the host, and therefore OK.
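To make the arithmetic explicit, here is the same check written out as a short Python sketch (the numbers are simply the ones from the configuration you posted):

# The "hugepages" custom property is the page size in KiB.
guaranteed_mib = 32768     # <guaranteed> memory of the VM
max_mib = 65536            # <memory>/<max> of the VM
host_1g_pages = 120        # hugepagesz=1GB hugepages=120 on the host

# Current setting: 64 KiB pages.
pages_needed_64k = guaranteed_mib * 1024 // 64      # = 524288, far more than the 120 reserved pages
# Suggested setting: 1 GiB pages (value 1048576).
pages_needed_1g = max_mib * 1024 // 1048576         # = 64, which fits within the 120 reserved pages

print(pages_needed_64k, pages_needed_1g, host_1g_pages)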
Hope it helped.
Regards,
Sharon
On Wed, Nov 14, 2018 at 2:11 PM, Fabrice Bacchella <
fabrice.bacchella(a)orange.fr> wrote:
> I'm trying to understand huge pages in oVirt; I'm not quite sure I understand
> them well.
>
> I have an host with 128GiB. I have configured reserved huge page:
>
> cat /proc/cmdline
>
> ... hugepagesz=1GB hugepages=120
>
> $ grep -r . /sys/kernel/mm/hugepages
> /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_overcommit_hugepages:0
> /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages:120
> /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages_mempolicy:120
> /sys/kernel/mm/hugepages/hugepages-1048576kB/surplus_hugepages:0
> /sys/kernel/mm/hugepages/hugepages-1048576kB/resv_hugepages:0
> /sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages:120
> /sys/kernel/mm/hugepages/hugepages-2048kB/nr_overcommit_hugepages:0
> /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages:0
> /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages_mempolicy:0
> /sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages:0
> /sys/kernel/mm/hugepages/hugepages-2048kB/resv_hugepages:0
> /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages:0
>
> I have a big VM running on it:
> <custom_properties>
> <custom_property>
> <name>hugepages</name>
> <value>64</value>
> </custom_property>
> </custom_properties>
> <memory>68719476736</memory>, aka 65536 MiB
> <memory_policy>
> <guaranteed>34359738368</guaranteed>, aka 32768 MiB
> <max>68719476736</max>
> </memory_policy>
>
> And it keep failing when I want to start it:
> /var/log/ovirt-engine/engine.log:2018-11-14 12:56:06,937+01 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-66) [13c13a2c-f973-4ba2-b8bd-260e5b35a047] EVENT_ID:
> USER_FAILED_RUN_VM(54), Failed to run VM XXX due to a failed validation:
> [Cannot run VM. There is no host that satisfies current scheduling
> constraints. See below for details:, The host XXX did not satisfy internal
> filter HugePages because there are not enough free huge pages to run the
> VM.]
>
> The huge page fs is mounted:
>
> $ findmnt
> | |-/dev/hugepages1G hugetlbfs
> hugetlbfs rw,relatime,pagesize=1G
> | `-/dev/hugepages hugetlbfs
> hugetlbfs rw,relatime
>
> What am I missing ?
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/communit
> y/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archiv
> es/list/users(a)ovirt.org/message/VTYKTSSAXQQLS5HO5KOQSBDIHPTAHTOR/
>
>
6 years
Re: ERROR running your engine inside of the hosted-engine VM and are not in "Global Maintenance" mode
by Simone Tiraboschi
On Tue, Feb 5, 2019 at 1:46 PM Martin Humaj <mhumaj(a)gmail.com> wrote:
> Hi the problem is that ovirt-engine is running on different vm like a
> virtual machine under the hosts
>
> !! Cluster is in GLOBAL MAINTENANCE mode !!
> I can set it on the host machine but not on the ovirt-engine vm
>
>
Sorry,
one thing more to mention: the check is performed against the latest
information recorded in the DB.
So if the engine is down, it's not going to update the DB and so it will
never update the global maintenance status field.
If you are sure that you are in global maintenance mode and you want to
skip the check entirely, you can execute:
engine-setup
--otopi-environment=OVESETUP_CONFIG/continueSetupOnHEVM=bool:True
>
>
>
>
> On Tue, Feb 5, 2019 at 1:28 PM Simone Tiraboschi <stirabos(a)redhat.com>
> wrote:
>
>>
>>
>> On Tue, Feb 5, 2019 at 12:31 PM <mhumaj(a)gmail.com> wrote:
>>
>>> Hi,
>>>
>>> We ran the oVirt upgrade to 4.3. After the upgrade we wanted to run
>>> engine-setup, but we do not know how to put this host, which is simply
>>> another virtual machine with ovirt-engine, into global maintenance.
>>> hosted-engine is running on the hosts.
>>>
>>> During execution engine service will be stopped (OK, Cancel)
>>> [OK]:
>>> [ ERROR ] It seems that you are running your engine inside of the
>>> hosted-engine VM and are not in "Global Maintenance" mode.
>>> In that case you should put the system into the "Global
>>> Maintenance" mode before running engine-setup, or the hosted-engine HA
>>> agent might kill the machine, which might corrupt your data.
>>>
>>> [ ERROR ] Failed to execute stage 'Setup validation': Hosted Engine
>>> setup detected, but Global Maintenance is not set.
>>> [ INFO ] Stage: Clean up
>>> Log file is located at
>>> /var/log/ovirt-engine/setup/ovirt-engine-setup-20190205121802-l7llrw.log
>>> [ INFO ] Generating answer file
>>> '/var/lib/ovirt-engine/setup/answers/20190205121855-setup.conf'
>>> [ INFO ] Stage: Pre-termination
>>> [ INFO ] Stage: Termination
>>> [ ERROR ] Execution of setup failed
>>>
>>> from the hosted nodes
>>>
>>> --== Host 2 status ==--
>>>
>>> Host ID : 2
>>> Engine status : {"reason": "vm not running on this
>>> host", "health": "bad", "vm": "down", "detail": "unknown"}
>>> Can anyone please tell me how to put global maintenance on the virtual
>>> machine where the ovirt-engine is, not the hosts? Even if I put the hosts
>>> into global maintenance, I am unable to run engine-setup on the VM with
>>> ovirt-engine.
>>>
>>
>> Run this on one of your hosts:
>> hosted-engine --set-maintenance --mode=global
>> or from the webadmin UI, as you prefer.
>>
>>
>>>
>>> thanks
>>> _______________________________________________
>>> Users mailing list -- users(a)ovirt.org
>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3YJQVUDKVC2...
>>>
>>
6 years
4.3 rc2: info about shutdown of last host and networking page
by Gianluca Cecchi
Hello,
env with single host HCI deployment from node ng iso
After powering off the VMs, putting the cluster into global maintenance and
shutting down the hosted engine, I then ran shutdown on the host. I think
this would also simulate a total shutdown scenario, where the final step is
shutting down the last host.
This is what I see on the console about failures in several unmounts, of
both /var and /var/log and also the Gluster-related filesystems.
Could there be a dependency problem?
See here the screenshot:
https://drive.google.com/file/d/1wqqXlmT66hHLJ8DMcwfPvDXJHkWm_ijE/view?us...
Also I have not understood the networking page of host in Cockpit:
https://drive.google.com/file/d/1isX4F8qPcFmTyhmCVnZadhp6NY0R9Sy7/view?us...
in my scenario I was downloading a 1Gb image for the CentOS Atomic host
from public glance repo.
Questions:
- How should I interpret the receiving box graph? What are the different
color lines for?
- In the "receiving" column inside the "interfaces" section I see all "0" for
the Gluster interface (eth1) and an empty value (no value) for ovirtmgmt
(perhaps because it is unmanaged from the host's point of view?)
Also, it seems to me that the temporary bridge for hosted-engine deployment
on 192.168.124.0 (virbr0) is still there even after finishing the
deployment, although with "no carrier"... is this expected?
Would it perhaps be better to undefine the network after deployment?
Gianluca
6 years
Agentless Backup update: Trilio Vault...what a joke, pay $7500
by femi adegoke
Back in June of 2018, I posted regarding Agentless backup solutions.
Greg Sheremeta replied & mentioned Trilio.
The rest of the story...well just read on...
These guys at Trilio have turned out to be a complete waste of time. (I could use some more colorful language)
After my post & reply from Greg Sheremeta, I followed up with Trilio.
A lot of promises/dates were made (by Trilio) as to when a beta would be made available for testing.
Needless to say, they never came through.
Finally in Dec 2018, they said we are ready for the demo, we will show you a working product.
The day came, and Trilio said: sorry, it's too close to Xmas, we will postpone the demos till 2019.
In 2019, I continued to follow up & finally they set a date for another demo - Jan 29.
On Jan. 29, we get on the Webex, they said, sorry the demo just broke 10 mins prior to our call, so no demo.
They show me some screenshots & their OpenStack version & again promise to get me beta software in a few days.
I continue to follow up (via email)
Yesterday (Feb 6), I get an email from Thomas Lahive GM; Sales and Alliance Partners....(copied & pasted below):
"We started RHV betas and decided to prioritize current Trilio customers (those that purchased Triliovault for Openstack).
If you would like to be part of the beta now then We can sign you up as a certified Trilio reseller which has a $7,500 Starter Fee. The $7,500 will be credited against your first customer order that is at least $7,500 so it will eventually cost you nothing. Many of our partners can apply the fee against revenue so it's a great tax incentive, but you can confirm with your finance department.
Please Lmk how you would like to proceed."
Please remember, I have never seen a working demo of this product, never.
Is this typical behavior of RH partners?
6 years
[Users] Moving iSCSI Master Data
by rni@chef.net
Hi,
it's me again....
I started my oVirt 'project' as a proof of concept, but as always happens, it became production.
Now I have to move the iSCSI Master Data to the real iSCSI target.
Is there any way to do this and to get rid of the old Master Data?
Thank you for your help
Hans-Joachim
6 years
remote-viewer can not display console
by pxb@zj.sgcc.com.cn
I installed and configured oVirt 4.2.8 on CentOS 7.6; hosts and VMs run normally.
However, the virtual machine console often does not display properly. When I open the console, only "connected to graphic server" is displayed.
Is it an oVirt bug or a KVM bug?
6 years
oVirt 4.3.2 Disk extended broken (UI)
by Strahil Nikolov
Hi,
I have just extended the disk of one of my openSUSE VMs and I have noticed that although the disk is only 140 GiB (in the UI), the VM sees it as 180 GiB.
I think that this should not happen at all.
[root@ovirt1 ee8b1dce-c498-47ef-907f-8f1a6fe4e9be]# qemu-img info c525f67d-92ac-4f36-a0ef-f8db501102fa
image: c525f67d-92ac-4f36-a0ef-f8db501102fa
file format: raw
virtual size: 180G (193273528320 bytes)
disk size: 71G
Attaching some UI screenshots.
Note: I extended the disk via the UI by selecting 40 GB (old value in the UI -> 100 GB).
Best Regards,
Strahil Nikolov
6 years
errors upgrading ovirt 4.3.1 to 4.3.2
by Jason Keltz
Hi.
I have a few issues after a recent upgrade from 4.3.1 to 4.3.2:
1) Power management is no longer working. I'm using Dell drac7. This
has always worked previously. When I click on the "Test" button, I get:
"Testing in progress. It will take a few seconds. Please wait" but then
it just sits there and never returns.
2) After rekickstarting one of my hosts, when I click on it, and choose
"Host Console", I get "Authentication failed: invalid-hostkey". If I
click "Try again", I'm taken to a page with "404 - Page not found Click
here to continue". The page not found is likely a bug. Now, if I visit
cockpit directly on the host via its own URL, it works just fine. Given
that I deleted the host and re-added it to the engine, it's really not clear
to me how to tell the engine to refresh. I figured that after rekickstarting
the host, the problem would surely go away, but it did not.
3) From time to time, I am seeing the following error appear in engine:
"Uncaught exception occurred. Please try reloading the page. Details:
(TypeError): oab (...) is null Please have your administrator check the
UI logs". Another bug ...
Engine is standalone engine, not hosted.
Jason.
6 years
I can't create disk with ovirt 4.3.2
by siovelrm@gmail.com
Hello, I just installed oVirt 4.3.2 in self-hosted mode, all the same as in previous versions. When I want to create a disk with a user that is not the admin, I get the following error.
"Error while executing action Add Disk to VM: Internal Engine Error"
This happens with all other users except the admin, even when those users also have the SuperUser role. Please, I need your help.
Engine logs say the following
2019-04-05 15:34:09,977-04 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-13) [37026e0a-92e6-4bfa-9f0f-f052d9eced2d] Running command: AddDiskCommand internal: false. Entities affected: ID: c76a5059-f891-496a-b45f-7ba7ea878ceb Type: StorageAction group CREATE_DISK with role type USER
2019-04-05 15:34:10,002-04 WARN [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (default task-13) [37026e0a-92e6-4bfa-9f0f-f052d9eced2d] Validation of action 'AddImageFromScratch' failed for user jdoe@internal-authz. Reasons: VAR__TYPE__STORAGE__DOMAIN, NON_ADMIN_USER_NOT_AUTHORIZED_TO_PERFORM_ACTION_ON_HE
2019-04-05 15:34:10,070-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-13) [37026e0a-92e6-4bfa-9f0f-f052d9eced2d] EVENT_ID: USER_FAILED_ADD_DISK (2,023), Add-Disk operation failed (User: jdoe@internal-authz).
2019-04-05 15:34:10,432-04 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-8) [37026e0a-92e6-4bfa-9f0f-f052d9eced2d] Command 'AddDisk' id: '8b10a2b8-3a38-45a4-9c08-7e742eca001b' child commands '[bcfaa199-dee6-4ae4-9404-9b75cd8e9339]' executions were completed, status 'FAILED'
2019-04-05 15:34:11,461-04 ERROR [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-59) [37026e0a-92e6-4bfa-9f0f-f052d9eced2d] Ending command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' with failure.
2019-04-05 15:34:11,471-04 ERROR [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-59) [37026e0a-92e6-4bfa-9f0f-f052d9eced2d] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand' with failure.
2019-04-05 15:34:11,493-04 WARN [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-59) [] VM is null - not unlocking
2019-04-05 15:34:11,523-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-59) [] EVENT_ID: USER_ADD_DISK_FINISHED_FAILURE (2,022), Add-Disk operation failed to complete.
6 years
All hosts non-operational after upgrading from 4.2 to 4.3
by John Florian
I am in a severe pinch here. A while back I upgraded from 4.2.8 to 4.3.3
and only had one step remaining and that was to set the cluster compat
level to 4.3 (from 4.2). When I tried this it gave the usual warning that
each VM would have to be rebooted to complete, but then I got my first
unusual piece when it then told me next that this could not be completed
until each host was in maintenance mode. Quirky I thought, but I stopped
all VMs and put both hosts into maintenance mode. I then set the cluster
to 4.3. Things didn't want to become active again and I eventually noticed
that I was being told the DC needed to be 4.3 as well. Don't remember that
from before, but oh well that was easy.
However, the DC and SD remain down. The hosts are non-operational. I've powered
everything off and started fresh but still wind up in the same state.
Hosts will look like they're active for a bit (green triangle) but then go
non-operational after about a minute. It appears that my iSCSI sessions are
active/logged in. The one glaring thing I see in the logs is this in
vdsm.log:
2019-04-05 12:03:30,225-0400 ERROR (monitor/07bb1bf) [storage.Monitor]
Setting up monitor for 07bb1bf8-3b3e-4dc0-bc43-375b09e06683 failed
(monitor:329)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
326, in _setupLoop
self._setupMonitor()
File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
348, in _setupMonitor
self._produceDomain()
File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 158, in
wrapper
value = meth(self, *a, **kw)
File "/usr/lib/python2.7/site-packages/vdsm/storage/monitor.py", line
366, in _produceDomain
self.domain = sdCache.produce(self.sdUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110, in
produce
domain.getRealDomain()
File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in
getRealDomain
return self._cache._realProduce(self._sdUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134, in
_realProduce
domain = self._findDomain(sdUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151, in
_findDomain
return findMethod(sdUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 176, in
_findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'07bb1bf8-3b3e-4dc0-bc43-375b09e06683',)
How do I proceed to get back operational?
6 years
Re: Hosted-Engine constantly dies
by Strahil
Hi Simone,
> According to gluster administration guide:
> https://docs.gluster.org/en/latest/Administrator%20Guide/Network%20Config...
>
> in the "when to bond" section we can read:
> network throughput limit of client/server << storage throughput limit
>
> 1 GbE (almost always)
> 10-Gbps links or faster -- for writes, replication doubles the load on the network and replicas are usually on different peers to which the client can transmit in parallel.
>
> So if you are using oVirt hyper-converged in replica 3 you have to transmit everything twice over the storage network to sync it with the other peers.
>
> I'm not really into the details, but if https://bugzilla.redhat.com/1673058 is really as described, we even have a 5x overhead with current Gluster 5.x.
>
> This means that with a 1000 Mbps NIC we cannot expect more than:
> 1000 Mbps / 2 (other replicas) / 5 (overhead in Gluster 5.x ???) / 8 (bits per byte) = 12.5 MBytes per second, and this is definitely enough to have sanlock failing, especially because we don't have just the sanlock load, as you can imagine.
>
> I'd strongly advise moving to 10 Gigabit Ethernet (nowadays with a few hundred dollars you can buy a 4/5-port 10GBASE-T copper switch plus 3 NICs and the cables just for the Gluster network) or bonding a few 1 Gigabit Ethernet links.
I didn't know that.
So, with a 1 Gbit network, everyone should use replica 3 arbiter 1 volumes to minimize replication traffic.
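Just to write Simone's estimate down for myself, a rough Python sketch (the 5x factor is only the suspected overhead from the bug report quoted above, so treat it as an assumption):

link_mbps = 1000        # 1 GbE storage link
replica_copies = 2      # writes also go to the two other replicas
gluster_overhead = 5    # suspected overhead in Gluster 5.x (assumption, see the bug above)
bits_per_byte = 8

effective_mbyte_per_s = link_mbps / replica_copies / gluster_overhead / bits_per_byte
print(effective_mbyte_per_s)    # 12.5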
Best Regards,
Strahil Nikolov
6 years
VM bandwidth limitations
by John Florian
In my oVirt deployment at home, I'm trying to minimize the amount of
physical HW and its 24/7 power draw. As such I have the NFS server for
my domain virtualized. This is not used for oVirt's SD, but rather the
NFS server's back-end storage comes from oVirt's SD. To maximize the
performance of my NFS server, do I still need to use bonded NICs to
increase bandwidth like I would a physical server or does the
VirtIO-SCSI stuff magically make this unnecessary? In my head I can
argue it both ways, but have never seen it stated one way or the other,
oddly.
--
John Florian
6 years
Hosted-Engine constantly dies
by Strahil Nikolov
Hi Guys,
As I'm still quite new to oVirt, I have some problems finding the cause of this one. My Hosted Engine (4.3.2) is constantly dying (even when Global Maintenance is enabled). My interpretation of the logs indicates some lease problem, but I don't get the whole picture yet.
I'm attaching the output of 'journalctl -f | grep -Ev "Started Session|session opened|session closed"' after I have tried to power on the hosted engine (hosted-engine --vm-start).
The nodes are fully updated and I don't see anything in the gluster v5.5 logs, but I can double check.
Any hints are appreciated and thanks in advance.
Best Regards,
Strahil Nikolov
6 years
oVirt 4.3.2 Importing VMs from a detached domain not keeping cluster info
by Strahil Nikolov
Hello,
can someone tell me if this is an expected behaviour:
1. I have created a data storage domain exported by nfs-ganesha via NFS
2. Stop all VMs on the storage domain
3. Set to maintenance and detached (without wipe) the storage domain
3.2 All VMs are gone (which was expected)
4. Imported the existing data domain via Gluster
5. Went to the Gluster domain and imported all templates and VMs
5.2 Power on some of the VMs, but some of them failed
The reason for the failure is that some of the re-imported VMs were automatically assigned to the Default cluster, while they belonged to another one.
Most probably this is not a supported activity, but can someone clarify it ?
Thanks in advance.
Best Regards,
Strahil Nikolov
6 years
Upgrade 3.5 to 4.3
by Demeter Tibor
Hi All,
I began a very big project: I've started upgrading a cluster from 3.5 to 4.3...
It was a mistake :(
Since the upgrade, I can't start the host. The UI seems to be working fine.
[root@virt ~]# service vdsm-network start
Redirecting to /bin/systemctl start vdsm-network.service
Job for vdsm-network.service failed because the control process exited with error code. See "systemctl status vdsm-network.service" and "journalctl -xe" for details.
[root@virt ~]# service vdsm-network status
Redirecting to /bin/systemctl status vdsm-network.service
● vdsm-network.service - Virtual Desktop Server Manager network restoration
Loaded: loaded (/usr/lib/systemd/system/vdsm-network.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-04-04 15:03:44 CEST; 6s ago
Process: 19325 ExecStart=/usr/bin/vdsm-tool restore-nets (code=exited, status=1/FAILURE)
Process: 19313 ExecStartPre=/usr/bin/vdsm-tool --vvverbose --append --logfile=/var/log/vdsm/upgrade.log upgrade-unified-persistence (code=exited, status=0/SUCCESS)
Main PID: 19325 (code=exited, status=1/FAILURE)
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: return tool_command[cmd]["command"](*args)
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: File "/usr/lib/python2.7/site-packages/vdsm/tool/restore_nets.py", line 41, in restore_command
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: exec_restore(cmd)
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: File "/usr/lib/python2.7/site-packages/vdsm/tool/restore_nets.py", line 54, in exec_restore
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: raise EnvironmentError('Failed to restore the persisted networks')
Apr 04 15:03:44 virt.bolax.hu vdsm-tool[19325]: EnvironmentError: Failed to restore the persisted networks
Apr 04 15:03:44 virt.bolax.hu systemd[1]: vdsm-network.service: main process exited, code=exited, status=1/FAILURE
Apr 04 15:03:44 virt.bolax.hu systemd[1]: Failed to start Virtual Desktop Server Manager network restoration.
Apr 04 15:03:44 virt.bolax.hu systemd[1]: Unit vdsm-network.service entered failed state.
Apr 04 15:03:44 virt.bolax.hu systemd[1]: vdsm-network.service failed.
Since then, the host does not start.
Apr 04 14:56:00 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:00.764+0000: 17705: error : virNetSocketReadWire:1806 : End of file while reading data: Input/output error
Apr 04 14:56:00 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:00.965+0000: 17709: error : virNetSASLSessionListMechanisms:393 : internal error: cannot list SASL ...line 1757)
Apr 04 14:56:00 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:00.966+0000: 17709: error : remoteDispatchAuthSaslInit:3440 : authentication failed: authentication failed
Apr 04 14:56:00 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:00.966+0000: 17705: error : virNetSocketReadWire:1806 : End of file while reading data: Input/output error
Apr 04 14:56:01 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:01.167+0000: 17710: error : virNetSASLSessionListMechanisms:393 : internal error: cannot list SASL ...line 1757)
Apr 04 14:56:01 virt.bolax.hu libvirtd[17705]: 2019-04-04 12:56:01.167+0000: 17710: error : remoteDispatchAuthSaslInit:3440 : authentication failed: authentication failed
[root@virt ~]# service vdsm-network status
Redirecting to /bin/systemctl status vdsm-network.service
● vdsm-network.service - Virtual Desktop Server Manager network restoration
Loaded: loaded (/usr/lib/systemd/system/vdsm-network.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-04-04 15:00:39 CEST; 2min 49s ago
Process: 19079 ExecStart=/usr/bin/vdsm-tool restore-nets (code=exited, status=1/FAILURE)
Process: 19045 ExecStartPre=/usr/bin/vdsm-tool --vvverbose --append --logfile=/var/log/vdsm/upgrade.log upgrade-unified-persistence (code=exited, status=0/SUCCESS)
Main PID: 19079 (code=exited, status=1/FAILURE)
Also, I upgraded up to 4.0, but it seems to be the same. I can't reinstall and reactivate the host.
Originally it was an AIO installation.
Please help me solve this problem.
Thanks in advance,
R
Tibor
6 years
NPE for GetValidHostsForVmsQuery
by nicolas@devels.es
Hi,
We're running oVirt 4.3.2. When we click on the "Migrate" button over a
VM, an error popup shows up and in the ovirt-engine log we see:
2019-04-03 12:37:40,897+01 ERROR
[org.ovirt.engine.core.bll.GetValidHostsForVmsQuery] (default task-6)
[478381f0-18e3-4c96-bcb5-aafd116d7b7a] Query 'GetValidHostsForVmsQuery'
failed: null
I'm attaching the full NPE.
Could someone point out what could be the reason for the NPE?
Thanks.
6 years
Ansible hosted-engine deploy still doesnt support manually defined ovirtmgmt?
by Callum Smith
Dear All,
We're trying to deploy our hosted engine remotely using the ansible hosted engine playbook, which has been a rocky road but we're now at the point where it's installing, and failing. We've got a pre-defined bond/VLAN setup for our interface which has the correct bond0 bond0.123 and ovirtmgmt bridge on top but we're hitting the classic error:
Failed to find a valid interface for the management network of host virthyp04.virt.in.bmrc.ox.ac.uk. If the interface ovirtmgmt is a bridge, it should be torn-down manually.
Does this bug still exist in the latest (4.3) version, and is installing using ansible with this network configuration impossible?
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum(a)well.ox.ac.uk<mailto:callum@well.ox.ac.uk>
6 years
Fwd: Re: VDI broker and oVirt
by Jorick Astrego
Forward to the list.
-------- Forwarded Message --------
Subject: Re: [ovirt-users] Re: VDI broker and oVirt
Date: Fri, 05 Apr 2019 04:52:13 -0400
From: alex(a)triadic.us
As far as official software, the best you'll find is the user portal.
There is also this...
https://github.com/nkovacne/ovirt-desktop-client
We used that as a code base for our own VDI connector, using smart card
PKCS#12 certs to auth to the oVirt API.
On Apr 5, 2019 4:08 AM, Jorick Astrego <jorick(a)netbulae.eu> wrote:
Hi,
I think you mean to ask about the connection broker to connect to
your VDI infrastructure?
Something like this:
Or
https://www.leostream.com/solution/remote-access-for-virtual-and-physical...
Ovirt has the VM user portal https://github.com/oVirt/ovirt-web-ui ,
but I have never used a third party connection broker myself so I'm
not aware of any compatible with oVirt or RHEV...
On 4/4/19 9:10 PM, oquerejazu(a)gmail.com
<mailto:oquerejazu@gmail.com> wrote:
I have Ovirt installed, with two hypervisor and Ovirt engine. I Want To mount a VDI infrastructure, as cheap as possible, but robust and reliable. The question is what broker I can use. Thank you.
_______________________________________________
Users mailing list -- users(a)ovirt.org <mailto:users@ovirt.org>
To unsubscribe send an email to users-leave(a)ovirt.org <mailto:users-leave@ovirt.org>
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/GLH7Y5HCOMB...
Met vriendelijke groet, With kind regards,
Jorick Astrego
*
Netbulae Virtualization Experts *
------------------------------------------------------------------------
Tel: 053 20 30 270 info(a)netbulae.eu Staalsteden 4-3A KvK 08198180
Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW
NL821234584B01
------------------------------------------------------------------------
Met vriendelijke groet, With kind regards,
Jorick Astrego
Netbulae Virtualization Experts
----------------
Tel: 053 20 30 270 info(a)netbulae.eu Staalsteden 4-3A KvK 08198180
Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
----------------
6 years
Re: Controller recomandation - LSI2008/9265
by Strahil
Adding the Gluster users' mailing list.
On Apr 5, 2019 06:02, Leo David <leoalex(a)gmail.com> wrote:
>
> Hi Everyone,
> Any thoughts on this ?
>
>
> On Wed, Apr 3, 2019, 17:02 Leo David <leoalex(a)gmail.com> wrote:
>>
>> Hi Everyone,
>> For a hyperconverged setup started with 3 nodes and going up in time up to 12 nodes, I have to choose between LSI2008 ( jbod ) and LSI9265 (raid).
>> Perc h710 ( raid ) might be an option too, but on a different chassis.
>> There will not be many disk installed on each node, so the replication will be replica 3 replicated-distribute volumes across the nodes as:
>> node1/disk1 node2/disk1 node3/disk1
>> node1/disk2 node2/disk2 node3/disk2
>> and so on...
>> As i will add nodes to the cluster , I intend expand the volumes using the same rule.
>> What would it be a better way, to used jbod cards ( no cache ) or raid card and create raid0 arrays ( one for each disk ) and therefore have a bit of raid cache ( 512Mb ) ?
>> Is raid caching a benefit to have it underneath ovirt/gluster as long as I go for "Jbod" installation anyway ?
>> Thank you very much !
>> --
>> Best regards, Leo David
6 years
VDI broker and oVirt
by oquerejazu@gmail.com
Hello,
What is the name of the broker that we can install in oVirt?
Any documentation?
Thanks!!
6 years
Controller recomandation - LSI2008/9265
by Leo David
Hi Everyone,
For a hyperconverged setup started with 3 nodes and going up in time up to
12 nodes, I have to choose between LSI2008 ( jbod ) and LSI9265 (raid).
Perc h710 ( raid ) might be an option too, but on a different chassis.
There will not be many disks installed on each node, so the replication will
be replica 3 distributed-replicated volumes across the nodes, as:
node1/disk1 node2/disk1 node3/disk1
node1/disk2 node2/disk2 node3/disk2
and so on...
As I add nodes to the cluster, I intend to expand the volumes using the
same rule.
Which would be the better way: to use JBOD cards (no cache), or a RAID
card and create RAID0 arrays (one for each disk) and therefore have a bit
of RAID cache (512 MB)?
Is RAID caching a benefit to have underneath oVirt/Gluster, as long as I
go for a "JBOD" installation anyway?
Thank you very much !
--
Best regards, Leo David
6 years
Broker ovirt
by Oscar Querejazu
Hello
I'm looking for which broker I can use with oVirt, that's the question, and
thus be able to deploy desktops based on templates.
6 years
status of uploaded images complete and not OK
by Gianluca Cecchi
Hello,
I'm using oVirt 4.3.2 and I have an iSCSI based storage domain.
I see that if I upload an iso image or a qcow2 file, at the end it remains
in "Complete" status and not "OK"
See here:
https://drive.google.com/file/d/1rTuVB1_MGxudVCx-ok7mE2BirWF0rx7x/view?us...
I can use them, but it seems a bit strange and in fact I had a problem in
putting a host into maintenance due to this. See here:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4UJ75YMYSPW...
In fact, from the DB point of view I can see this (for 2 more uploads done
in March, as described above), where the status remains in phase 9...
engine=# select last_updated, message, bytes_sent, bytes_total, active, phase
from image_transfers ;
        last_updated        |        message        | bytes_sent  | bytes_total | active | phase
----------------------------+-----------------------+-------------+-------------+--------+-------
 2019-03-14 16:22:11.728+01 | Finalizing success... |   524288000 |   565182464 | t      |     9
 2019-03-14 16:26:00.03+01  | Finalizing success... |  6585057280 |  6963593216 | t      |     9
 2019-04-04 17:16:29.889+02 | Finalizing success... | 21478375424 | 21478375424 | f      |     9
 2019-04-04 19:35:28.688+02 | Finalizing success... |  4542431232 |  4588568576 | t      |     9
(4 rows)
engine=#
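For completeness, the same phase can also be read through the API; below is a minimal sketch with the Python SDK (ovirtsdk4), with placeholder connection details, just to show where I am looking:

import ovirtsdk4 as sdk

# Placeholder connection details -- adjust for your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

transfers_service = connection.system_service().image_transfers_service()
for transfer in transfers_service.list():
    # I would expect the phase to become finished_success once finalization
    # really completes, instead of staying at "Finalizing success".
    print(transfer.id, transfer.phase)

connection.close()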
Any hint? anyone to verify an upload on block storage on 4.3.2?
Thanks,
Gianluca
6 years
Eve-ng KVM acceleration not working in Ovirt
by robertodg@prismatelecomtesting.com
Hello everyone!
Actually I'm encountering a huge problem that I'll explain in steps, in order to be clear and discursive. First of all, I've deployed eve-ng Community Edition 2.0.3-92 on a KVM with those specs:
Server DELL PowerEdge R420
CPU: Intel(R) Xeon(R) CPU E5-2420 0 @ 1.90GHz (24 Core total)
RAM: 24GB RAM
HDD: 4x 2TB
All was working fine: KVM virtualization was working well and I was able to deploy qemu images like Palo Alto (PANOS-8.0.5) with the qemu option accell=kvm without any problem.
After an internal company upgrade, I decided to migrate this tool to oVirt 4.2, on a node with the following specs:
SuperMicro Server
CPU: Intel (R) Xeon (R) Platinum 8160T @ 2.10GHz (96 Core Total)
RAM: 256 GB (16x16) DDR4 2666Mhz Samsung M393A2K40BB2-CTD
HDD: x 2 Seagate ST500LM021-1KJ152 SATA 3.0 500GB
SSD: x 2 SSD Samsung SSD 850 PRO 1TB
NVMe: x 4 KXG50ZNV512G TOSHIBA 512GB (RAID0)
Before migration eve-ng had 12 cores, 12 GB RAM and a 90 GB HDD. After migration it now has 16 cores, 32 GB RAM and 100 GB on RAID0.
On oVirt, Pass-Through Host CPU is activated on the VM and CPU nesting is activated too; after the new release (2.0.5-95) these are the eve-info command logs:
eve-ng-info-ovirt: https://paste.fedoraproject.org/paste/Mom-~CXmnlU3cWCUdxPXFg
Those are the previous eve-info command logs (on KVM):
eve-ng-info-kvm: https://paste.fedoraproject.org/paste/AhqiapiyG5lJcHQBlrbJgw
Also, here you can find output of cpuinfo of both node (KVM and Ovirt):
cpuinfo-kvm: https://paste.fedoraproject.org/paste/6TQDHxCmiEMosF1qtcdSDQ
cpuinfo-ovirt: https://paste.fedoraproject.org/paste/LmieowOvc7WwkEWW9JMduQ
On both nodes, output of cat /sys/module/kvm_intel/parameters/nested is Y
Lastly, this is the output of virsh dumpxml on both nodes:
eve-ng-ovirt.xml: https://paste.fedoraproject.org/paste/s5SbdWhBHTWJs9O2Be~y8w
eve-ng-kvm.xml: https://paste.fedoraproject.org/paste/Yz1E9taAkNT-JyTj8cBR-g
However, after the migration I'm not able to deploy qemu images with accell=kvm enabled anymore. With the same example, PANOS-8.0.5, I'm able to run this qemu image only with this setting removed.
Thinking it was a migration problem, I deployed a new eve-ng VM with the same specs, but the problem is still present.
Do you have any idea what the problem might be?
Thanks for your support.
6 years
used network for uploading api
by Nathanaël Blanchet
Hi all,
I noticed a bad transfer rate when uploading a floating disk through the
upload API.
I set up two networks: one for management with a dedicated 1 Gb link and
another 10 GbE one for the other VLANs.
It seems that the upload is done over the management network; can anyone
confirm this?
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
6 years
UI bug viewing/editing host
by Callum Smith
2019-04-04 10:43:35,383Z ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-15) [] Permutation name: 0D2DB7A91B469CC36C64386E5632FAC5
2019-04-04 10:43:35,383Z ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-15) [] Uncaught exception: com.google.gwt.core.client.JavaScriptException: (TypeError) : oab(...) is null
at org.ovirt.engine.ui.webadmin.section.main.view.popup.host.HostPopupView.$lambda$0(HostPopupView.java:693)
at org.ovirt.engine.ui.webadmin.section.main.view.popup.host.HostPopupView$lambda$0$Type.eventRaised(HostPopupView.java:693)
at org.ovirt.engine.ui.uicompat.Event.$raise(Event.java:99)
at org.ovirt.engine.ui.uicommonweb.models.ListModel.$setSelectedItem(ListModel.java:82)
at org.ovirt.engine.ui.uicommonweb.models.ListModel.setSelectedItem(ListModel.java:78)
at org.ovirt.engine.ui.uicommonweb.models.ListModel.itemsChanged(ListModel.java:236)
at org.ovirt.engine.ui.uicommonweb.models.ListModel.$itemsChanged(ListModel.java:224)
at org.ovirt.engine.ui.uicommonweb.models.ListModel.$setItems(ListModel.java:102)
at org.ovirt.engine.ui.uicommonweb.models.hosts.HostModel.$updateClusterList(HostModel.java:1037)
at org.ovirt.engine.ui.uicommonweb.models.hosts.HostModel.$lambda$13(HostModel.java:1017)
at org.ovirt.engine.ui.uicommonweb.models.hosts.HostModel$lambda$13$Type.onSuccess(HostModel.java:1017)
at org.ovirt.engine.ui.frontend.Frontend$1.$onSuccess(Frontend.java:227) [frontend.jar:]
at org.ovirt.engine.ui.frontend.Frontend$1.onSuccess(Frontend.java:227) [frontend.jar:]
at org.ovirt.engine.ui.frontend.communication.OperationProcessor$1.$onSuccess(OperationProcessor.java:133) [frontend.jar:]
at org.ovirt.engine.ui.frontend.communication.OperationProcessor$1.onSuccess(OperationProcessor.java:133) [frontend.jar:]
at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.$onSuccess(GWTRPCCommunicationProvider.java:270) [frontend.jar:]
at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider.java:270) [frontend.jar:]
at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.onResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:]
at com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:233) [gwt-servlet.jar:]
at com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409) [gwt-servlet.jar:]
at Unknown.onreadystatechange<(https://he.virt.in.bmrc.ox.ac.uk/ovirt-engine/webadmin/?locale=en_US#host...)
at com.google.gwt.core.client.impl.Impl.apply(Impl.java:236) [gwt-servlet.jar:]
at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:275) [gwt-servlet.jar:]
at Unknown.Su/<(https://he.virt.in.bmrc.ox.ac.uk/ovirt-engine/webadmin/?locale=en_US#host...)
at Unknown.anonymous(Unknown)
2019-04-04 10:43:40,636Z ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-15) [] Permutation name: 0D2DB7A91B469CC36C64386E5632FAC5
2019-04-04 10:43:40,636Z ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-15) [] Uncaught exception: com.google.gwt.event.shared.UmbrellaException: Exception caught: (TypeError) : oab(...) is null
at java.lang.Throwable.Throwable(Throwable.java:70) [rt.jar:1.8.0_201]
at java.lang.RuntimeException.RuntimeException(RuntimeException.java:32) [rt.jar:1.8.0_201]
at com.google.web.bindery.event.shared.UmbrellaException.UmbrellaException(UmbrellaException.java:64) [gwt-servlet.jar:]
at com.google.gwt.event.shared.UmbrellaException.UmbrellaException(UmbrellaException.java:25) [gwt-servlet.jar:]
at com.google.gwt.event.shared.HandlerManager.$fireEvent(HandlerManager.java:117) [gwt-servlet.jar:]
at com.google.gwt.user.client.ui.Widget.$fireEvent(Widget.java:127) [gwt-servlet.jar:]
at com.google.gwt.user.client.ui.Widget.fireEvent(Widget.java:127) [gwt-servlet.jar:]
at com.google.gwt.event.dom.client.DomEvent.fireNativeEvent(DomEvent.java:110) [gwt-servlet.jar:]
at com.google.gwt.user.client.ui.Widget.$onBrowserEvent(Widget.java:163) [gwt-servlet.jar:]
at com.google.gwt.user.client.ui.Widget.onBrowserEvent(Widget.java:163) [gwt-servlet.jar:]
at com.google.gwt.user.client.DOM.dispatchEvent(DOM.java:1415) [gwt-servlet.jar:]
at com.google.gwt.user.client.impl.DOMImplStandard.dispatchEvent(DOMImplStandard.java:312) [gwt-servlet.jar:]
at com.google.gwt.core.client.impl.Impl.apply(Impl.java:236) [gwt-servlet.jar:]
at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:275) [gwt-servlet.jar:]
at Unknown.Su/<(https://he.virt.in.bmrc.ox.ac.uk/ovirt-engine/webadmin/?locale=en_US#host...)
at Unknown.anonymous(Unknown)
Caused by: com.google.gwt.core.client.JavaScriptException: (TypeError) : oab(...) is null
at org.ovirt.engine.ui.uicommonweb.models.hosts.HostListModel.$onSave(HostListModel.java:816)
at org.ovirt.engine.ui.uicommonweb.models.hosts.HostListModel.executeCommand(HostListModel.java:1969)
at org.ovirt.engine.ui.uicommonweb.UICommand.$execute(UICommand.java:163)
at org.ovirt.engine.ui.common.presenter.AbstractModelBoundPopupPresenterWidget.$lambda$4(AbstractModelBoundPopupPresenterWidget.java:306)
at org.ovirt.engine.ui.common.presenter.AbstractModelBoundPopupPresenterWidget$lambda$4$Type.onClick(AbstractModelBoundPopupPresenterWidget.java:306)
at com.google.gwt.event.dom.client.ClickEvent.dispatch(ClickEvent.java:55) [gwt-servlet.jar:]
at com.google.gwt.event.shared.GwtEvent.dispatch(GwtEvent.java:76) [gwt-servlet.jar:]
at com.google.web.bindery.event.shared.SimpleEventBus.$doFire(SimpleEventBus.java:173) [gwt-servlet.jar:]
... 12 more
Clean install ovirt-node-ng-4.3.0-0.20190204.0+1
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum(a)well.ox.ac.uk
6 years
[ANN] oVirt 4.3.3 Second Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.3 Second Release Candidate, as of April 4th, 2019.
This update is a release candidate of the third in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available[2]
Additional Resources:
* Read more about the oVirt 4.3.3 release highlights:
http://www.ovirt.org/release/4.3.3/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.3/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
6 years
Re: [Gluster-users] Gluster 5.5 slower than 3.12.15
by Strahil
Hi Amar,
I would like to test Gluster v6, but as I'm quite new to oVirt I'm not sure if oVirt <-> Gluster will communicate properly.
Did anyone test a rollback from v6 to v5.5? If a rollback is possible, I would be happy to give it a try.
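One thing I would check before and after such a test is the cluster op-version, since my understanding (not verified) is that a bumped op-version is what blocks a downgrade:
gluster volume get all cluster.op-version
gluster volume get all cluster.max-op-version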
Best Regards,
Strahil Nikolov
On Apr 3, 2019 11:35, Amar Tumballi Suryanarayan <atumball(a)redhat.com> wrote:
>
> Strahil,
>
> With some basic testing, we are noticing the similar behavior too.
>
> One of the issues we identified was increased network usage in the 5.x series (being addressed by https://review.gluster.org/#/c/glusterfs/+/22404/), and there are a few other features which write extended attributes, which caused some delay.
>
> We are in the process of publishing numbers comparing release-3.12.x, release-5 and release-6 soon. From the numbers we already have, release-6 is currently giving really good performance in many configurations, especially for the 1x3 replicate volume type.
>
> While we continue to identify and fix issues in the 5.x series, one request is to validate release-6.x (6.0, or 6.1, which would happen on April 10th), so you can see the difference in your workload.
>
> Regards,
> Amar
>
>
>
> On Wed, Apr 3, 2019 at 5:57 AM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>>
>> Hi Community,
>>
>> I have the feeling that with gluster v5.5 performance is poorer than it used to be on 3.12.15. Did you observe something like that?
>>
>> I have a 3-node hyperconverged cluster (oVirt + GlusterFS with replica 3 arbiter 1 volumes) with NFS Ganesha, and since I upgraded to v5 the issues came up.
>> First it was the notorious 5.3 experience, and now with 5.5 my sanlock is having problems and higher latency than it used to. I have switched from NFS-Ganesha to pure FUSE, but the latency problems do not go away.
>>
>> Of course, this is partially due to the consumer hardware, but as the hardware has not changed I was hoping that the performance would remain as is.
>>
>> So, do you expect 5.5 to perform worse than 3.12?
>>
>> Some info:
>> Volume Name: engine
>> Type: Replicate
>> Volume ID: 30ca1cc2-f2f7-4749-9e2e-cee9d7099ded
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt1:/gluster_bricks/engine/engine
>> Brick2: ovirt2:/gluster_bricks/engine/engine
>> Brick3: ovirt3:/gluster_bricks/engine/engine (arbiter)
>> Options Reconfigured:
>> performance.client-io-threads: off
>> nfs.disable: on
>> transport.address-family: inet
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.low-prio-threads: 32
>> network.remote-dio: off
>> cluster.eager-lock: enable
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> cluster.data-self-heal-algorithm: full
>> cluster.locking-scheme: granular
>> cluster.shd-max-threads: 8
>> cluster.shd-wait-qlength: 10000
>> features.shard: on
>> user.cifs: off
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> network.ping-timeout: 30
>> performance.strict-o-direct: on
>> cluster.granular-entry-heal: enable
>> cluster.enable-shared-storage: enable
>>
>> Network: 1 gbit/s
>>
>> Filesystem:XFS
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users(a)gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> Amar Tumballi (amarts)
6 years
Engine can not support CORS by openidconnect
by du_hongyu@yeah.net
hi,
I wrote an OpenID Connect client to authenticate my web app against ovirt-engine-4.2.0.
This is my engine config:
[root@engine ~]# engine-config -a |grep CORS
CORSSupport: true version: general
CORSAllowedOrigins: https://10.110.128.129 version: general
CORSAllowDefaultOrigins: true version: general
CORSDefaultOriginSuffixes: :9090 version: general
but I get the following error:
Access to fetch at 'https://10.110.128.120/ovirt-engine/sso/openid/authorize?response_type=co...' (redirected from 'https://10.110.128.129/console/auth/login') from origin 'https://10.110.128.129' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: Redirect is not allowed for a preflight request.
The engine log (I added extra log statements) is:
019-04-04 10:16:52,491+08 INFO [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-4) [] engine=======strippedScope [openid, ovirt-app-portal, ovirt-app-admin, ovirt-app-api, ovirt-ext=token:password-access, ovirt-ext=auth:sequence-priority, ovirt-ext=token:login-on-behalf, ovirt-ext=token-info:authz-search, ovirt-ext=token-info:public-authz-search, ovirt-ext=token-info:validate, ovirt-ext=revoke:revoke-all]
2019-04-04 10:16:52,492+08 INFO [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-4) [] engine=======requestedScope [openid, ovirt-app-admin, ovirt-app-api, ovirt-app-portal, ovirt-ext=auth:sequence-priority, ovirt-ext=revoke:revoke-all, ovirt-ext=token-info:authz-search, ovirt-ext=token-info:public-authz-search, ovirt-ext=token-info:validate, ovirt-ext=token:login-on-behalf, ovirt-ext=token:password-access]
I also realize this Java source is not being run; I do not find any log from it in engine.log:
backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/servlet/CORSSupportFilter.java
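In case it matters, the CORS options above were set along these lines (with my console's origin as the value), followed by an engine restart:
engine-config -s CORSAllowedOrigins=https://10.110.128.129
systemctl restart ovirt-engine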
Regards
Hongyu Du
6 years
Re: Vm status not update after update
by Strahil
Sadly I couldn't find the e-mail with the fix.
As far as I remember, there should be a bug opened for that.
Best Regards,
Strahil Nikolov
On Apr 3, 2019 13:53, Marcelo Leandro <marceloltmm(a)gmail.com> wrote:
>
> Hi,
>
> Strahil , can you help me?
>
> Very thanks,
>
> Marcelo Leandro
>
> On Tue, Apr 2, 2019, 10:02, Marcelo Leandro <marceloltmm(a)gmail.com> wrote:
>>
>> Sorry, I can't find this.
>>
>> On Tue, Apr 2, 2019 at 09:49, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>>>
>>> I think I already saw a solution on the mailing lists. Can you check and apply the fix mentioned there?
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Tuesday, April 2, 2019, 14:39:10 GMT+3, Marcelo Leandro <marceloltmm(a)gmail.com> wrote:
>>>
>>>
>>> Hi. After updating my hosts to oVirt Node 4.3.2 with vdsm version vdsm-4.30.11-1.el7, my VMs' status does not update. If I do anything with a VM, like shutdown or migrate, its status does not change; only restarting vdsm on the host the VM is running on helps.
>>>
>>> vdsmd status:
>>>
>>> ERROR Internal server error
>>> Traceback (most recent call last):
>>> File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request..
>>>
>>> Thanks,
>>> _______________________________________________
>>> Users mailing list -- users(a)ovirt.org
>>> To unsubscribe send an email to
6 years
vdsClient in oVirt 4.3
by nicolas@devels.es
Hi,
In oVirt 4.1 we used this command to set a volume as LEGAL:
vdsClient -s <host> setVolumeLegality sdUUID spUUID imgUUID leafUUID LEGAL
What would be the equivalent to this command using vdsm-client in oVirt
4.3?
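My best guess, going by the generic 'vdsm-client [-h host] Namespace verb key=value' syntax, would be something like the line below, but I have not verified that the verb and parameter names are correct:
vdsm-client -h <host> Volume setLegality storagepoolID=<spUUID> storagedomainID=<sdUUID> imageID=<imgUUID> volumeID=<leafUUID> legality=LEGAL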
Thanks.
6 years
Vm status not update after update
by Marcelo Leandro
Hi. After updating my hosts to ovirt node 4.3.2 with vdsm version
vdsm-4.30.11-1.el7,
my VMs' status does not update. If I do anything with a VM, like shutdown or migrate,
its status does not change; only restarting vdsm on the host the VM is running on helps.
vdsmd status:
ERROR Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request..
Thanks,
6 years
Actual size bigger than virtual size
by suporte@logicworks.pt
Hi,
I have an all-in-one oVirt 4.2.2 with gluster storage and a couple of Windows 2012 VMs.
One W2012 VM is showing an actual size of 209 GiB and a virtual size of 150 GiB on a thin-provisioned disk. The VM shows 30.9 GB of used space.
This VM is slower than the others; notably, when we reboot the machine it takes around 2 hours.
Any idea?
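If it helps to narrow it down, I can check the image chain directly on the gluster mount with something like the following (the path is just a placeholder for this disk's image directory):
qemu-img info --backing-chain /rhev/data-center/mnt/glusterSD/<server>:_<volume>/<sd-uuid>/images/<img-uuid>/<active-volume-uuid>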
Thanks
--
Jose Ferradeira
http://www.logicworks.pt
6 years
Why/how does oVirt fsync over NFS
by Karli Sjöberg
Hello!
This question arose after the (in my opinion) hasty response from a user
that thinks oVirt is bad because it cares about the integrity of your data:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QPVXUFW2U3W...
Fine, that's their choice. They can have silent data corruption all they
want in case of a power failure or something, it's up to them if they
think it's worth it.
However, I started wondering why oVirt syncs everything over NFS. I
know it does, because in ZFS you can turn it off with "zfs set
sync=disabled <dataset>" and throughput goes through the
roof. The right way is of course to add a SLOG, or better yet a mirrored
pair, but that's just for a quick test or whatever...
But there doesn't seem to be any reason why oVirt should do that; that's
why I'm asking. Look, this is from the CentOS 'mount' man page:
"defaults
Use default options: rw, suid, dev, exec, auto, nouser,
and async."
So therefore 'sync' should be explicitly set on the mount, right? Wrong!
# mount | grep -c sync
0
So why then does it sync? The NFS man page says:
"The sync mount option
The NFS client treats the sync mount option differently than some
other file systems (refer to mount(8) for a description of the generic
sync and async mount options). If neither sync nor async is specified
(or if the async option is specified), the NFS client delays sending
application writes to the server until any of these events occur:
Memory pressure forces reclamation of system memory resources.
An application flushes file data explicitly with sync(2),
msync(2), or fsync(3).
An application closes a file with close(2).
The file is locked/unlocked via fcntl(2)."
So which of these things are happening for the Host to send sync calls
to the NFS server?
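For my own curiosity I was going to try to catch the explicit fsync case from the list above on a test NFS mount, roughly like this (the mount point is made up):
nfsstat -c          # note the current commit/write counters
dd if=/dev/zero of=/mnt/nfs-test/file bs=1M count=100 conv=fsync
nfsstat -c          # the commit counter should have gone up after the explicit flush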
Just curious...
/K
6 years
4.2 / 4.3 : Moving the hosted-engine to another storage
by andreas.elvers+ovirtforum@solutions.work
Hi friends of oVirt,
Roughly 3 years ago a user asked about the options he had to move the hosted engine to some other storage.
The answer by Simone Tiraboschi was that it would largely not be possible, because of references in the database to the node the engine was hosted on. This information would prevent a successful move of the engine even with backup/restore.
The situation seem to have improved, but I'm not sure. So I ask.
We have to move our engine away from our older Cluster with NFS Storage backends (engine, volumes, iso-images).
The engine should be restored on our new cluster that has a gluster volume available for the engine. Additionally this 3-node cluster is running Guests from a Cinder/Ceph storage Domain.
I want to restore the engine on a different cluster to a different storage domain.
Reading the documentation at https://www.ovirt.org/documentation/self-hosted/chap-Backing_up_and_Resto... I am wondering whether oVirt Nodes (formerly Node-NG) are capable of restoring an engine at all. Do I need EL-based Nodes? We are currently running on oVirt Nodes.
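For context, the flow I have in mind is the documented backup/restore one, roughly like this (file names are placeholders); my open question is whether the restore step can be run from an oVirt Node:
engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=engine-backup.log
hosted-engine --deploy --restore-from-file=engine-backup.tar.gz   # on a host in the new cluster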
- Andreas
6 years
Re: oVirt Performance (Horrific)
by Strahil
Hi Drew,
What is the host RAM size, and what are the settings for vm.dirty_ratio and vm.dirty_background_ratio on those hosts?
What about your iSCSI target?
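(Something like this on each host would show them:)
free -h
sysctl vm.dirty_ratio vm.dirty_background_ratio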
Best Regards,
Strahil Nikolov
On Mar 11, 2019 23:51, Drew Rash <drew.rash(a)gmail.com> wrote:
>
> Added the disable:false, removed the gluster domain, re-added it using NFS. Performance is still in the low 10s of MBps, plus or minus 5.
> Ran the showmount -e "" and it displayed the mount.
>
> Trying right now to re-mount using gluster with a negative-timeout=1 option.
>
> We converted one of our 4 boxes to FreeNAS, took 4 6TB drives and made a raid iSCSI and connected it to oVirt. Boot windows. ( times 2, did 2 boxes with a 7GB file on each) copied from one to the other and it copied at 600MBps average. But then has weird pauses... I think it's doing some kind of cache..it'll go like 2GB and choke to zero Bps. Then speed up and choke, speed up choke averaging or getting up to 10MBps. Then at 99% it waits 15 seconds with 0 bytes left...
> Small files, are instant basically. No complaint there.
> So...WAY faster. But suffers from the same thing....just requires writing some more to get to it. a few gigs and then it crawls.
>
> Seems to be related to if I JUST finished running a test. If I wait a while, I get it it to copy almost 4GB or so before choking.
> I made a 3rd windows 10 VM and copied the same file from the 1st to the 2nd (via a windows share and from the 3rd box) And it didn't choke or do any funny business...oddly. Maybe a fluke. Only did that once.
>
> So....switching to freenas appears to have increased the window size before it runs horribly. But it will still run horrifically if the disk is busy.
>
> And since we're planning on doing actual work on this... idle disks caching up on some hidden cache feature of oVirt isn't gonna work. We won't be writing gigs of data all over the place...but knowing that this chokes a VM to near death...is scary.
>
> It looks like for a windows 10 install to operate correctly, it expects at least 15MB/s with less than 1s latency. Otherwise services don't start and weird stuff happens and it runs slower than my dog while pooping out that extra little stringy bit near the end. So we gotta avoid that.
>
>
>
>
> On Sat, Mar 9, 2019 at 12:44 AM Strahil <hunter86_bg(a)yahoo.com> wrote:
>>
>> Hi Drew,
>>
>> For the test, change the gluster parameter nfs.disable to false.
>> Something like: gluster volume set volname nfs.disable false
>>
>> Then use showmount -e gluster-node-fqdn
>> Note: NFS might not be allowed in the firewall.
>>
>> Then add this NFS domain (don't forget to remove the gluster storage domain before that) and do your tests.
>>
>> If it works well, you will have to switch off nfs.disable and deploy NFS Ganesha:
>>
>> gluster volume reset volname nfs.disable
>>
>> Best Regards,
>> Strahil Nikolov
6 years
HE - engine gluster volume - not mounted
by Leo David
Hello Everyone,
I am using a 4.3.2 installation, and after running through the Hyperconverged
setup, it fails at the last stage. It seems that the previously created "engine"
volume is not mounted under the "/rhev" path, therefore the setup cannot finish
the deployment.
Any idea which services are responsible for mounting the volumes on the
oVirt Node distribution? I'm thinking that maybe this particular one
failed to start for some reason...
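What I plan to check, assuming the hosted-engine HA services are the ones doing the mount, is roughly this:
systemctl status ovirt-ha-broker ovirt-ha-agent
grep -i storage /etc/ovirt-hosted-engine/hosted-engine.conf
hosted-engine --vm-status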
Thank you very much !
--
Best regards, Leo David
6 years
Backup VMs to external USB Disk
by daniel94.oeller@gmail.com
Hi all,
I'm new to oVirt and have tested a lot of ways to back up my VMs to an external USB disk.
How have you solved this problem? Does anybody have a tutorial or something similar for me?
Thanks for your help.
Daniel
6 years
Fwd: Re: HE - engine gluster volume - not mounted
by Leo David
---------- Forwarded message ---------
From: Leo David <leoalex(a)gmail.com>
Date: Tue, Apr 2, 2019, 15:10
Subject: Re: [ovirt-users] Re: HE - engine gluster volume - not mounted
To: Sahina Bose <sabose(a)redhat.com>
I have deleted everything in the engine gluster mount path, unmounted the
engine gluster volume (but did not delete the volume), and started the wizard
with "Use already configured storage". I have pointed it to use this gluster
volume, and the volume gets mounted under the correct path, but the installation
still fails:
[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]".
HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
is 400."}
On the node's vdsm.log I can continuously see:
2019-04-02 13:02:18,832+0100 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Host.getStats succeeded in 0.03 seconds (__init__:312)
2019-04-02 13:02:19,906+0100 INFO (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList(options=None) from=internal,
task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:48)
2019-04-02 13:02:19,907+0100 INFO (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:54)
2019-04-02 13:02:19,907+0100 INFO (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:709)
2019-04-02 13:02:21,737+0100 INFO (periodic/2) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09 (api:48)
2019-04-02 13:02:21,738+0100 INFO (periodic/2) [vdsm.api] FINISH repoStats
return={} from=internal, task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09
(api:54)
Should I perform an "engine-cleanup", delete the LVMs from Cockpit, and start
it all over?
Did anyone successfully use this particular ISO image
"ovirt-node-ng-installer-4.3.2-2019031908.el7.iso" for a single-node
installation?
Thank you !
Leo
On Tue, Apr 2, 2019 at 1:45 PM Sahina Bose <sabose(a)redhat.com> wrote:
> Is it possible you have not cleared the gluster volume between installs?
>
> What's the corresponding error in vdsm.log?
>
>
> On Tue, Apr 2, 2019 at 4:07 PM Leo David <leoalex(a)gmail.com> wrote:
> >
> > And there it is the last lines on the ansible_create_storage_domain log:
> >
> > 2019-04-02 10:53:49,139+0100 DEBUG var changed: host "localhost" var
> "otopi_storage_domain_details" type "<type 'dict'>" value: "{
> > "changed": false,
> > "exception": "Traceback (most recent call last):\n File
> \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 664,
> in main\n storage_domains_module.post_create_check(sd_id)\n File
> \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 526,
> in post_create_check\n id=storage_domain.id,\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in
> add\n return self._internal_add(storage_domain, headers, query, wait)\n
> File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
> in _internal_add\n return future.wait() if wait else future\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
> wait\n return self._code(response)\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
> callback\n self._check_fault(response)\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
> _check_fault\n self._raise_error(response, body)\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
> _raise_error\n raise error\nError: Fault reason is \"Operation Failed\".
> Fault detail is \"[]\". HTTP response code is 400.\n",
> > "failed": true,
> > "msg": "Fault reason is \"Operation Failed\". Fault detail is
> \"[]\". HTTP response code is 400."
> > }"
> > 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var
> "ansible_play_hosts" type "<type 'list'>" value: "[]"
> > 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var
> "play_hosts" type "<type 'list'>" value: "[]"
> > 2019-04-02 10:53:49,142+0100 DEBUG var changed: host "localhost" var
> "ansible_play_batch" type "<type 'list'>" value: "[]"
> > 2019-04-02 10:53:49,142+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Activate storage domain',
> 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True,
> u\'exception\': u\'Traceback (most recent call last):\\n File
> "/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 664,
> in main\\n storage_domains_module.post_create_check(sd_id)\\n File
> "/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 526',
> 'task_duration': 9, 'ansible_host': u'localhost', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}
> > 2019-04-02 10:53:49,143+0100 DEBUG ansible on_any args
> <ansible.executor.task_result.TaskResult object at 0x7f03fd025e50> kwargs
> ignore_errors:None
> > 2019-04-02 10:53:49,148+0100 INFO ansible stats {
> > "ansible_playbook":
> "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
> > "ansible_playbook_duration": "01:15 Minutes",
> > "ansible_result": "type: <type 'dict'>\nstr: {u'localhost':
> {'unreachable': 0, 'skipped': 6, 'ok': 23, 'changed': 0, 'failures': 1}}",
> > "ansible_type": "finish",
> > "status": "FAILED"
> > }
> > 2019-04-02 10:53:49,149+0100 INFO SUMMARY:
> > Duration Task Name
> > -------- --------
> > [ < 1 sec ] Execute just a specific set of steps
> > [ 00:02 ] Force facts gathering
> > [ 00:02 ] Check local VM dir stat
> > [ 00:02 ] Obtain SSO token using username/password credentials
> > [ 00:02 ] Fetch host facts
> > [ 00:01 ] Fetch cluster ID
> > [ 00:02 ] Fetch cluster facts
> > [ 00:02 ] Fetch Datacenter facts
> > [ 00:01 ] Fetch Datacenter ID
> > [ 00:01 ] Fetch Datacenter name
> > [ 00:02 ] Add glusterfs storage domain
> > [ 00:02 ] Get storage domain details
> > [ 00:02 ] Find the appliance OVF
> > [ 00:02 ] Parse OVF
> > [ 00:02 ] Get required size
> > [ FAILED ] Activate storage domain
> >
> > Any idea on how to escalate this issue?
> > It just does not make sense to not be able to install from scratch a
> fresh node...
> >
> > Have a nice day !
> >
> > Leo
> >
> >
> > On Tue, Apr 2, 2019 at 12:11 PM Gobinda Das <godas(a)redhat.com> wrote:
> >>
> >> Hi Leo,
> >> Can you please paste the "df -Th" and "gluster v status" output?
> >> I want to make sure the engine volume is mounted and the volumes and bricks are up.
> >> What does the vdsm log say?
> >>
> >> On Tue, Apr 2, 2019 at 2:06 PM Leo David <leoalex(a)gmail.com> wrote:
> >>>
> >>> Thank you very much !
> >>> I have just installed a new fresh node, and triggered the single
> instance hyperconverged setup. It seems it fails at the hosted engine final
> steps of deployment:
> >>> [ INFO ] TASK [ovirt.hosted_engine_setup : Get required size]
> >>> [ INFO ] ok: [localhost]
> >>> [ INFO ] TASK [ovirt.hosted_engine_setup : Remove unsuitable storage
> domain]
> >>> [ INFO ] skipping: [localhost]
> >>> [ INFO ] TASK [ovirt.hosted_engine_setup : Check storage domain free
> space]
> >>> [ INFO ] skipping: [localhost]
> >>> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
> >>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
> "[Cannot attach Storage. There is no active Host in the Data Center.]".
> HTTP response code is 409.
> >>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot attach
> Storage. There is no active Host in the Data Center.]\". HTTP response code
> is 409."}
> >>> Also, the
> ovirt-hosted-engine-setup-ansible-create_storage_domain-201932112413-xkq6nb.log
> throws the following:
> >>>
> >>> 2019-04-02 09:25:40,420+0100 DEBUG var changed: host "localhost" var
> "otopi_storage_domain_details" type "<type 'dict'>" value: "{
> >>> "changed": false,
> >>> "exception": "Traceback (most recent call last):\n File
> \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 664,
> in main\n storage_domains_module.post_create_check(sd_id)\n File
> \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 526,
> in post_create_check\n id=storage_domain.id,\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in
> add\n return self._internal_add(storage_domain, headers, query, wait)\n
> File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
> in _internal_add\n return future.wait() if wait else future\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
> wait\n return self._code(response)\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
> callback\n self._check_fault(response)\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
> _check_fault\n self._raise_error(response, body)\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
> _raise_error\n raise error\nError: Fault reason is \"Operation Failed\".
> Fault detail is \"[Cannot attach Storage. There is no active Host in the
> Data Center.]\". HTTP response code is 409.\n",
> >>> "failed": true,
> >>> "msg": "Fault reason is \"Operation Failed\". Fault detail is
> \"[Cannot attach Storage. There is no active Host in the Data Center.]\".
> HTTP response code is 409."
> >>> }"
> >>>
> >>> I have used the ovirt-node-ng-installer-4.3.2-2019031908.el7.iso. So
> far, I am unable to deploy oVirt single node Hyperconverged...
> >>> Any thoughts ?
> >>>
> >>>
> >>>
> >>> On Mon, Apr 1, 2019 at 9:46 PM Simone Tiraboschi <stirabos(a)redhat.com>
> wrote:
> >>>>
> >>>>
> >>>>
> >>>> On Mon, Apr 1, 2019 at 6:14 PM Leo David <leoalex(a)gmail.com> wrote:
> >>>>>
> >>>>> Thank you Simone.
> >>>>> I've decided to go for a new fresh install from the ISO, and I'll keep you
> posted if any troubles arise. But I am still trying to understand which
> services mount the LVMs and volumes after configuration. There is
> nothing related in fstab, so I assume there are a couple of .mount files
> somewhere in the filesystem.
> >>>>> I'm just trying to understand the node's underlying workflow.
> >>>>
> >>>>
> >>>> hosted-engine configuration is stored in
> /etc/ovirt-hosted-engine/hosted-engine.conf; ovirt-ha-broker will mount the
> hosted-engine storage domain according to that and so ovirt-ha-agent will
> be able to start the engine VM.
> >>>> Everything else is just in the engine DB.
> >>>>
> >>>>>
> >>>>>
> >>>>> On Mon, Apr 1, 2019, 10:16 Simone Tiraboschi <stirabos(a)redhat.com>
> wrote:
> >>>>>>
> >>>>>> Hi,
> >>>>>> to understand what's failing I'd suggest to start attaching setup
> logs.
> >>>>>>
> >>>>>> On Sun, Mar 31, 2019 at 5:06 PM Leo David <leoalex(a)gmail.com>
> wrote:
> >>>>>>>
> >>>>>>> Hello Everyone,
> >>>>>>> Using 4.3.2 installation, and after running through HyperConverged
> Setup, at the last stage it fails. It seems that the previously created
> "engine" volume is not mounted under "/rhev" path, therefore the setup
> cannot finish the deployment.
> >>>>>>> Any ideea which are the services responsible of mounting the
> volumes on oVirt Node distribution ? I'm thinking that maybe this
> particularly one failed to start for some reason...
> >>>>>>> Thank you very much !
> >>>>>>>
> >>>>>>> --
> >>>>>>> Best regards, Leo David
> >>>>>>> _______________________________________________
> >>>>>>> Users mailing list -- users(a)ovirt.org
> >>>>>>> To unsubscribe send an email to users-leave(a)ovirt.org
> >>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >>>>>>> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >>>>>>> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUXDAQHVNZW...
> >>>
> >>>
> >>>
> >>> --
> >>> Best regards, Leo David
> >>> _______________________________________________
> >>> Users mailing list -- users(a)ovirt.org
> >>> To unsubscribe send an email to users-leave(a)ovirt.org
> >>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >>> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >>> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NROCMKIFJDE...
> >>
> >>
> >>
> >> --
> >>
> >>
> >> Thanks,
> >> Gobinda
> >
> >
> >
> > --
> > Best regards, Leo David
> > _______________________________________________
> > Users mailing list -- users(a)ovirt.org
> > To unsubscribe send an email to users-leave(a)ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XDJNBS6EOXC...
>
--
Best regards, Leo David
6 years
trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys
by Matthias Leopold
Hi,
I upgraded my test environment to 4.3.2 and now I'm trying to set up a
"Managed Block Storage" domain with our Ceph 12.2 cluster. I think I got
all prerequisites, but when saving the configuration for the domain with
volume_driver "cinder.volume.drivers.rbd.RBDDriver" (and a couple of
other options) I get "VolumeBackendAPIException: Bad or unexpected
response from the storage volume backend API: Error connecting to ceph
cluster" in engine log (full error below). Unfortunately this is a
rather generic error message and I don't really know where to look next.
Accessing the rbd pool from the engine host with rbd CLI and the
configured "rbd_user" works flawlessly...
Although I don't think this is directly connected, there is one other
question that comes up for me: how are libvirt "Authentication Keys"
handled with Ceph "Managed Block Storage" domains? With "standalone
Cinder" setups, like we are using now, you have to configure a "provider"
of type "OpenStack Block Storage" where you can configure these keys,
which are referenced in cinder.conf as "rbd_secret_uuid". How is this
supposed to work now?
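For comparison, with the standalone Cinder setup the key ends up as a libvirt secret on each host, which can be inspected with something like this (virsh may ask for libvirt SASL credentials on oVirt hosts):
virsh secret-list
virsh secret-get-value <the uuid referenced as rbd_secret_uuid>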
Thanks for any advice, we are using oVirt with Ceph heavily and are very
interested in a tight integration of oVirt and Ceph.
Matthias
2019-04-01 11:14:55,128+02 ERROR
[org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor]
(default task-22) [b6665621-6b85-438e-8c68-266f33e55d79] cinderlib
execution failed: Traceback (most recent call last):
File "./cinderlib-client.py", line 187, in main
args.command(args)
File "./cinderlib-client.py", line 275, in storage_stats
backend = load_backend(args)
File "./cinderlib-client.py", line 217, in load_backend
return cl.Backend(**json.loads(args.driver))
File "/usr/lib/python2.7/site-packages/cinderlib/cinderlib.py", line
87, in __init__
self.driver.check_for_setup_error()
File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
line 288, in check_for_setup_error
with RADOSClient(self):
File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
line 170, in __init__
self.cluster, self.ioctx = driver._connect_to_rados(pool)
File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
line 346, in _connect_to_rados
return _do_conn(pool, remote, timeout)
File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 799, in
_wrapper
return r.call(f, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/retrying.py", line 229, in call
raise attempt.get()
File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
six.reraise(self.value[0], self.value[1], self.value[2])
File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
line 344, in _do_conn
raise exception.VolumeBackendAPIException(data=msg)
VolumeBackendAPIException: Bad or unexpected response from the storage
volume backend API: Error connecting to ceph cluster.
6 years
Strange storage data center failure
by Fabrice Bacchella
I have a storage data center that I can't use. It's a local one.
When I look on vdsm.log:
2019-04-02 10:55:48,336+0200 INFO (jsonrpc/2) [vdsm.api] FINISH connectStoragePool error=Cannot find master domain: u'spUUID=063d1217-6194-48a0-943e-3d873f2147de, msdUUID=49b1bd15-486a-4064-878e-8030c8108e09' from=::ffff:XXXXX,59590, task_id=a56a5869-a219-4659-baa3-04f673b2ad55 (api:50)
2019-04-02 10:55:48,336+0200 ERROR (jsonrpc/2) [storage.TaskManager.Task] (Task='a56a5869-a219-4659-baa3-04f673b2ad55') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
return fn(*args, **kargs)
File "<string>", line 2, in connectStoragePool
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1035, in connectStoragePool
spUUID, hostID, msdUUID, masterVersion, domainsMap)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1097, in _connectStoragePool
res = pool.connect(hostID, msdUUID, masterVersion)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 700, in connect
self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1274, in __rebuild
self.setMasterDomain(msdUUID, masterVersion)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1495, in setMasterDomain
raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain: u'spUUID=063d1217-6194-48a0-943e-3d873f2147de, msdUUID=49b1bd15-486a-4064-878e-8030c8108e09'
2019-04-02 10:55:48,336+0200 INFO (jsonrpc/2) [storage.TaskManager.Task] (Task='a56a5869-a219-4659-baa3-04f673b2ad55') aborting: Task is aborted: "Cannot find master domain: u'spUUID=063d1217-6194-48a0-943e-3d873f2147de, msdUUID=49b1bd15-486a-4064-878e-8030c8108e09'" - code 304 (task:1181)
2019-04-02 11:44:50,862+0200 INFO (jsonrpc/0) [vdsm.api] FINISH getSpmStatus error=Unknown pool id, pool not connected: (u'063d1217-6194-48a0-943e-3d873f2147de',) from=::ffff:10.83.16.34,46546, task_id=cfb1c871-b1d4-4b1a-b2a5-f91ddfaba
54b (api:50)
2019-04-02 11:44:50,862+0200 ERROR (jsonrpc/0) [storage.TaskManager.Task] (Task='cfb1c871-b1d4-4b1a-b2a5-f91ddfaba54b') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
return fn(*args, **kargs)
File "<string>", line 2, in getSpmStatus
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 634, in getSpmStatus
pool = self.getPool(spUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 350, in getPool
raise se.StoragePoolUnknown(spUUID)
StoragePoolUnknown: Unknown pool id, pool not connected: (u'063d1217-6194-48a0-943e-3d873f2147de',)
063d1217-6194-48a0-943e-3d873f2147de is indeed the datacenter id and 49b1bd15-486a-4064-878e-8030c8108e09 the storage domain:
<storage_domain href="/ovirt-engine/api/storagedomains/49b1bd15-486a-4064-878e-8030c8108e09" id="49b1bd15-486a-4064-878e-8030c8108e09">
<storage>
<type>fcp</type>
<volume_group id="cEOxNG-R4fv-EDo3-Y3lm-ZMAH-xOT4-Nn8QyP">
<logical_units>
</logical_units>
</volume_group>
</storage>
<storage_format>v4</storage_format>
<data_centers>
<data_center href="/ovirt-engine/api/datacenters/063d1217-6194-48a0-943e-3d873f2147de" id="063d1217-6194-48a0-943e-3d873f2147de"/>
</data_centers>
</storage_domain>
On engine.log, I'm also getting:
2019-04-02 11:43:57,531+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-12) [] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand' return value '
TaskStatusListReturn:{status='Status [code=654, message=Not SPM: ()]'}
'
lsblk shows that the requested volumes are here:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
cciss!c0d1 104:16 0 1.9T 0 disk
|-49b1bd15--486a--4064--878e--8030c8108e09-metadata 253:0 0 512M 0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-outbox 253:1 0 128M 0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-xleases 253:2 0 1G 0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-leases 253:3 0 2G 0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-ids 253:4 0 128M 0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-inbox 253:5 0 128M 0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-master 253:6 0 1G 0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-6225ddc3--b600--49ef--8de4--6e53bf4cad1f 253:7 0 128M 0 lvm
`-49b1bd15--486a--4064--878e--8030c8108e09-bdac3a3a--8633--41bf--921d--db2cf31f5d1c 253:8 0 128M 0 lvm
There is no useful data on them, so I don't mind destroying everything. But using the GUI I can't do anything with that domain.
I'm running 4.2.7.5-1.el7 on RHEL 7.6.
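For what it's worth, the same SPM status can presumably be queried directly on the host with vdsm-client; I am only guessing the verb name from the internal call in the traceback above:
vdsm-client StoragePool getSpmStatus storagepoolID=063d1217-6194-48a0-943e-3d873f2147de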
6 years
oVirt Survey 2019 results
by Sandro Bonazzola
Thanks to the 143 participants in oVirt Survey 2019!
The survey is now closed and results are publicly available at
https://bit.ly/2JYlI7U
We'll analyze collected data in order to improve oVirt thanks to your
feedback.
As a first step after reading the results I'd like to invite the 30 persons
who replied they're willing to contribute code to send an email to
devel(a)ovirt.org introducing themselves: we'll be more than happy to welcome
them and help them get started.
I would also like to invite the 17 people who replied they'd like to help
organize oVirt events in their area to either get in touch with me or
introduce themselves to users(a)ovirt.org so we can discuss about events
organization.
Last but not least I'd like to invite the 38 people willing to contribute
documentation and the one willing to contribute localization to introduce
themselves to devel(a)ovirt.org.
Thanks!
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
6 years
Not being able to create new partition with fdisk
by pawel.zajac@mikrosimage.com
Hi,
I am not able to add new partitions or extend existing ones in a CentOS 7 VM on oVirt 4.2.7.5-1.el7.
As soon as I write the partition table, the VM pauses with the error:
"VM XXXX has been paused due to unknown storage error."
I can't see much in the logs; /var/log/messages doesn't even record it.
The VM just pauses.
The VM storage is on an NFS share, so not a block device like in most of the issues I found.
I searched the web for a similar issue, but no luck so far.
Any ideas?
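My next idea is to ask vdsm/libvirt why the guest was paused, roughly like this (the VM name is a placeholder):
grep -i 'abnormal vm stop' /var/log/vdsm/vdsm.log
virsh -r domblkerror <vm-name>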
Thank you.
Best,
Pawel
6 years