HE - engine gluster volume - not mounted
by Leo David
Hello Everyone,
Using a 4.3.2 installation, after running through the Hyperconverged setup,
it fails at the last stage. It seems that the previously created "engine"
volume is not mounted under the "/rhev" path, so the setup cannot finish
the deployment.
Any idea which services are responsible for mounting the volumes on the
oVirt Node distribution? I'm thinking that maybe this particular one
failed to start for some reason...
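For reference, these are the checks I'm running on the node in the meantime
(service names and paths are my best guess for a default oVirt Node install,
so please correct me if they are off):

# Services usually involved in mounting the gluster volumes
systemctl status glusterd vdsmd supervdsmd ovirt-ha-broker ovirt-ha-agent

# Is the engine volume actually mounted under /rhev ?
mount | grep -i rhev
df -Th | grep -i glusterfs

# State of the volume itself
gluster volume status engine
gluster volume info engine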
Thank you very much !
--
Best regards, Leo David
Backup VMs to external USB Disk
by daniel94.oeller@gmail.com
Hi all,
I'm new to oVirt and have tried a lot of ways to back up my VMs to an external USB disk.
How have you solved this problem? Does anybody have a tutorial or something similar for me?
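What I have tried so far is a very low-tech approach: export the VM to an NFS
export domain from the admin portal and then copy the exported files to the
USB disk, roughly like this (the mount point and the export domain path are
just examples from my test setup):

# Mount the USB disk (example device/path)
mount /dev/sdb1 /mnt/usb

# After exporting the VM in the admin portal, copy the images and the OVF
# metadata from the export domain to the USB disk
rsync -avP /mnt/export-domain/<domain-uuid>/images /mnt/usb/vm-backups/
rsync -avP /mnt/export-domain/<domain-uuid>/master/vms /mnt/usb/vm-backups/

I would still be interested in a cleaner, more automated way to do this.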
Thanks for your help.
Daniel
Fwd: Re: HE - engine gluster volume - not mounted
by Leo David
---------- Forwarded message ---------
From: Leo David <leoalex(a)gmail.com>
Date: Tue, Apr 2, 2019, 15:10
Subject: Re: [ovirt-users] Re: HE - engine gluster volume - not mounted
To: Sahina Bose <sabose(a)redhat.com>
I have deleted everything in the engine gluster mount path, unmounted the
engine gluster volume (not deleted it), and started the wizard
with "Use already configured storage". I pointed it to use this gluster
volume, the volume gets mounted under the correct path, but the installation
still fails:
[ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[]".
HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault
reason is \"Operation Failed\". Fault detail is \"[]\". HTTP response code
is 400."}
On the node's vdsm.log I can continuously see:
2019-04-02 13:02:18,832+0100 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Host.getStats succeeded in 0.03 seconds (__init__:312)
2019-04-02 13:02:19,906+0100 INFO (vmrecovery) [vdsm.api] START
getConnectedStoragePoolsList(options=None) from=internal,
task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:48)
2019-04-02 13:02:19,907+0100 INFO (vmrecovery) [vdsm.api] FINISH
getConnectedStoragePoolsList return={'poollist': []} from=internal,
task_id=769c3983-5160-44e4-b1d8-7ab4e41ddd34 (api:54)
2019-04-02 13:02:19,907+0100 INFO (vmrecovery) [vds] recovery: waiting for
storage pool to go up (clientIF:709)
2019-04-02 13:02:21,737+0100 INFO (periodic/2) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09 (api:48)
2019-04-02 13:02:21,738+0100 INFO (periodic/2) [vdsm.api] FINISH repoStats
return={} from=internal, task_id=ba12fbc1-0170-41a2-82e6-8ccb05ae9e09
(api:54)
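While it is stuck in this state, these are the checks I'm running directly on
the node (the getConnectedStoragePoolsList verb is taken from the log above;
the glusterSD path is, as far as I know, the default mount location, so treat
it as an assumption):

# Ask vdsm which storage pools it currently sees
vdsm-client Host getConnectedStoragePoolsList

# Verify the gluster mount that the setup expects
mount | grep rhev
ls -l /rhev/data-center/mnt/glusterSD/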
Should I perform an "engine-cleanup", delete the LVs from Cockpit, and start
all over?
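If a full cleanup is the way to go, this is the sequence I have in mind before
retrying the deployment (ovirt-hosted-engine-cleanup ships with the setup
packages; the brick path below is just the one from my node, adjust as
needed):

# Undo the failed hosted-engine deployment on the node
ovirt-hosted-engine-cleanup

# Remove the old engine volume and wipe the brick contents
gluster volume stop engine
gluster volume delete engine
rm -rf /gluster_bricks/engine/engine/*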
Has anyone successfully used this particular ISO image
"ovirt-node-ng-installer-4.3.2-2019031908.el7.iso" for a single-node
installation?
Thank you !
Leo
On Tue, Apr 2, 2019 at 1:45 PM Sahina Bose <sabose(a)redhat.com> wrote:
> Is it possible you have not cleared the gluster volume between installs?
>
> What's the corresponding error in vdsm.log?
>
>
> On Tue, Apr 2, 2019 at 4:07 PM Leo David <leoalex(a)gmail.com> wrote:
> >
> > And here are the last lines of the ansible_create_storage_domain log:
> >
> > 2019-04-02 10:53:49,139+0100 DEBUG var changed: host "localhost" var
> "otopi_storage_domain_details" type "<type 'dict'>" value: "{
> > "changed": false,
> > "exception": "Traceback (most recent call last):\n File
> \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 664,
> in main\n storage_domains_module.post_create_check(sd_id)\n File
> \"/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py\", line 526,
> in post_create_check\n id=storage_domain.id,\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in
> add\n return self._internal_add(storage_domain, headers, query, wait)\n
> File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
> in _internal_add\n return future.wait() if wait else future\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
> wait\n return self._code(response)\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
> callback\n self._check_fault(response)\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
> _check_fault\n self._raise_error(response, body)\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
> _raise_error\n raise error\nError: Fault reason is \"Operation Failed\".
> Fault detail is \"[]\". HTTP response code is 400.\n",
> > "failed": true,
> > "msg": "Fault reason is \"Operation Failed\". Fault detail is
> \"[]\". HTTP response code is 400."
> > }"
> > 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var
> "ansible_play_hosts" type "<type 'list'>" value: "[]"
> > 2019-04-02 10:53:49,141+0100 DEBUG var changed: host "localhost" var
> "play_hosts" type "<type 'list'>" value: "[]"
> > 2019-04-02 10:53:49,142+0100 DEBUG var changed: host "localhost" var
> "ansible_play_batch" type "<type 'list'>" value: "[]"
> > 2019-04-02 10:53:49,142+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Activate storage domain',
> 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True,
> u\'exception\': u\'Traceback (most recent call last):\\n File
> "/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 664,
> in main\\n storage_domains_module.post_create_check(sd_id)\\n File
> "/tmp/ansible_ovirt_storage_domain_payload_6Jxg5v/__main__.py", line 526',
> 'task_duration': 9, 'ansible_host': u'localhost', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}
> > 2019-04-02 10:53:49,143+0100 DEBUG ansible on_any args
> <ansible.executor.task_result.TaskResult object at 0x7f03fd025e50> kwargs
> ignore_errors:None
> > 2019-04-02 10:53:49,148+0100 INFO ansible stats {
> > "ansible_playbook":
> "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
> > "ansible_playbook_duration": "01:15 Minutes",
> > "ansible_result": "type: <type 'dict'>\nstr: {u'localhost':
> {'unreachable': 0, 'skipped': 6, 'ok': 23, 'changed': 0, 'failures': 1}}",
> > "ansible_type": "finish",
> > "status": "FAILED"
> > }
> > 2019-04-02 10:53:49,149+0100 INFO SUMMARY:
> > Duration Task Name
> > -------- --------
> > [ < 1 sec ] Execute just a specific set of steps
> > [ 00:02 ] Force facts gathering
> > [ 00:02 ] Check local VM dir stat
> > [ 00:02 ] Obtain SSO token using username/password credentials
> > [ 00:02 ] Fetch host facts
> > [ 00:01 ] Fetch cluster ID
> > [ 00:02 ] Fetch cluster facts
> > [ 00:02 ] Fetch Datacenter facts
> > [ 00:01 ] Fetch Datacenter ID
> > [ 00:01 ] Fetch Datacenter name
> > [ 00:02 ] Add glusterfs storage domain
> > [ 00:02 ] Get storage domain details
> > [ 00:02 ] Find the appliance OVF
> > [ 00:02 ] Parse OVF
> > [ 00:02 ] Get required size
> > [ FAILED ] Activate storage domain
> >
> > Any idea how to escalate this issue?
> > It just does not make sense to not be able to install a fresh node from
> scratch...
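In the meantime I'm also checking what the engine itself reports about the
host and the storage domains, straight against the REST API on the local
engine VM (credentials and FQDN below are placeholders):

# Host status as seen by the engine
curl -k -u 'admin@internal:password' \
     https://engine.example.com/ovirt-engine/api/hosts | grep -E '<name>|<status>'

# Storage domain status
curl -k -u 'admin@internal:password' \
     https://engine.example.com/ovirt-engine/api/storagedomains | grep -E '<name>|<status>'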
> >
> > Have a nice day !
> >
> > Leo
> >
> >
> > On Tue, Apr 2, 2019 at 12:11 PM Gobinda Das <godas(a)redhat.com> wrote:
> >>
> >> Hi Leo,
> >> Can you please paste the "df -Th" and "gluster v status" output?
> >> I want to make sure the engine volume is mounted and that the volumes and bricks are up.
> >> What does the vdsm log say?
> >>
> >> On Tue, Apr 2, 2019 at 2:06 PM Leo David <leoalex(a)gmail.com> wrote:
> >>>
> >>> Thank you very much !
> >>> I have just installed a fresh new node and triggered the single
> instance hyperconverged setup. It seems to fail at the final steps of the
> hosted-engine deployment:
> >>> [ INFO ] TASK [ovirt.hosted_engine_setup : Get required size]
> >>> [ INFO ] ok: [localhost]
> >>> [ INFO ] TASK [ovirt.hosted_engine_setup : Remove unsuitable storage
> domain]
> >>> [ INFO ] skipping: [localhost]
> >>> [ INFO ] TASK [ovirt.hosted_engine_setup : Check storage domain free
> space]
> >>> [ INFO ] skipping: [localhost]
> >>> [ INFO ] TASK [ovirt.hosted_engine_setup : Activate storage domain]
> >>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
> "[Cannot attach Storage. There is no active Host in the Data Center.]".
> HTTP response code is 409.
> >>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "Fault reason is \"Operation Failed\". Fault detail is \"[Cannot attach
> Storage. There is no active Host in the Data Center.]\". HTTP response code
> is 409."}
> >>> Also, the
> ovirt-hosted-engine-setup-ansible-create_storage_domain-201932112413-xkq6nb.log
> throws the following:
> >>>
> >>> 2019-04-02 09:25:40,420+0100 DEBUG var changed: host "localhost" var
> "otopi_storage_domain_details" type "<type 'dict'>" value: "{
> >>> "changed": false,
> >>> "exception": "Traceback (most recent call last):\n File
> \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 664,
> in main\n storage_domains_module.post_create_check(sd_id)\n File
> \"/tmp/ansible_ovirt_storage_domain_payload_87MSyY/__main__.py\", line 526,
> in post_create_check\n id=storage_domain.id,\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py\", line 3053, in
> add\n return self._internal_add(storage_domain, headers, query, wait)\n
> File \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 232,
> in _internal_add\n return future.wait() if wait else future\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 55, in
> wait\n return self._code(response)\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 229, in
> callback\n self._check_fault(response)\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 132, in
> _check_fault\n self._raise_error(response, body)\n File
> \"/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py\", line 118, in
> _raise_error\n raise error\nError: Fault reason is \"Operation Failed\".
> Fault detail is \"[Cannot attach Storage. There is no active Host in the
> Data Center.]\". HTTP response code is 409.\n",
> >>> "failed": true,
> >>> "msg": "Fault reason is \"Operation Failed\". Fault detail is
> \"[Cannot attach Storage. There is no active Host in the Data Center.]\".
> HTTP response code is 409."
> >>> }"
> >>>
> >>> I have used the ovirt-node-ng-installer-4.3.2-2019031908.el7.iso. So
> far, I am unable to deploy a single-node hyperconverged oVirt setup...
> >>> Any thoughts?
> >>>
> >>>
> >>>
> >>> On Mon, Apr 1, 2019 at 9:46 PM Simone Tiraboschi <stirabos(a)redhat.com>
> wrote:
> >>>>
> >>>>
> >>>>
> >>>> On Mon, Apr 1, 2019 at 6:14 PM Leo David <leoalex(a)gmail.com> wrote:
> >>>>>
> >>>>> Thank you Simone.
> >>>>> I've decided to go for a fresh install from the ISO, and I'll keep you
> posted if any troubles arise. But I am still trying to understand which
> services mount the LVs and volumes after configuration. There is
> nothing related in fstab, so I assume there are a couple of .mount files
> somewhere in the filesystem.
> >>>>> I'm just trying to understand the node's underlying workflow.
> >>>>
> >>>>
> >>>> hosted-engine configuration is stored in
> /etc/ovirt-hosted-engine/hosted-engine.conf; ovirt-ha-broker will mount the
> hosted-engine storage domain according to that and so ovirt-ha-agent will
> be able to start the engine VM.
> >>>> Everything else is just in the engine DB.
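(Noting this down for my own debugging: the config file and the services below
are the ones Simone refers to, the --vm-status check is my own addition.)

# Hosted-engine storage configuration used by the HA services
cat /etc/ovirt-hosted-engine/hosted-engine.conf

# The services that mount the hosted-engine domain and start the engine VM
systemctl status ovirt-ha-broker ovirt-ha-agent

# Overall hosted-engine state as seen from the node
hosted-engine --vm-status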
> >>>>
> >>>>>
> >>>>>
> >>>>> On Mon, Apr 1, 2019, 10:16 Simone Tiraboschi <stirabos(a)redhat.com>
> wrote:
> >>>>>>
> >>>>>> Hi,
> >>>>>> to understand what's failing I'd suggest starting by attaching the
> setup logs.
> >>>>>>
> >>>>>> On Sun, Mar 31, 2019 at 5:06 PM Leo David <leoalex(a)gmail.com>
> wrote:
> >>>>>>>
> >>>>>>> Hello Everyone,
> >>>>>>> Using a 4.3.2 installation, after running through the Hyperconverged
> setup, it fails at the last stage. It seems that the previously created
> "engine" volume is not mounted under the "/rhev" path, so the setup
> cannot finish the deployment.
> >>>>>>> Any idea which services are responsible for mounting the
> volumes on the oVirt Node distribution? I'm thinking that maybe this
> particular one failed to start for some reason...
> >>>>>>> Thank you very much !
> >>>>>>>
> >>>>>>> --
> >>>>>>> Best regards, Leo David
> >>>
> >>>
> >>>
> >>> --
> >>> Best regards, Leo David
> >>
> >>
> >>
> >> --
> >>
> >>
> >> Thanks,
> >> Gobinda
> >
> >
> >
> > --
> > Best regards, Leo David
>
--
Best regards, Leo David
trying to use Managed Block Storage in 4.3.2 with Ceph / Authentication Keys
by Matthias Leopold
Hi,
I upgraded my test environment to 4.3.2 and now I'm trying to set up a
"Managed Block Storage" domain with our Ceph 12.2 cluster. I think I got
all prerequisites, but when saving the configuration for the domain with
volume_driver "cinder.volume.drivers.rbd.RBDDriver" (and a couple of
other options) I get "VolumeBackendAPIException: Bad or unexpected
response from the storage volume backend API: Error connecting to ceph
cluster" in engine log (full error below). Unfortunately this is a
rather generic error message and I don't really know where to look next.
Accessing the rbd pool from the engine host with rbd CLI and the
configured "rbd_user" works flawlessly...
Although I don't think this is directly connected, there is one other
question that comes up for me: how are libvirt "Authentication Keys"
handled with Ceph "Managed Block Storage" domains? With "standalone
Cinder" setups like we are using now you have to configure a "provider"
of type "OpenStack Block Storage" where you can configure these keys
that are referenced in cinder.conf as "rbd_secret_uuid". How is this
supposed to work now?
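For completeness, this is how I verified Ceph connectivity from the engine
host, and the kind of libvirt secret that rbd_secret_uuid refers to in our
current standalone Cinder setup (pool name, user name and UUID below are just
examples from our environment):

# Access to the pool with the configured rbd_user from the engine host
ceph -s --id ovirt-cinder
rbd -p ovirt-volumes --id ovirt-cinder ls

# Libvirt secret matching rbd_secret_uuid (defined by hand here only to
# illustrate; with the "OpenStack Block Storage" provider this is pushed to
# the hosts, as far as I understand)
cat > ceph-secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <uuid>00000000-0000-0000-0000-000000000000</uuid>
  <usage type='ceph'>
    <name>client.ovirt-cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define ceph-secret.xml
virsh secret-set-value --secret 00000000-0000-0000-0000-000000000000 \
      --base64 "$(ceph auth get-key client.ovirt-cinder)"

My question is basically whether something equivalent is still needed (or
created automatically) for the new Managed Block Storage domains.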
Thanks for any advice, we are using oVirt with Ceph heavily and are very
interested in a tight integration of oVirt and Ceph.
Matthias
2019-04-01 11:14:55,128+02 ERROR
[org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor]
(default task-22) [b6665621-6b85-438e-8c68-266f33e55d79] cinderlib
execution failed: Traceback (most recent call last):
File "./cinderlib-client.py", line 187, in main
args.command(args)
File "./cinderlib-client.py", line 275, in storage_stats
backend = load_backend(args)
File "./cinderlib-client.py", line 217, in load_backend
return cl.Backend(**json.loads(args.driver))
File "/usr/lib/python2.7/site-packages/cinderlib/cinderlib.py", line
87, in __init__
self.driver.check_for_setup_error()
File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
line 288, in check_for_setup_error
with RADOSClient(self):
File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
line 170, in __init__
self.cluster, self.ioctx = driver._connect_to_rados(pool)
File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
line 346, in _connect_to_rados
return _do_conn(pool, remote, timeout)
File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 799, in
_wrapper
return r.call(f, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/retrying.py", line 229, in call
raise attempt.get()
File "/usr/lib/python2.7/site-packages/retrying.py", line 261, in get
six.reraise(self.value[0], self.value[1], self.value[2])
File "/usr/lib/python2.7/site-packages/retrying.py", line 217, in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py",
line 344, in _do_conn
raise exception.VolumeBackendAPIException(data=msg)
VolumeBackendAPIException: Bad or unexpected response from the storage
volume backend API: Error connecting to ceph cluster.
Strange storage data center failure
by Fabrice Bacchella
I have a storage data center that I can't use. It's a local one.
When I look at vdsm.log:
2019-04-02 10:55:48,336+0200 INFO (jsonrpc/2) [vdsm.api] FINISH connectStoragePool error=Cannot find master domain: u'spUUID=063d1217-6194-48a0-943e-3d873f2147de, msdUUID=49b1bd15-486a-4064-878e-8030c8108e09' from=::ffff:XXXXX,59590, task_id=a56a5869-a219-4659-baa3-04f673b2ad55 (api:50)
2019-04-02 10:55:48,336+0200 ERROR (jsonrpc/2) [storage.TaskManager.Task] (Task='a56a5869-a219-4659-baa3-04f673b2ad55') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
return fn(*args, **kargs)
File "<string>", line 2, in connectStoragePool
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1035, in connectStoragePool
spUUID, hostID, msdUUID, masterVersion, domainsMap)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1097, in _connectStoragePool
res = pool.connect(hostID, msdUUID, masterVersion)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 700, in connect
self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1274, in __rebuild
self.setMasterDomain(msdUUID, masterVersion)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1495, in setMasterDomain
raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain: u'spUUID=063d1217-6194-48a0-943e-3d873f2147de, msdUUID=49b1bd15-486a-4064-878e-8030c8108e09'
2019-04-02 10:55:48,336+0200 INFO (jsonrpc/2) [storage.TaskManager.Task] (Task='a56a5869-a219-4659-baa3-04f673b2ad55') aborting: Task is aborted: "Cannot find master domain: u'spUUID=063d1217-6194-48a0-943e-3d873f2147de, msdUUID=49b1bd15-486a-4064-878e-8030c8108e09'" - code 304 (task:1181)
2019-04-02 11:44:50,862+0200 INFO (jsonrpc/0) [vdsm.api] FINISH getSpmStatus error=Unknown pool id, pool not connected: (u'063d1217-6194-48a0-943e-3d873f2147de',) from=::ffff:10.83.16.34,46546, task_id=cfb1c871-b1d4-4b1a-b2a5-f91ddfaba
54b (api:50)
2019-04-02 11:44:50,862+0200 ERROR (jsonrpc/0) [storage.TaskManager.Task] (Task='cfb1c871-b1d4-4b1a-b2a5-f91ddfaba54b') Unexpected error (task:875)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
return fn(*args, **kargs)
File "<string>", line 2, in getSpmStatus
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 634, in getSpmStatus
pool = self.getPool(spUUID)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 350, in getPool
raise se.StoragePoolUnknown(spUUID)
StoragePoolUnknown: Unknown pool id, pool not connected: (u'063d1217-6194-48a0-943e-3d873f2147de',)
063d1217-6194-48a0-943e-3d873f2147de is indeed the data center ID and 49b1bd15-486a-4064-878e-8030c8108e09 the storage domain:
<storage_domain href="/ovirt-engine/api/storagedomains/49b1bd15-486a-4064-878e-8030c8108e09" id="49b1bd15-486a-4064-878e-8030c8108e09">
<storage>
<type>fcp</type>
<volume_group id="cEOxNG-R4fv-EDo3-Y3lm-ZMAH-xOT4-Nn8QyP">
<logical_units>
</logical_units>
</volume_group>
</storage>
<storage_format>v4</storage_format>
<data_centers>
<data_center href="/ovirt-engine/api/datacenters/063d1217-6194-48a0-943e-3d873f2147de" id="063d1217-6194-48a0-943e-3d873f2147de"/>
</data_centers>
</storage_domain>
In engine.log, I'm also getting:
2019-04-02 11:43:57,531+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-12) [] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand' return value '
TaskStatusListReturn:{status='Status [code=654, message=Not SPM: ()]'}
'
lsblk shows that the requested volumes are here:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
cciss!c0d1 104:16 0 1.9T 0 disk
|-49b1bd15--486a--4064--878e--8030c8108e09-metadata 253:0 0 512M 0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-outbox 253:1 0 128M 0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-xleases 253:2 0 1G 0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-leases 253:3 0 2G 0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-ids 253:4 0 128M 0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-inbox 253:5 0 128M 0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-master 253:6 0 1G 0 lvm
|-49b1bd15--486a--4064--878e--8030c8108e09-6225ddc3--b600--49ef--8de4--6e53bf4cad1f 253:7 0 128M 0 lvm
`-49b1bd15--486a--4064--878e--8030c8108e09-bdac3a3a--8633--41bf--921d--db2cf31f5d1c 253:8 0 128M 0 lvm
There is no useful data on them, so I don't mind destroying everything. But using the GUI I can't do anything with that domain.
I'm running 4.2.7.5-1.el7 on RHEL 7.6.
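Since the domain is empty anyway, my current plan is to "Destroy" it in the
GUI and then wipe the old VG from the host before re-adding the LUN, roughly
like this (device names taken from the lsblk output above, so double-check
before running anything destructive):

# Deactivate and remove the old storage domain VG and PV on the host
vgchange -an 49b1bd15-486a-4064-878e-8030c8108e09
vgremove 49b1bd15-486a-4064-878e-8030c8108e09
pvremove /dev/cciss/c0d1

# Make sure no stale signatures are left before re-using the LUN
wipefs -a /dev/cciss/c0d1

Does that sound reasonable, or is there a cleaner way to recover the data
center?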
oVirt Survey 2019 results
by Sandro Bonazzola
Thanks to the 143 participants in the oVirt Survey 2019!
The survey is now closed and the results are publicly available at
https://bit.ly/2JYlI7U
We'll analyze the collected data in order to improve oVirt based on your
feedback.
As a first step after reading the results, I'd like to invite the 30 people
who replied that they're willing to contribute code to send an email to
devel(a)ovirt.org introducing themselves: we'll be more than happy to welcome
them and help them get started.
I would also like to invite the 17 people who replied that they'd like to help
organize oVirt events in their area to either get in touch with me or
introduce themselves to users(a)ovirt.org so we can discuss event
organization.
Last but not least, I'd like to invite the 38 people willing to contribute
documentation and the one person willing to contribute localization to
introduce themselves to devel(a)ovirt.org.
Thanks!
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
Not being able to create new partition with fdisk
by pawel.zajac@mikrosimage.com
Hi,
I am not able to add a new partition or extend an existing one in a CentOS 7 VM on oVirt 4.2.7.5-1.el7.
As soon as I write the partition table, the VM pauses with the error:
"VM XXXX has been paused due to unknown storage error."
I can't see much in the logs; the messages log doesn't even show it.
The VM just pauses.
The VM storage is on an NFS share, so not a block device like in most of the issues I found.
I searched the web for a similar issue, but no luck so far.
Any ideas?
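For the record, this is what I'm planning to check next on the host that runs
the VM (paths are the defaults as far as I know; the grep patterns are just
the strings I'd expect for a storage error):

# Reason for the pause from qemu's side
grep -iE 'error|enospc' /var/log/libvirt/qemu/XXXX.log | tail

# vdsm side: the pause event and the storage error around that time
grep -iE 'abnormal vm stop|pause' /var/log/vdsm/vdsm.log | tail

# Free space on the NFS domain (thin-provisioned disks grow on writes)
df -h /rhev/data-center/mnt/<nfs_server>:_<export_path>/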
Thank you.
Best,
Pawel