No problem :)
Is it possible to migrate existing VMs to managed block storage?
We do not have OVF support or anything like that for MBS domains, but you can
attach MBS disks to existing VMs (see the sketch below).
Or do you mean moving/copying existing disks to an MBS domain? In that case
the answer is unfortunately no.
Also is it possible to host the hosted engine on this storage?
Unfortunately no
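For the disk-attachment part, a rough sketch with the oVirt Python SDK
(ovirtsdk4) could look like this; the engine URL, credentials, VM name and
disk id are placeholders, and the MBS disk is assumed to already exist on the
MBS domain:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # Placeholders: engine URL, credentials, VM name and disk id are examples only.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        ca_file='ca.pem',
    )

    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=myvm')[0]

    # Attach the existing MBS disk to the VM.
    attachments_service = vms_service.vm_service(vm.id).disk_attachments_service()
    attachments_service.add(
        types.DiskAttachment(
            disk=types.Disk(id='<mbs-disk-uuid>'),
            interface=types.DiskInterface.VIRTIO_SCSI,
            bootable=False,
            active=True,
        ),
    )

    connection.close()

The equivalent can also be done from the Administration Portal by attaching an
existing disk to the VM.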
On Tue, Jul 9, 2019 at 4:57 PM Dan Poltawski <dan.poltawski(a)tnp.net.uk>
wrote:
On Tue, 2019-07-09 at 11:12 +0300, Benny Zlotnik wrote:
> VM live migration is supported and should work
> Can you add engine and cinderlib logs?
Sorry - looks like once again this was a misconfiguration by me on the Ceph
side.
Is it possible to migrate existing VMs to managed block storage? Also,
is it possible to host the hosted engine on this storage?
Thanks again for your help,
Dan
>
> On Tue, Jul 9, 2019 at 11:01 AM Dan Poltawski <
> dan.poltawski(a)tnp.net.uk> wrote:
> > On Tue, 2019-07-09 at 08:00 +0100, Dan Poltawski wrote:
> > > I've now managed to successfully create/mount/delete volumes!
> >
> > However, I'm seeing live migrations stay stuck. Is this supported?
> >
> > (gdb) py-list
> >  345            client.conf_set('rados_osd_op_timeout', timeout)
> >  346            client.conf_set('rados_mon_op_timeout', timeout)
> >  347            client.conf_set('client_mount_timeout', timeout)
> >  348
> >  349            client.connect()
> > >350            ioctx = client.open_ioctx(pool)
> >  351            return client, ioctx
> >  352        except self.rados.Error:
> >  353            msg = _("Error connecting to ceph cluster.")
> >  354            LOG.exception(msg)
> >  355            client.shutdown()
> >
> >
> > (gdb) py-bt
> > #15 Frame 0x3ea0e50, for file /usr/lib/python2.7/site-
> > packages/cinder/volume/drivers/rbd.py, line 350, in _do_conn
> > (pool='storage-ssd', remote=None, timeout=-1, name='ceph',
> > conf='/etc/ceph/ceph.conf', user='ovirt',
client=<rados.Rados at
> > remote
> > 0x7fb1f4f83a60>)
> > ioctx = client.open_ioctx(pool)
> > #20 Frame 0x3ea4620, for file /usr/lib/python2.7/site-
> > packages/retrying.py, line 217, in call
> > (self=<Retrying(_retry_on_exception=<function at remote
> > 0x7fb1f4f23488>, _wait_exponential_max=1073741823,
> > _wait_incrementing_start=0, stop=<function at remote
> > 0x7fb1f4f23578>,
> > _stop_max_attempt_number=5, _wait_incrementing_increment=100,
> > _wait_random_max=1000, _retry_on_result=<instancemethod at remote
> > 0x7fb1f51da550>, _stop_max_delay=100, _wait_fixed=1000,
> > _wrap_exception=False, _wait_random_min=0,
> > _wait_exponential_multiplier=1, wait=<function at remote
> > 0x7fb1f4f23500>) at remote 0x7fb1f4f1ae90>, fn=<function at remote
> > 0x7fb1f4f23668>, args=(None, None, None), kwargs={},
> > start_time=1562658179214, attempt_number=1)
> > attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
> > #25 Frame 0x3e49d50, for file /usr/lib/python2.7/site-
> > packages/cinder/utils.py, line 818, in _wrapper (args=(None, None,
> > None), kwargs={}, r=<Retrying(_retry_on_exception=<function at
> > remote
> > 0x7fb1f4f23488>, _wait_exponential_max=1073741823,
> > _wait_incrementing_start=0, stop=<function at remote
> > 0x7fb1f4f23578>,
> > _stop_max_attempt_number=5, _wait_incrementing_increment=100,
> > _wait_random_max=1000, _retry_on_result=<instancemethod at remote
> > 0x7fb1f51da550>, _stop_max_delay=100, _wait_fixed=1000,
> > _wrap_exception=False, _wait_random_min=0,
> > _wait_exponential_multiplier=1, wait=<function at remote
> > 0x7fb1f4f23500>) at remote 0x7fb1f4f1ae90>)
> > return r.call(f, *args, **kwargs)
> > #29 Frame 0x7fb1f4f9a810, for file /usr/lib/python2.7/site-
> > packages/cinder/volume/drivers/rbd.py, line 358, in
> > _connect_to_rados
> > (self=<RBDDriver(_target_names=[], rbd=<module at remote
> > 0x7fb20583e830>, _is_replication_enabled=False, _execute=<function
> > at
> > remote 0x7fb2041242a8>, _active_config={'name': 'ceph',
'conf':
> > '/etc/ceph/ceph.conf', 'user': 'ovirt'},
_active_backend_id=None,
> > _initialized=False, db=<DBAPI(_backend=<module at remote
> > 0x7fb203f8d520>, qos_specs_get=<instancemethod at remote
> > 0x7fb1f677d460>, _lock=<Semaphore(counter=1,
> > _waiters=<collections.deque at remote 0x7fb1f5246d70>) at remote
> > 0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
> > 'inc_retry_interval': True, 'retry_interval': 1,
> > 'max_retry_interval':
> > 10}, _backend_mapping={'sqlalchemy':
'cinder.db.sqlalchemy.api'},
> > _backend_name='sqlalchemy', use_db_reconnect=False,
> > get_by_id=<instancemethod at remote 0x7fb1f61d8050>,
> > volume_type_get=<instancemethod at remote 0x7fb1f61c0f50>) at
> > remote
> > 0x7fb2003aab10>, target_mapping={'tgtadm':
> > 'cinder.vol...(truncated)
> > return _do_conn(pool, remote, timeout)
> > #33 Frame 0x7fb1f4f5b220, for file /usr/lib/python2.7/site-
> > packages/cinder/volume/drivers/rbd.py, line 177, in __init__
> > (self=<RADOSClient(driver=<RBDDriver(_target_names=[], rbd=<module
> > at
> > remote 0x7fb20583e830>, _is_replication_enabled=False,
> > _execute=<function at remote 0x7fb2041242a8>,
> > _active_config={'name':
> > 'ceph', 'conf': '/etc/ceph/ceph.conf', 'user':
'ovirt'},
> > _active_backend_id=None, _initialized=False,
> > db=<DBAPI(_backend=<module
> > at remote 0x7fb203f8d520>, qos_specs_get=<instancemethod at remote
> > 0x7fb1f677d460>, _lock=<Semaphore(counter=1,
> > _waiters=<collections.deque at remote 0x7fb1f5246d70>) at remote
> > 0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
> > 'inc_retry_interval': True, 'retry_interval': 1,
> > 'max_retry_interval':
> > 10}, _backend_mapping={'sqlalchemy':
'cinder.db.sqlalchemy.api'},
> > _backend_name='sqlalchemy', use_db_reconnect=False,
> > get_by_id=<instancemethod at remote 0x7fb1f61d8050>,
> > volume_type_get=<instancemethod at remote 0x7fb1f61c0f50>) at
> > remote
> > 0x7fb2003aab10>, target_mapping={'tgtadm': ...(truncated)
> > self.cluster, self.ioctx = driver._connect_to_rados(pool)
> > #44 Frame 0x7fb1f4f9a620, for file /usr/lib/python2.7/site-
> > packages/cinder/volume/drivers/rbd.py, line 298, in
> > check_for_setup_error (self=<RBDDriver(_target_names=[],
> > rbd=<module at
> > remote 0x7fb20583e830>, _is_replication_enabled=False,
> > _execute=<function at remote 0x7fb2041242a8>,
> > _active_config={'name':
> > 'ceph', 'conf': '/etc/ceph/ceph.conf', 'user':
'ovirt'},
> > _active_backend_id=None, _initialized=False,
> > db=<DBAPI(_backend=<module
> > at remote 0x7fb203f8d520>, qos_specs_get=<instancemethod at remote
> > 0x7fb1f677d460>, _lock=<Semaphore(counter=1,
> > _waiters=<collections.deque at remote 0x7fb1f5246d70>) at remote
> > 0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
> > 'inc_retry_interval': True, 'retry_interval': 1,
> > 'max_retry_interval':
> > 10}, _backend_mapping={'sqlalchemy':
'cinder.db.sqlalchemy.api'},
> > _backend_name='sqlalchemy', use_db_reconnect=False,
> > get_by_id=<instancemethod at remote 0x7fb1f61d8050>,
> > volume_type_get=<instancemethod at remote 0x7fb1f61c0f50>) at
> > remote
> > 0x7fb2003aab10>, target_mapping={'tgtadm':
'cinder...(truncated)
> > with RADOSClient(self):
> > #48 Frame 0x3e5bef0, for file /usr/lib/python2.7/site-
> > packages/cinderlib/cinderlib.py, line 88, in __init__
> > (self=<Backend(driver=<RBDDriver(_target_names=[], rbd=<module at
> > remote 0x7fb20583e830>, _is_replication_enabled=False,
> > _execute=<function at remote 0x7fb2041242a8>,
> > _active_config={'name':
> > 'ceph', 'conf': '/etc/ceph/ceph.conf', 'user':
'ovirt'},
> > _active_backend_id=None, _initialized=False,
> > db=<DBAPI(_backend=<module
> > at remote 0x7fb203f8d520>, qos_specs_get=<instancemethod at remote
> > 0x7fb1f677d460>, _lock=<Semaphore(counter=1,
> > _waiters=<collections.deque at remote 0x7fb1f5246d70>) at remote
> > 0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
> > 'inc_retry_interval': True, 'retry_interval': 1,
> > 'max_retry_interval':
> > 10}, _backend_mapping={'sqlalchemy':
'cinder.db.sqlalchemy.api'},
> > _backend_name='sqlalchemy', use_db_reconnect=False,
> > get_by_id=<instancemethod at remote 0x7fb1f61d8050>,
> > volume_type_get=<instancemethod at remote 0x7fb1f61c0f50>) at
> > remote
> > 0x7fb2003aab10>, target_mapping={'tgtadm':
> > 'cinder.volume.t...(truncated)
> > self.driver.check_for_setup_error()
> > #58 Frame 0x3d15cf0, for file ./cinderlib-client.py, line 210, in
> > load_backend
(args=<Namespace(driver='{"volume_backend_name":"ceph-
> > storage-ssd-ovirt","volume_driver":"cinder.volume.drivers.rbd.RBDDriver",
> > "rbd_ceph_conf":"/etc/ceph/ceph.conf","rbd_user":"ovirt",
> > "rbd_keyring_conf":"/etc/ceph/ceph.client.ovirt.keyring",
> > "rbd_pool":"storage-ssd","use_multipath_for_image_xfer":"true"}',
> > db_url='postgresql+psycopg2://ovirt_cinderlib:ViRsNB3Dnwy5wmL0lLuDEq@localhost:5432/ovirt_cinderlib',
> > correlation_id='7733e7cc', command=<function at remote 0x7fb1f52725f0>,
> > volume_id='9d858d39-bbd4-4cbe-9f2c-5bef25ed0525',
> > connector_info='{"ip":null,"host":"compute00.hq.tnp.infra","os_type":"linux2",
> > "platform":"x86_64","initiator":"iqn.1994-05.com.redhat:c6ab662d439c",
> > "multipath":true,"do_local_attach":false}') at remote 0x7fb1f5215910>,
> > persistence_config={'connection':
> > 'postgresql+psycopg2://ovirt_cinderlib:ViRsNB3Dnwy5wmL0lLuDEq@localhost:5432/ovirt_cinderlib',
> > 'storage': 'db'})
> > Python Exception <type 'exceptions.IOError'> [Errno 2] No such file or
> > directory: './cinderlib-client.py':
> > Error occurred in Python command: [Errno 2] No such file or
> > directory: './cinderlib-client.py'
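For reference, the py-list frame above is blocked in client.open_ioctx(),
which is where a Ceph-side misconfiguration typically makes the driver hang.
A minimal standalone check with the python-rados bindings, reusing the
ceph.conf, 'ovirt' user and 'storage-ssd' pool that appear in the trace (the
timeout values here are only illustrative), would be:

    import rados

    # Same conf file and user as in the cinderlib driver config above.
    client = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.ovirt')

    # Illustrative timeouts so a misconfiguration fails fast instead of hanging.
    client.conf_set('rados_osd_op_timeout', '5')
    client.conf_set('rados_mon_op_timeout', '5')
    client.conf_set('client_mount_timeout', '5')

    client.connect()
    try:
        # Mirrors the call the RBD driver is stuck in: opening an ioctx on the pool.
        ioctx = client.open_ioctx('storage-ssd')
        print('connected, cluster fsid: %s' % client.get_fsid())
        ioctx.close()
    finally:
        client.shutdown()

If this script hangs or fails in the same place, the problem is on the Ceph
configuration or connectivity side rather than in cinderlib itself.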