I would offer to do it, but I've got a lot on my plate right now and I
think you could probably fix the blocking issue pretty quickly.
Thanks!
On 10/09/2014 07:48 AM, Aline Manera wrote:
> On 10/09/2014 06:04 AM, Royce Lv wrote:
>> On 2014年10月03日 03:17, Christy Perez wrote:
>>> On 09/29/2014 01:24 AM, Royce Lv wrote:
>>>> On 2014年09月27日 00:09, Christy Perez wrote:
>>>>> Thanks Royce! A few questions below...
>>>>>
>>>>> On 09/26/2014 04:48 AM, Royce Lv wrote:
>>>>>> On 2014年09月25日 06:29, Christy Perez wrote:
>>>>>>> The concept of a ref count for disks had been added, but this count
>>>>>>> was dependent upon the pool name for a disk. However, the pool name
>>>>>>> wasn't being stored in the VM XML so it wasn't possible to retrieve
>>>>>>> the pool for the disk in order to update the ref count.
>>>>>> Thanks for the fix; I noticed this a while ago but always forget to
>>>>>> document it.
>>>>>>
>>>>>> I think using 'pool' and 'vol' is a nice way, but we still need to check
>>>>>> that this works well in all distros for all storage types (iscsi,
>>>>>> logical, etc.). According to the libvirt docs, this was introduced in
>>>>>> 1.0.5, and the 'mode' attr in 1.1.1, so I'm not sure it will work on
>>>>>> PowerKVM and RHEL.
>>>>> Absolutely. That's the big reason I sent this as an RFC. I have tested
>>>>> it on x86 on Fedora, and on PowerKVM, but only with file-based storage.
>>>>> There's a lot more testing to do, and before I get too deep into that, I
>>>>> wanted to make sure no one objected to any of the changes I'd made.
>>>>>
>>>>>> If we are going to adopt the current way, we can just update the ref
>>>>>> count when adding a new disk to the vm (as it is chosen from a pool);
>>>>>> create() and delete() in vmstorages.py would need to change. Though the
>>>>>> path is stored rather than the pool in the vm xml, we already do a path
>>>>>> query over all existing disks to guarantee the right ref count is
>>>>>> recorded. So what remains is updating this ref count when a volume is
>>>>>> added to or deleted from a vm, roughly as sketched below.
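>>>>>>
>>>>>> Something like this (untested sketch; it assumes we would key the
>>>>>> objectstore entry by the disk's path rather than by pool:volume):
>>>>>>
>>>>>>     # inside VMStoragesModel.create(), after attachDeviceFlags() succeeds
>>>>>>     with self.objstore as session:
>>>>>>         try:
>>>>>>             cnt = session.get('storagevolume', params['path'])['ref_cnt']
>>>>>>         except NotFoundError:
>>>>>>             cnt = 0
>>>>>>         session.store('storagevolume', params['path'],
>>>>>>                       {'ref_cnt': cnt + 1})
>>>>>>
>>>>>>     # ...and the mirror-image decrement in VMStorageModel.delete()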
>>>>> I'm not sure I follow. It sounds like you're explaining how things
>>>>> worked before my patch, but also making suggestions for what else I
>>>>> might change? Can you clarify?
>>>> I mean: if you find we cannot use the 'volume' attrib in all distros,
>>>> what changes do we need in order to fix the ref_cnt handling in the
>>>> current "path"-based solution?
>>> Oh. Hm. I looked up some distros to see what they ship with.
>>>
>>> RHEL 6.0: libvirt-0.8.1-27
>>> RHEL 6.1: libvirt-0.8.7-18
>>> RHEL 6.2: libvirt-0.9.4-23
>>> RHEL 6.3: libvirt-0.9.10-21
>>> RHEL 6.4: libvirt-0.10.2-18
>>> RHEL/CentOS 6.5: libvirt-0.10.2-29
>>>
>>> RHEL/CentOS 7.0: libvirt-1.1.1-29
>>>
>>> Fedora 18: libvirt-0.10.2.2-3
>>> Fedora 19: libvirt-1.0.5.1-1
>>> Fedora 20: libvirt-1.1.3.1-2
>>>
>>> SLES 11 SP2: libvirt-0.9.6-0.13.18
>>> SLES 11 SP3: libvirt-1.0.5.1-0.7.10
>>>
>>> OpenSuSE 12.2: libvirt-0.9.11.4-1.2.1
>>> OpenSuSE 12.3: libvirt-1.0.2-1.3.1
>>> OpenSuSE 13.1: libvirt-1.1.2-2.5.1
>>>
>>> So, it looks like this isn't possible at all right now, since it would
>>> effectively break all of RHEL6. That's a huge bummer because I like the
>>> idea of changing to volume and keeping track of which pool a disk is in.
>>>
>>> Without that, we'll have to change the way the disk is stored in the
>>> object store, instead of changing how we look it up. The only way I see
>>> is to store the full path instead of pool:image_name. Then (if we do
>>> need the pool) we'll have to query the list of pools and find which
>>> pool's path matches the image's base path. Storing the whole path has
>>> the added benefit of making sure that any disks not in a kimchi pool are
>>> also stored, and not added to any VM twice.
>>>
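>>> For the pool lookup, I'm picturing something like this (rough sketch,
>>> untested; it assumes every pool reports a flat <target><path> in its
>>> XML, which would still need checking for iscsi/logical pools):
>>>
>>>     import os
>>>     from lxml import objectify
>>>
>>>     def find_pool_by_path(conn, disk_path):
>>>         # Compare the disk's directory against each pool's target path
>>>         conn = conn.get()
>>>         pools = conn.listStoragePools() + conn.listDefinedStoragePools()
>>>         for name in pools:
>>>             xml = conn.storagePoolLookupByName(name).XMLDesc(0)
>>>             pool_path = str(objectify.fromstring(xml).target.path)
>>>             if os.path.dirname(disk_path) == pool_path:
>>>                 return name
>>>         return ''
>>>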
>>> What do you think?
>> Sounds reasonable! Aline, would you pls add this to our release plan?
> Done!
>
> I added a task with this patch title (Prevent disks from being added
> more than once to guest) to
> https://github.com/kimchi-project/kimchi/wiki/Todo-1.4 and assigned it to
> Christy.
>
>>> Thanks!
>>>
>>> - Christy
>>>
>>>>>>> To fix this, this patch changes the way that disks are added when
>>>>>>> created in libvirt storage pools. Instead of files, they are treated
>>>>>>> as volumes in pools.
>>>>>>>
>>>>>>> Note: Because of this modification, a small inconsistency in the API,
>>>>>>> which states that the pool for a vm's "storages" is returned, is
>>>>>>> corrected. The pool is returned as an empty string if the disk or
>>>>>>> volume was not created from a pool (e.g. a remote ISO).
>>>>>>>
>>>>>>> Signed-off-by: Christy Perez <christy(a)linux.vnet.ibm.com>
>>>>>>> ---
>>>>>>> src/kimchi/model/storagevolumes.py | 13 ++++++--
>>>>>>> src/kimchi/model/vmstorages.py | 68 +++++++++++++++++++++++++++++++++++---
>>>>>>> src/kimchi/vmdisks.py | 33 ++++++++++++++----
>>>>>>> src/kimchi/vmtemplate.py | 41 ++++++++++++++++-------
>>>>>>> 4 files changed, 129 insertions(+), 26 deletions(-)
>>>>>>>
>>>>>>> diff --git a/src/kimchi/model/storagevolumes.py b/src/kimchi/model/storagevolumes.py
>>>>>>> index f9f226f..af97748 100644
>>>>>>> --- a/src/kimchi/model/storagevolumes.py
>>>>>>> +++ b/src/kimchi/model/storagevolumes.py
>>>>>>> @@ -249,19 +249,26 @@ def _get_ref_cnt(self, pool, name, path):
>>>>>>> try:
>>>>>>> ref_cnt = session.get('storagevolume',
>>>>>>> vol_id)['ref_cnt']
>>>>>>> except NotFoundError:
>>>>>>> + kimchi_log.info('Volume %s not found in obj store.' % vol_id)
>>>>>>> # Fix storage volume created outside kimchi scope
>>>>>>> ref_cnt = 0
>>>>>>> - # try to find this volume in exsisted vm
>>>>>>> + # try to find this volume in existing vm
>>>>>>> vms = VMsModel.get_vms(self.conn)
>>>>>>> for vm in vms:
>>>>>>> dom = VMModel.get_vm(vm, self.conn)
>>>>>>> storages = get_vm_disk_list(dom)
>>>>>>> for disk in storages:
>>>>>>> - d_info = get_vm_disk(dom, disk)
>>>>>>> + d_info = get_vm_disk(dom, disk,
>>>>>>> self.conn)
>>>>>>> if path == d_info['path']:
>>>>>>> ref_cnt = ref_cnt + 1
>>>>>>> - session.store('storagevolume', vol_id,
>>>>>>> + try:
>>>>>>> + session.store('storagevolume', vol_id,
>>>>>>> {'ref_cnt': ref_cnt})
>>>>>>> + except Exception as e:
>>>>>>> + # Just log an error if the obj store fails:
>>>>>>> + kimchi_log.warning('Unable to store storage volume id in '
>>>>>>> + 'objectstore due error: %s', e.message)
>>>>>>> +
>>>>>>> except Exception as e:
>>>>>>> # This exception is going to catch errors returned by 'with',
>>>>>>> # specially ones generated by 'session.store'. It is outside
>>>>>>> diff --git a/src/kimchi/model/vmstorages.py b/src/kimchi/model/vmstorages.py
>>>>>>> index 4d6c0b2..1b2c0fe 100644
>>>>>>> --- a/src/kimchi/model/vmstorages.py
>>>>>>> +++ b/src/kimchi/model/vmstorages.py
>>>>>>> @@ -32,7 +32,7 @@
>>>>>>> from kimchi.model.vms import DOM_STATE_MAP, VMModel
>>>>>>> from kimchi.model.storagevolumes import StorageVolumeModel
>>>>>>> from kimchi.model.utils import get_vm_config_flag
>>>>>>> -from kimchi.utils import check_url_path
>>>>>>> +from kimchi.utils import check_url_path, kimchi_log
>>>>>>> from kimchi.osinfo import lookup
>>>>>>> from kimchi.vmdisks import get_device_xml, get_vm_disk, get_vm_disk_list
>>>>>>> from kimchi.vmdisks import DEV_TYPE_SRC_ATTR_MAP
>>>>>>> @@ -73,6 +73,11 @@ def _get_storage_xml(params, ignore_source=False):
>>>>>>> source = E.source(protocol=output.scheme,
>>>>>>> name=output.path)
>>>>>>> source.append(host)
>>>>>>> disk.append(source)
>>>>>>> + elif src_type == 'volume':
>>>>>>> + source = E.source()
>>>>>>> + source.set('pool', params.get('pool'))
>>>>>>> + source.set('volume', params.get('vol'))
>>>>>>> + disk.append(source)
>>>>>>> else:
>>>>>>> # Fixing source attribute
>>>>>>> source = E.source()
>>>>>>> @@ -138,6 +143,8 @@ def create(self, vm_name, params):
>>>>>>> # Path will never be blank due to API.json verification.
>>>>>>> # There is no need to cover this case here.
>>>>>>> params['format'] = 'raw'
>>>>>>> + vol_info = None
>>>>>>> + vol_store_name = None
>>>>>>> if not ('vol' in params) ^ ('path' in params):
>>>>>>> raise InvalidParameter("KCHVMSTOR0017E")
>>>>>>> if params.get('vol'):
>>>>>>> @@ -146,6 +153,7 @@ def create(self, vm_name, params):
>>>>>>> vol_info = StorageVolumeModel(
>>>>>>> conn=self.conn,
>>>>>>> objstore=self.objstore).lookup(pool,
>>>>>>> params['vol'])
>>>>>>> + vol_store_name = '%s:%s' % (pool, params['vol'])
>>>>>>> except KeyError:
>>>>>>> raise InvalidParameter("KCHVMSTOR0012E")
>>>>>>> except Exception as e:
>>>>>>> @@ -158,6 +166,10 @@ def create(self, vm_name, params):
>>>>>>> params['format'] = vol_info['format']
>>>>>>> params['path'] = vol_info['path']
>>>>>>> params['src_type'] = _check_path(params['path'])
>>>>>>> + if params['src_type'] is 'file' and params.get('pool') and \
>>>>>> I think if it is 'block' we probably still want to use 'volume'.
>>>>> I'll look into that. Thanks.
>>>>>
>>>>>>> + params.get('vol'):
>>>>>>> + params['src_type'] = 'volume'
>>>>>>> + # change from 'file' to volume if in a pool ^^
>>>>>>> if (params['bus'] not in HOTPLUG_TYPE
>>>>>>> and DOM_STATE_MAP[dom.info()[0]] != 'shutoff'):
>>>>>>> raise InvalidOperation('KCHVMSTOR0011E')
>>>>>>> @@ -171,6 +183,16 @@ def create(self, vm_name, params):
>>>>>>> dom.attachDeviceFlags(dev_xml, get_vm_config_flag(dom, 'all'))
>>>>>>> except Exception as e:
>>>>>>> raise OperationFailed("KCHVMSTOR0008E", {'error': e.message})
>>>>>>> +
>>>>>>> + try:
>>>>>> I suggest we wrap a helper function to update ref_cnt rather than
>>>>>> spreading this code everywhere.
>>>>> Same. Thanks.
>>>>>
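>>>>> Something along these lines is what I'll probably add (rough sketch;
>>>>> the name and placement are just placeholders):
>>>>>
>>>>>     def _update_volume_ref_cnt(objstore, vol_id, delta):
>>>>>         # Adjust the stored ref_cnt for vol_id by delta, never below 0
>>>>>         try:
>>>>>             with objstore as session:
>>>>>                 try:
>>>>>                     cnt = session.get('storagevolume', vol_id)['ref_cnt']
>>>>>                 except NotFoundError:
>>>>>                     cnt = 0
>>>>>                 session.store('storagevolume', vol_id,
>>>>>                               {'ref_cnt': max(cnt + delta, 0)})
>>>>>         except Exception as e:
>>>>>             kimchi_log.warning('Unable to update ref_cnt for %s: %s',
>>>>>                                vol_id, e.message)
>>>>>
>>>>> Then create() would call it with +1 and delete() with -1.
>>>>>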
>>>>>>> + new_ref_cnt = vol_info['ref_cnt'] + 1
>>>>>>> + with self.objstore as session:
>>>>>>> + session.store('storagevolume', vol_store_name,
>>>>>>> {'ref_cnt': new_ref_cnt})
>>>>>>> + except Exception as e:
>>>>>>> + # Just log an error if the obj store fails:
>>>>>>> + kimchi_log.warning('Unable to store storage volume id in '
>>>>>>> + 'objectstore due error: %s', e.message)
>>>>>>> +
>>>>>>> return params['dev']
>>>>>>>
>>>>>>> def _get_storage_device_name(self, vm_name, params):
>>>>>>> @@ -192,25 +214,41 @@ def get_list(self, vm_name):
>>>>>>> dom = VMModel.get_vm(vm_name, self.conn)
>>>>>>> return get_vm_disk_list(dom)
>>>>>>>
>>>>>>> -
>>>>>>> class VMStorageModel(object):
>>>>>>> def __init__(self, **kargs):
>>>>>>> self.conn = kargs['conn']
>>>>>>> + self.objstore = kargs['objstore']
>>>>>>>
>>>>>>> def lookup(self, vm_name, dev_name):
>>>>>>> # Retrieve disk xml and format return dict
>>>>>>> dom = VMModel.get_vm(vm_name, self.conn)
>>>>>>> - return get_vm_disk(dom, dev_name)
>>>>>>> + disk = get_vm_disk(dom, dev_name, self.conn)
>>>>>>> + # Also include the pool and volume
>>>>>>> + pool_name = ''
>>>>>>> + vol_name = ''
>>>>>>> + try:
>>>>>>> + pool_name = disk.get('pool')
>>>>>>> + vol_name = disk.get('volume')
>>>>>>> + except Exception as e:
>>>>>>> + kimchi_log.warning('VMDisk lookup error: %s' % e.message)
>>>>>>> +
>>>>>>> + disk['pool'] = pool_name
>>>>>>> + disk['volume'] = vol_name
>>>>>>> + return disk
>>>>>>>
>>>>>>> def delete(self, vm_name, dev_name):
>>>>>>> # Get storage device xml
>>>>>>> dom = VMModel.get_vm(vm_name, self.conn)
>>>>>>> + disk_info = None
>>>>>>> + disk_xml = None
>>>>>>> + pool_name = ''
>>>>>>> + vol_name = ''
>>>>>>> try:
>>>>>>> - bus_type = self.lookup(vm_name, dev_name)['bus']
>>>>>>> + disk_info = self.lookup(vm_name, dev_name)
>>>>>>> + bus_type = disk_info['bus']
>>>>>>> except NotFoundError:
>>>>>>> raise
>>>>>>>
>>>>>>> - dom = VMModel.get_vm(vm_name, self.conn)
>>>>>>> if (bus_type not in HOTPLUG_TYPE and
>>>>>>> DOM_STATE_MAP[dom.info()[0]] != 'shutoff'):
>>>>>>> raise InvalidOperation('KCHVMSTOR0011E')
>>>>>>> @@ -224,10 +262,30 @@ def delete(self, vm_name, dev_name):
>>>>>>> except Exception as e:
>>>>>>> raise OperationFailed("KCHVMSTOR0010E", {'error': e.message})
>>>>>>>
>>>>>>> + # Get the pool and vol of a disk so we can
>>>>>>> + # decrement its ref_cnt in the obj store
>>>>>>> + pool_name = disk_info.get('pool')
>>>>>>> + vol_name = disk_info.get('volume')
>>>>>>> +
>>>>>>> + if(pool_name and vol_name):
>>>>>>> + try:
>>>>>>> + vol_id = '%s:%s' % (pool_name, vol_name)
>>>>>>> + with self.objstore as session:
>>>>>>> + vol = session.get('storagevolume', vol_id)
>>>>>>> + if vol:
>>>>>>> + ref_cnt = vol['ref_cnt']
>>>>>>> + if ref_cnt > 0:
>>>>>>> + ref_cnt = ref_cnt - 1
>>>>>>> + session.store('storagevolume', vol_id,
>>>>>>> {'ref_cnt': ref_cnt})
>>>>>>> + except Exception as e:
>>>>>>> + kimchi_log.warning('Unable to decrement volume ref count due to %s.'
>>>>>>> + % e.message)
>>>>>>> +
>>>>>>> def update(self, vm_name, dev_name, params):
>>>>>>> path = params.get('path')
>>>>>>> if path and len(path) != 0:
>>>>>>> params['src_type'] = _check_path(path)
>>>>>>> + # change from 'file' to 'volume' if pool ^^
>>>>>>> ignore_source = False
>>>>>>> else:
>>>>>>> params['src_type'] = 'file'
>>>>>>> diff --git a/src/kimchi/vmdisks.py b/src/kimchi/vmdisks.py
>>>>>>> index 6012ada..185a499 100644
>>>>>>> --- a/src/kimchi/vmdisks.py
>>>>>>> +++ b/src/kimchi/vmdisks.py
>>>>>>> @@ -17,6 +17,8 @@
>>>>>>> # License along with this library; if not, write to the Free Software
>>>>>>> # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
>>>>>>>
>>>>>>> +import libvirt
>>>>>>> +import os
>>>>>>> from lxml import objectify
>>>>>>>
>>>>>>> from kimchi.exception import NotFoundError
>>>>>>> @@ -34,37 +36,56 @@ def get_device_xml(dom, dev_name):
>>>>>>> return None
>>>>>>> return disk[0]
>>>>>>>
>>>>>>> +def get_pool_xml(conn, pool_name):
>>>>>>> + conn = conn.get()
>>>>>>> + return objectify.fromstring(conn.\
>>>>>>> + storagePoolLookupByName(pool_name).XMLDesc(0))
>>>>>>>
>>>>>>> -def get_vm_disk(dom, dev_name):
>>>>>>> +def get_vm_disk(dom, dev_name, conn):
>>>>>>> # Retrieve disk xml and format return dict
>>>>>>> disk = get_device_xml(dom, dev_name)
>>>>>>> if disk is None:
>>>>>>> raise NotFoundError(
>>>>>>> "KCHVMSTOR0007E",
>>>>>>> {'dev_name': dev_name, 'vm_name': dom.name()})
>>>>>>> - path = ""
>>>>>>> + path = ''
>>>>>>> dev_bus = 'ide'
>>>>>>> + pool_name = ''
>>>>>>> + vol_name = ''
>>>>>>> try:
>>>>>>> source = disk.source
>>>>>>> if source is not None:
>>>>>>> - src_type = disk.attrib['type']
>>>>>>> + src_type = disk.get('type')
>>>>>>> if src_type == 'network':
>>>>>>> host = source.host
>>>>>>> path = (source.attrib['protocol'] + '://' +
>>>>>>> host.attrib['name'] + ':' +
>>>>>>> host.attrib['port'] +
>>>>>>> source.attrib['name'])
>>>>>>> + elif src_type == 'volume':
>>>>>>> + pool_name = disk.source.get('pool')
>>>>>>> + pool = get_pool_xml(conn, pool_name)
>>>>>>> + pool_path = str(pool.target.path)
>>>>>>> + vol_name = disk.source.attrib['volume']
>>>>>>> + path = os.path.join(pool_path, vol_name)
>>>>>>> else:
>>>>>>> path = source.attrib[DEV_TYPE_SRC_ATTR_MAP[src_type]]
>>>>>>> # Retrieve storage bus type
>>>>>>> dev_bus = disk.target.attrib['bus']
>>>>>>> - except:
>>>>>>> - pass
>>>>>>> + except Exception as e:
>>>>>>> + kimchi_log.warning("get_vm_disk exception: %s
" %
>>>>>>> e.message)
>>>>>>> +
>>>>>>> dev_type = disk.attrib['device']
>>>>>>> +
>>>>>>> + # Pool and volume may be empty strings, if the user passed a disk name.
>>>>>>> + # If the disk was from a storage pool, let's use that so we can keep track
>>>>>>> + # of its ref_cnt in the object store.
>>>>>>> return {'dev': dev_name,
>>>>>>> 'type': dev_type,
>>>>>>> 'path': path,
>>>>>>> 'format': disk.driver.attrib['type'],
>>>>>>> - 'bus': dev_bus}
>>>>>>> + 'bus': dev_bus,
>>>>>>> + 'pool': pool_name,
>>>>>>> + 'volume': vol_name}
>>>>>>>
>>>>>>>
>>>>>>> def get_vm_disk_list(dom):
>>>>>>> diff --git a/src/kimchi/vmtemplate.py b/src/kimchi/vmtemplate.py
>>>>>>> index 5f22db9..1940911 100644
>>>>>>> --- a/src/kimchi/vmtemplate.py
>>>>>>> +++ b/src/kimchi/vmtemplate.py
>>>>>>> @@ -187,24 +187,41 @@ def _get_cdrom_xml(self, libvirt_stream_protocols, qemu_stream_dns):
>>>>>>> return network_file % (params)
>>>>>>>
>>>>>>> def _get_disks_xml(self, vm_uuid):
>>>>>>> - storage_path = self._get_storage_path()
>>>>>>> + def TYPE(v):
>>>>>>> + return {'type': v}
>>>>>>> +
>>>>>>> + #storage_path = self._get_storage_path()
>>>>>>> + _pool = pool_name_from_uri(self.info['storagepool'])
>>>>>>> ret = ""
>>>>>>> for i, disk in enumerate(self.info['disks']):
>>>>>>> index = disk.get('index', i)
>>>>>>> - volume = "%s-%s.img" % (vm_uuid, index)
>>>>>>> - src = os.path.join(storage_path, volume)
>>>>>>> + _volume = "%s-%s.img" % (vm_uuid, index)
>>>>>>> + #src = os.path.join(storage_path, volume)
>>>>>>> dev = "%s%s" % (self._bus_to_dev[self.info['disk_bus']],
>>>>>>> string.lowercase[index])
>>>>>>> fmt = 'raw' if self._get_storage_type() in ['logical'] else 'qcow2'
>>>>>>> - params = {'src': src, 'dev': dev, 'bus': self.info['disk_bus'],
>>>>>>> - 'type': fmt}
>>>>>>> - ret += """
>>>>>>> - <disk type='file' device='disk'>
>>>>>>> - <driver name='qemu' type='%(type)s' cache='none'/>
>>>>>>> - <source file='%(src)s' />
>>>>>>> - <target dev='%(dev)s' bus='%(bus)s' />
>>>>>>> - </disk>
>>>>>>> - """ % params
>>>>>>> + #params = {'src': src, 'dev': dev, 'bus': self.info['disk_bus'],
>>>>>>> + # 'type': fmt}
>>>>>>> + #ret += """
>>>>>>> + #<disk type='file' device='disk'>
>>>>>>> + # <driver name='qemu' type='%(type)s' cache='none'/>
>>>>>>> + # <source file='%(src)s' />
>>>>>>> + # <target dev='%(dev)s' bus='%(bus)s' />
>>>>>>> + #</disk>
>>>>>>> + #""" % params
>>>>>>> +
>>>>>>> + # Instead do:
>>>>>>> + #<disk type='volume'>
>>>>>>> + # <driver name='qemu' type='%(type)s' cache='none'/>
>>>>>>> + # <source pool='%(pool)s' volume='%(volume)s' />
>>>>>>> + # <target dev='%(dev)s' bus='%(bus)s' />
>>>>>>> + #</disk>
>>>>>>> + disk = E.disk(
>>>>>>> + E.driver(TYPE(fmt), name='qemu', cache='none'),
>>>>>>> + E.source(pool=_pool, volume=_volume),
>>>>>>> + E.target(dev=dev, bus=self.info['disk_bus']),
>>>>>>> + TYPE('volume'))
>>>>>>> + ret += etree.tostring(disk)
>>>>>>> return ret
>>>>>>>
>>>>>>> def _get_graphics_xml(self, params):