
On Feb 28, 2014 09:53, Aline Manera wrote:
On 02/27/2014 12:21 AM, Royce Lv wrote:
On Feb 25, 2014 18:29, Sheldon wrote:
The volume ref_cnt is useful for storage pool integrity verification.
But we should be careful with the ref_cnt.
Some questions about this ref_cnt. Only when the ref_cnt of every volume in a storage pool (for all types of pool) is zero:
1. we can delete this storage pool, right?
2. we can deactivate this storage pool, right?

Right for me. An NFS pool is a good example: the NFS mount takes effect on activate and is unmounted on deactivate. So if some VM volume is in such a pool, deactivating or deleting the pool would make the volume inaccessible.
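The rule above (a pool may only be deactivated or deleted when no volume in it is still referenced) can be sketched as a small check. This is a minimal sketch; `pool_is_busy` and the volume dicts are hypothetical illustrations, not kimchi API:

```python
def pool_is_busy(volumes):
    """volumes: list of dicts carrying a 'ref_cnt' key, as in the lookup() result.

    Returns True if any volume is still attached to a VM, in which case the
    pool must not be deactivated or deleted.
    """
    return any(vol['ref_cnt'] > 0 for vol in volumes)


vols = [{'name': 'a.img', 'ref_cnt': 0}, {'name': 'b.img', 'ref_cnt': 2}]
print(pool_is_busy(vols))  # True: b.img is attached to a VM, keep the pool active
```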
When do we decrease the ref_cnt?
1. When a VM is deleted, will the ref_cnt of its attached volumes decrease?
ACK.
2. If a VM is deleted outside the kimchi scope, should we decrease the ref_cnt?

Since we count the ref_cnt when kimchi starts, I assume we can regard that once kimchi takes control, there is no other "brain" controlling the VMs. What do you think? In that case we do not need to care about deletions outside the kimchi scope, as they will not happen after kimchi has started.

You increase the storage volume ref_cnt when a VM is created outside the kimchi scope.

I haven't increased it; I just count the ref_cnt when I encounter a volume from outside the kimchi scope on first lookup.

3. What about a volume detached from a VM inside or outside the kimchi scope?

Same as the above.
And now, can kimchi avoid a ref_cnt race, i.e. when one request reads the ref_cnt while another request decreases or increases it at the same time? I think the objstore makes sure there will be no race.
Unless the objstore has a lock, the race condition will still exist when using it. See objectstore.py:
    class ObjectStore(object):
        def __init__(self, location=None):
            self._lock = threading.Semaphore()  # ---> here is the semaphore
            self._connections = OrderedDict()
            self.location = location or config.get_object_store()
            with self._lock:
                self._init_db()

        def __enter__(self):
            self._lock.acquire()
            return ObjectStoreSession(self._get_conn())

        def __exit__(self, type, value, tb):
            self._lock.release()

Every time we write:

    with self.objstore as session:

we acquire the lock on enter and release it on exit. So as far as I can see, there will be no race when using the objectstore.
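As a standalone illustration of that pattern, here is a minimal sketch (`DemoStore` and `bump` are hypothetical names, not kimchi code). The key point is that the whole read-modify-write happens inside one `with` block, so concurrent updates cannot be lost:

```python
import threading


class DemoStore(object):
    """Hypothetical in-memory stand-in for kimchi's ObjectStore.

    Only the locking pattern matters here: __enter__ acquires the
    semaphore and __exit__ releases it, so each `with` block is a
    critical section.
    """
    def __init__(self):
        self._lock = threading.Semaphore()  # binary semaphore, acts as a mutex
        self._data = {}

    def __enter__(self):
        self._lock.acquire()
        return self._data  # stands in for ObjectStoreSession

    def __exit__(self, type, value, tb):
        self._lock.release()


store = DemoStore()


def bump(key, times):
    # Read-modify-write under the lock: since the get and the store happen
    # inside the same `with` block, no other thread can interleave.
    for _ in range(times):
        with store as session:
            session[key] = session.get(key, 0) + 1


threads = [threading.Thread(target=bump, args=('ref_cnt', 1000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(store._data['ref_cnt'])  # 4000: no lost updates
```

Note that the lock only covers one `with` block at a time; a read in one session followed by a write in a later session would still race, so a counter update must stay inside a single session as above.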
from your code:
+ vms = VMsModel(**args).get_list()
Which is better: VMsModel(**args).get_list() or self.vms_get_list()?
This is a good point. I think "self.vms_get_list" is a better way to refer to it; I will give it a try.
On 02/25/2014 04:35 PM, lvroyce@linux.vnet.ibm.com wrote:
From: Royce Lv <lvroyce@linux.vnet.ibm.com>
Set ref_cnt to 0 when a new volume is created. If the ref_cnt cannot be found, the volume was created outside of the kimchi scope; repair it on storage volume lookup by searching the storage volumes used by VMs and counting the ref_cnt.
Signed-off-by: Royce Lv <lvroyce@linux.vnet.ibm.com>
---
 src/kimchi/mockmodel.py            |  3 +++
 src/kimchi/model/storagevolumes.py | 31 +++++++++++++++++++++++++++++++
 2 files changed, 34 insertions(+)
diff --git a/src/kimchi/mockmodel.py b/src/kimchi/mockmodel.py
index b50bf31..7620d38 100644
--- a/src/kimchi/mockmodel.py
+++ b/src/kimchi/mockmodel.py
@@ -423,6 +423,7 @@ class MockModel(object):
         name = params['name']
         volume = MockStorageVolume(pool, name, params)
         volume.info['type'] = params['type']
+        volume.info['ref_cnt'] = params.get('ref_cnt', 0)
         volume.info['format'] = params['format']
         volume.info['path'] = os.path.join(
             pool.info['path'], name)
@@ -884,6 +885,7 @@ class MockVMTemplate(VMTemplate):
         disk_paths = []
         for vol_info in volumes:
             vol_info['capacity'] = vol_info['capacity'] << 10
+            vol_info['ref_cnt'] = 1
             self.model.storagevolumes_create(pool.name, vol_info)
             disk_paths.append({'pool': pool.name, 'volume': vol_info['name']})
         return disk_paths
@@ -1002,6 +1004,7 @@ class MockStorageVolume(object):
                      'capacity': capacity << 20,
                      'allocation': params.get('allocation', '512'),
                      'path': params.get('path'),
+                     'ref_cnt': params.get('ref_cnt'),
                      'format': fmt}
         if fmt == 'iso':
             self.info['allocation'] = self.info['capacity']
diff --git a/src/kimchi/model/storagevolumes.py b/src/kimchi/model/storagevolumes.py
index 20c65b9..9906f33 100644
--- a/src/kimchi/model/storagevolumes.py
+++ b/src/kimchi/model/storagevolumes.py
@@ -29,6 +29,8 @@ from kimchi.exception import InvalidOperation, IsoFormatError
 from kimchi.exception import MissingParameter, NotFoundError, OperationFailed
 from kimchi.isoinfo import IsoImage
 from kimchi.model.storagepools import StoragePoolModel
+from kimchi.model.vms import VMsModel
+from kimchi.model.vmstorages import VMStoragesModel, VMStorageModel
 VOLUME_TYPE_MAP = {0: 'file',
@@ -40,6 +42,7 @@ VOLUME_TYPE_MAP = {0: 'file',
 class StorageVolumesModel(object):
     def __init__(self, **kargs):
         self.conn = kargs['conn']
+        self.objstore = kargs['objstore']
     def create(self, pool, params):
         vol_xml = """
@@ -58,6 +61,7 @@ class StorageVolumesModel(object):
         params.setdefault('format', 'qcow2')
         name = params['name']
+        vol_id = '%s:%s' % (pool, name)
         try:
             pool = StoragePoolModel.get_storagepool(pool, self.conn)
             xml = vol_xml % params
@@ -71,6 +75,10 @@ class StorageVolumesModel(object):
             raise OperationFailed("KCHVOL0007E",
                                   {'name': name, 'pool': pool,
                                    'err': e.get_error_message()})
+
+        with self.objstore as session:
+            session.store('storagevolume', vol_id, {'ref_cnt': 0})
+
         return name
     def get_list(self, pool_name):
@@ -89,6 +97,7 @@ class StorageVolumesModel(object):
 class StorageVolumeModel(object):
     def __init__(self, **kargs):
         self.conn = kargs['conn']
+        self.objstore = kargs['objstore']
     def _get_storagevolume(self, pool, name):
         pool = StoragePoolModel.get_storagepool(pool, self.conn)
@@ -103,16 +112,38 @@ class StorageVolumeModel(object):
         else:
             raise
+    def _get_ref_cnt(self, pool, name, path):
+        vol_id = '%s:%s' % (pool, name)
+        with self.objstore as session:
+            try:
+                ref_cnt = session.get('storagevolume', vol_id)['ref_cnt']
+            except NotFoundError:
+                # Fix storage volume created outside kimchi scope
+                ref_cnt = 0
+                args = {'conn': self.conn, 'objstore': self.objstore}
+                # try to find this volume in existing vms
+                vms = VMsModel(**args).get_list()
+                for vm in vms:
+                    storages = VMStoragesModel(**args).get_list(vm)
+                    for disk in storages:
+                        if path == VMStorageModel(**args).lookup(vm, disk)['path']:
+                            ref_cnt = ref_cnt + 1
+                session.store('storagevolume', vol_id, {'ref_cnt': ref_cnt})
+
+        return ref_cnt
+
     def lookup(self, pool, name):
         vol = self._get_storagevolume(pool, name)
         path = vol.path()
         info = vol.info()
         xml = vol.XMLDesc(0)
         fmt = xmlutils.xpath_get_text(xml, "/volume/target/format/@type")[0]
+        ref_cnt = self._get_ref_cnt(pool, name, path)
         res = dict(type=VOLUME_TYPE_MAP[info[0]],
                    capacity=info[1],
                    allocation=info[2],
                    path=path,
+                   ref_cnt=ref_cnt,
                    format=fmt)
         if fmt == 'iso':
             if os.path.islink(path):
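The fallback logic in _get_ref_cnt can be sketched standalone. Here the in-memory `store` dict and the `vm_disks` mapping are hypothetical stand-ins for the objstore and the VM storage models; only the counting logic mirrors the patch:

```python
store = {}  # maps 'pool:volume' -> {'ref_cnt': n}, stand-in for the objstore

# hypothetical view of which disk paths each VM currently uses
vm_disks = {
    'vm1': ['/var/lib/libvirt/images/a.img'],
    'vm2': ['/var/lib/libvirt/images/a.img', '/var/lib/libvirt/images/b.img'],
}


def get_ref_cnt(pool, name, path):
    vol_id = '%s:%s' % (pool, name)
    try:
        return store[vol_id]['ref_cnt']
    except KeyError:
        # Volume created outside the kimchi scope: count how many
        # VM disks point at this volume's path, then cache the result.
        ref_cnt = sum(1 for disks in vm_disks.values() if path in disks)
        store[vol_id] = {'ref_cnt': ref_cnt}
        return ref_cnt


print(get_ref_cnt('default', 'a.img', '/var/lib/libvirt/images/a.img'))  # 2
print(get_ref_cnt('default', 'b.img', '/var/lib/libvirt/images/b.img'))  # 1
```

Once the count has been stored, subsequent lookups hit the cached value, which is why deletions outside kimchi after startup are not expected to be an issue.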
_______________________________________________ Kimchi-devel mailing list Kimchi-devel@ovirt.org http://lists.ovirt.org/mailman/listinfo/kimchi-devel