On Fri, May 5, 2017 at 9:03 PM, Rogério Ceni Coelho <rogeriocenicoelho(a)gmail.com> wrote:
Hi Yuval,
Yes, it is, but does that mean that every time I need to install a new oVirt
node I will at least need to upgrade the oVirt Engine?
Of course not. You can upgrade the Engine independently from the hosts.
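For a minor Engine update, the usual flow is roughly the following, run on
the Engine machine only (a sketch assuming a standard setup; the hosts are
not touched):

    # on the oVirt Engine machine
    yum update "ovirt-engine-setup*"   # pull in the new setup packages
    engine-setup                       # runs the actual upgrade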
And will I need to upgrade the oVirt nodes that already exist? I have 20
oVirt nodes... so this means a lot of work.
My environment is stable on 4.0.5 and I am happy for now. oVirt is an
excellent product. Thanks for that.
For example, this morning I tried to put an export/import storage domain
into maintenance and an error occurred only on my new oVirt node running
4.0.6.1, and yesterday I lost two days debugging another problem with
network startup with many VLANs on 4.0.6.1 ... :-(
It'd be great if you could start a different thread about it, or file a bug,
with all the details (VDSM and Engine logs attached).
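If it helps, the default locations (assuming a standard setup) are
/var/log/vdsm/vdsm.log on each host and /var/log/ovirt-engine/engine.log on
the Engine machine; for example:

    # on the affected host
    tail -n 1000 /var/log/vdsm/vdsm.log > vdsm-excerpt.log
    # on the Engine machine
    tail -n 1000 /var/log/ovirt-engine/engine.log > engine-excerpt.log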
Y.
StorageDomainDoesNotExist: Storage domain does not exist: (u'f9e051a9-6660-4e49-a3f1-354583501610',)
Thread-12::DEBUG::2017-05-05 10:39:07,473::check::296::storage.check::(_start_process) START check '/dev/c58ce4b0-7145-4cd0-900e-eeb99177a7de/metadata' cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/dd', 'if=/dev/c58ce4b0-7145-4cd0-900e-eeb99177a7de/metadata', 'of=/dev/null', 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00
Thread-12::DEBUG::2017-05-05 10:39:07,524::asyncevent::564::storage.asyncevent::(reap) Process <cpopen.CPopen object at 0x401c790> terminated (count=1)
Thread-12::DEBUG::2017-05-05 10:39:07,525::check::327::storage.check::(_check_completed) FINISH check '/dev/c58ce4b0-7145-4cd0-900e-eeb99177a7de/metadata' rc=0 err=bytearray(b'1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000292117 s, 14.0 MB/s\n') elapsed=0.06
Thread-12::DEBUG::2017-05-05 10:39:07,886::check::296::storage.check::(_start_process) START check u'/rhev/data-center/mnt/corot.rbs.com.br:_u00_oVirt_PRD_ISO__DOMAIN/7b8c9293-f103-401a-93ac-550981837224/dom_md/metadata' cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/dd', u'if=/rhev/data-center/mnt/corot.rbs.com.br:_u00_oVirt_PRD_ISO__DOMAIN/7b8c9293-f103-401a-93ac-550981837224/dom_md/metadata', 'of=/dev/null', 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00
Thread-12::DEBUG::2017-05-05 10:39:07,898::check::296::storage.check::(_start_process) START check u'/rhev/data-center/mnt/dd6701.bkp.srvr.rbs.net:_data_col1_ovirt__prd/db89d5df-00ac-4e58-a7e5-e31272f9ea92/dom_md/metadata' cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/dd', u'if=/rhev/data-center/mnt/dd6701.bkp.srvr.rbs.net:_data_col1_ovirt__prd/db89d5df-00ac-4e58-a7e5-e31272f9ea92/dom_md/metadata', 'of=/dev/null', 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00
Thread-12::DEBUG::2017-05-05 10:39:07,906::check::327::storage.check::(_check_completed) FINISH check u'/rhev/data-center/mnt/corot.rbs.com.br:_u00_oVirt_PRD_ISO__DOMAIN/7b8c9293-f103-401a-93ac-550981837224/dom_md/metadata' rc=0 err=bytearray(b'0+1 records in\n0+1 records out\n386 bytes (386 B) copied, 0.00038325 s, 1.0 MB/s\n') elapsed=0.02
Thread-12::DEBUG::2017-05-05 10:39:07,916::check::296::storage.check::(_start_process) START check u'/rhev/data-center/mnt/vnx01.srv.srvr.rbs.net:_fs__ovirt_prd__data__domain/fdcf130d-53b8-4978-8f97-82f364639b4a/dom_md/metadata' cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/dd', u'if=/rhev/data-center/mnt/vnx01.srv.srvr.rbs.net:_fs__ovirt_prd__data__domain/fdcf130d-53b8-4978-8f97-82f364639b4a/dom_md/metadata', 'of=/dev/null', 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00
Thread-12::DEBUG::2017-05-05 10:39:07,930::check::327::storage.check::(_check_completed) FINISH check u'/rhev/data-center/mnt/dd6701.bkp.srvr.rbs.net:_data_col1_ovirt__prd/db89d5df-00ac-4e58-a7e5-e31272f9ea92/dom_md/metadata' rc=0 err=bytearray(b'0+1 records in\n0+1 records out\n360 bytes (360 B) copied, 0.000363751 s, 990 kB/s\n') elapsed=0.03
Thread-12::DEBUG::2017-05-05 10:39:07,964::asyncevent::564::storage.asyncevent::(reap) Process <cpopen.CPopen object at 0x3a48950> terminated (count=1)
Thread-12::DEBUG::2017-05-05 10:39:07,964::check::327::storage.check::(_check_completed) FINISH check u'/rhev/data-center/mnt/vnx01.srv.srvr.rbs.net:_fs__ovirt_prd__data__domain/fdcf130d-53b8-4978-8f97-82f364639b4a/dom_md/metadata' rc=0 err=bytearray(b'0+1 records in\n0+1 records out\n369 bytes (369 B) copied, 0.000319659 s, 1.2 MB/s\n') elapsed=0.05
Thread-12::DEBUG::2017-05-05 10:39:09,035::check::296::storage.check::(_start_process) START check '/dev/1804d02e-7865-4acd-a04f-8200ac2d2b84/metadata' cmd=['/usr/bin/taskset', '--cpu-list', '0-31', '/usr/bin/dd', 'if=/dev/1804d02e-7865-4acd-a04f-8200ac2d2b84/metadata', 'of=/dev/null', 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00
Thread-12::DEBUG::2017-05-05 10:39:09,084::asyncevent::564::storage.asyncevent::(reap) Process <cpopen.CPopen object at 0x383d690> terminated (count=1)
Thread-12::DEBUG::2017-05-05 10:39:09,085::check::327::storage.check::(_check_completed) FINISH check '/dev/1804d02e-7865-4acd-a04f-8200ac2d2b84/metadata' rc=0 err=bytearray(b'1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000190859 s, 21.5 MB/s\n') elapsed=0.05
Thread-37::DEBUG::2017-05-05 10:39:12,431::monitor::365::Storage.Monitor::(_produceDomain) Producing domain f9e051a9-6660-4e49-a3f1-354583501610
Thread-37::ERROR::2017-05-05 10:39:12,431::sdc::140::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain f9e051a9-6660-4e49-a3f1-354583501610
Thread-37::ERROR::2017-05-05 10:39:12,431::sdc::157::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain f9e051a9-6660-4e49-a3f1-354583501610
Thread-37::DEBUG::2017-05-05 10:39:12,432::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-31 /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/36000144000000010206a1c589de93966|/dev/mapper/36000144000000010206a1c589de939b1|/dev/mapper/36000144000000010206a1c589de93a4d|/dev/mapper/3600601600f7025006828c2f47c7be611|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name f9e051a9-6660-4e49-a3f1-354583501610 (cwd None)
Thread-37::DEBUG::2017-05-05 10:39:12,591::lvm::288::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' WARNING: Not using lvmetad because config setting use_lvmetad=0.\n WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).\n Volume group "f9e051a9-6660-4e49-a3f1-354583501610" not found\n Cannot process volume group f9e051a9-6660-4e49-a3f1-354583501610\n'; <rc> = 5
Thread-37::WARNING::2017-05-05 10:39:12,597::lvm::376::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' WARNING: Not using lvmetad because config setting use_lvmetad=0.', ' WARNING: To avoid corruption, rescan devices to make changes visible (pvscan --cache).', ' Volume group "f9e051a9-6660-4e49-a3f1-354583501610" not found', ' Cannot process volume group f9e051a9-6660-4e49-a3f1-354583501610']
*Thread-37::ERROR::2017-05-05 10:39:12,602::sdc::146::Storage.StorageDomainCache::(_findDomain) domain f9e051a9-6660-4e49-a3f1-354583501610 not found*
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 144, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 174, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'f9e051a9-6660-4e49-a3f1-354583501610',)
Thread-37::ERROR::2017-05-05 10:39:12,602::monitor::328::Storage.Monitor::(_setupLoop) Setting up monitor for f9e051a9-6660-4e49-a3f1-354583501610 failed
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/monitor.py", line 325, in _setupLoop
    self._setupMonitor()
  File "/usr/share/vdsm/storage/monitor.py", line 348, in _setupMonitor
    self._produceDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 405, in wrapper
    value = meth(self, *a, **kw)
  File "/usr/share/vdsm/storage/monitor.py", line 366, in _produceDomain
    self.domain = sdCache.produce(self.sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 101, in produce
    domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 53, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 125, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 144, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 174, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'f9e051a9-6660-4e49-a3f1-354583501610',)
jsonrpc.Executor/3::DEBUG::2017-05-05 10:39:13,057::__init__::530::jsonrpc.JsonRpcServer::(_handle_request) Calling 'Host.getAllVmStats' in bridge with {}
jsonrpc.Executor/3::DEBUG::2017-05-05 10:39:13,081::__init__::555::jsonrpc.JsonRpcServer::(_handle_request) Return 'Host.getAllVmStats' in bridge with (suppressed)
jsonrpc.Executor/3::INFO::2017-05-05 10:39:13,088::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call Host.getAllVmStats succeeded in 0.03 seconds
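Given the "Volume group ... not found" errors above, a quick sanity check on
the node (as root) is to ask LVM and multipath directly whether the domain's
storage is visible at all; the UUID is taken straight from the log, and this
is only a suggested diagnostic, not a fix:

    vgs f9e051a9-6660-4e49-a3f1-354583501610   # is the domain's VG visible?
    multipath -ll                              # is the backing LUN present?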
On Fri, May 5, 2017 at 2:40 PM, Yuval Turgeman <yuvalt(a)redhat.com> wrote:
> I take it updating everything to 4.0.6 is not an option?
>
> Thanks,
> Yuval
>
> On May 5, 2017 6:16 PM, "Rogério Ceni Coelho" <rogeriocenicoelho(a)gmail.com> wrote:
>
>> I can think of two approaches. Please let me know if either has a chance of working.
>>
>> First, download the ovirt-node-ng-image-update and ovirt-node-ng-image
>> packages for the 4.0.5 version and run yum localinstall (see the sketch below).
>>
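>> Something like this, where the exact file names are a guess and should be
>> checked against the 4.0.5 directory on resources.ovirt.org first:
>>
>>     # hypothetical 4.0.5 package files downloaded from resources.ovirt.org
>>     yum localinstall ovirt-node-ng-image-4.0.5-*.noarch.rpm \
>>                      ovirt-node-ng-image-update-4.0.5-*.noarch.rpm
>>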
>> Second, create an RPM list from another 4.0.5 node, diff it against my
>> 4.0.3 node, and use the diff to download the missing packages from the
>> 4.0.5 version and run yum localinstall (rough sketch below).
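>>
>> Roughly, with arbitrary file names:
>>
>>     # on a node already at 4.0.5
>>     rpm -qa | sort > node-405-pkgs.txt
>>     # on the freshly installed 4.0.3 node
>>     rpm -qa | sort > node-403-pkgs.txt
>>     # packages present only on the 4.0.5 node are what needs downloading
>>     diff node-403-pkgs.txt node-405-pkgs.txt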
>>
>> My concern with this is that I cannot upgrade the oVirt Engine every time
>> I need to install a new oVirt node.
>>
>> Thanks.
>>
>> On Fri, May 5, 2017 at 11:26 AM, Rogério Ceni Coelho <rogeriocenicoelho(a)gmail.com> wrote:
>>
>>> Hi oVirt Troopers,
>>>
>>> I have two segregated oVirt clusters running 4.0.5 (DEV and PROD
>>> environments).
>>>
>>> Now I need to install a new oVirt Node server (Dell PowerEdge M620),
>>> but I see that there is no 4.0.5 ISO at
>>> http://resources.ovirt.org/pub/ovirt-4.0/iso/ovirt-node-ng-installer/ ,
>>> only 4.0.3 and 4.0.6.
>>>
>>> How can I install this new server and bring it to the same version as
>>> the other 20???
>>>
>>> Is there a way to install 4.0.3 and then update only to 4.0.5?
>>>
>>> Thanks in advance.
>>>
>>
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users