About H.264 encoding in oVirt
by 张 余歌
Hello, my friend. I want to be sure whether oVirt supports H.264 encoding. Does anyone know? Thanks.
Re: [ovirt-users] problem after rebooting the node
by Edward Haas
Based on what I can see, you used OVS as the switch type, and it seems Open
vSwitch (openvswitch) is not properly installed on your host.
Make sure OVS is operational by issuing "ovs-vsctl show".
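For anyone hitting the same failure, the check above can be scripted. A minimal
sketch, assuming a systemd host; the socket path comes from the error message in
this thread, and the service name "openvswitch" is an assumption that may differ
per distribution:

```shell
#!/bin/sh
# Check whether the OVS database socket exists before calling ovs-vsctl.
# The default path matches the one in the ConfigNetworkError above;
# override with DB_SOCK=... if your build uses a different location.
DB_SOCK="${DB_SOCK:-/var/run/openvswitch/db.sock}"

if [ -S "$DB_SOCK" ]; then
    echo "ovsdb socket present: $DB_SOCK"
    ovs-vsctl show    # should print the switch config, not a connection error
else
    echo "ovsdb socket missing: $DB_SOCK"
    echo "try (as root): systemctl enable --now openvswitch"   # assumed service name
fi
```

If the socket is missing, starting the service through systemd (rather than
launching ovsdb-server by hand) also recreates the runtime directory it lives in.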
Note that OVS network support is not an official release feature; you should
use it only on versions 4.1 and up.
Fixes will probably be submitted to master (and appear in nightly builds).
Next time, please include the mailing list in your replies and attach the log
files rather than pasting them inline; it is less noisy for everyone.
Thanks,
Edy.
On Fri, Feb 3, 2017 at 5:07 AM, Shalabh Goel <shalabhgoel13(a)gmail.com>
wrote:
> log from messages
>
> Feb 3 08:27:53 ovirtnode3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
> Feb 3 08:27:53 ovirtnode3 journal: vdsm vds ERROR Executing commands failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)#012Traceback (most recent call last):#012 File "/usr/share/vdsm/API.py", line 1531, in setupNetworks#012 supervdsm.getProxy().setupNetworks(networks, bondings, options)#012 File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in __call__#012 return callMethod()#012 File "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 51, in <lambda>#012 **kwargs)#012 File "<string>", line 2, in setupNetworks#012 File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod#012 raise convert_to_error(kind, result)#012ConfigNetworkError: (21, 'Executing commands failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)')
>
> Log from vdsm.log
>
> Reactor thread::INFO::2017-02-03 08:32:24,638::protocoldetector::72::
> ProtocolDetector.AcceptorImpl::(handle_accept) Accepted connection from
> ::1:53082
> Reactor thread::DEBUG::2017-02-03 08:32:24,643::protocoldetector::91::
> ProtocolDetector.Detector::(__init__) Using required_size=11
> Reactor thread::INFO::2017-02-03 08:32:24,643::protocoldetector::127::
> ProtocolDetector.Detector::(handle_read) Detected protocol xml from
> ::1:53082
> Reactor thread::DEBUG::2017-02-03 08:32:24,643::bindingxmlrpc::
> 1317::XmlDetector::(handle_socket) xml over http detected from ('::1',
> 53082)
> BindingXMLRPC::INFO::2017-02-03 08:32:24,643::xmlrpc::73::vds.XMLRPCServer::(handle_request)
> Starting request handler for ::1:53082
> Thread-20::INFO::2017-02-03 08:32:24,644::xmlrpc::83::vds.
> XMLRPCServer::(_process_requests) Request handler for ::1:53082 started
> Thread-20::DEBUG::2017-02-03 08:32:24,644::bindingxmlrpc::1263::vds::(wrapper)
> client [::1]::call getAllVmStats with () {}
> Thread-20::DEBUG::2017-02-03 08:32:24,644::bindingxmlrpc::1270::vds::(wrapper)
> return getAllVmStats with {'status': {'message': 'Done', 'code': 0},
> 'statsList': (suppressed)}
> Thread-20::INFO::2017-02-03 08:32:24,644::bindingxmlrpc::1297::vds::(wrapper)
> RPC call getAllVmStats finished (code=0) in 0.00 seconds
> Thread-20::INFO::2017-02-03 08:32:24,645::xmlrpc::91::vds.
> XMLRPCServer::(_process_requests) Request handler for ::1:53082 stopped
> periodic/0::DEBUG::2017-02-03 08:32:24,907::sampling::508::virt.sampling.VMBulkSampler::(__call__)
> sampled timestamp 4347942.15 elapsed 0.010 acquired True domains all
> periodic/2::DEBUG::2017-02-03 08:32:24,908::task::599::storage.TaskManager.Task::(_updateState)
> (Task='bb07f89c-6165-42bc-902f-1b7701b813fa') moving from state init -> state preparing
> periodic/2::INFO::2017-02-03 08:32:24,908::logUtils::49::dispatcher::(wrapper)
> Run and protect: repoStats(options=None)
> periodic/2::INFO::2017-02-03 08:32:24,908::logUtils::52::dispatcher::(wrapper)
> Run and protect: repoStats, Return response: {}
> periodic/2::DEBUG::2017-02-03 08:32:24,909::task::1195::
> storage.TaskManager.Task::(prepare) (Task='bb07f89c-6165-42bc-902f-1b7701b813fa')
> finished: {}
> periodic/2::DEBUG::2017-02-03 08:32:24,909::task::599::storage.TaskManager.Task::(_updateState)
> (Task='bb07f89c-6165-42bc-902f-1b7701b813fa') moving from state preparing -> state finished
> periodic/2::DEBUG::2017-02-03 08:32:24,909::resourceManager:
> :910::storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll
> requests {} resources {}
> periodic/2::DEBUG::2017-02-03 08:32:24,909::resourceManager:
> :947::storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests
> {}
> periodic/2::DEBUG::2017-02-03 08:32:24,909::task::997::
> storage.TaskManager.Task::(_decref) (Task='bb07f89c-6165-42bc-902f-1b7701b813fa')
> ref 0 aborting False
> jsonrpc/4::DEBUG::2017-02-03 08:32:26,653::__init__::532::
> jsonrpc.JsonRpcServer::(_handle_request) Calling 'Host.getAllVmStats' in
> bridge with {}
> jsonrpc/4::DEBUG::2017-02-03 08:32:26,653::__init__::557::
> jsonrpc.JsonRpcServer::(_handle_request) Return 'Host.getAllVmStats' in
> bridge with (suppressed)
> jsonrpc/4::INFO::2017-02-03 08:32:26,653::__init__::515::
> jsonrpc.JsonRpcServer::(_serveRequest) RPC call Host.getAllVmStats
> succeeded in 0.00 seconds
> jsonrpc/6::DEBUG::2017-02-03 08:32:27,706::__init__::532::
> jsonrpc.JsonRpcServer::(_handle_request) Calling 'Host.getStats' in
> bridge with {}
> jsonrpc/6::DEBUG::2017-02-03 08:32:27,706::__init__::532::
> jsonrpc.JsonRpcServer::(_handle_request) Calling 'Host.getStats' in
> bridge with {}
> jsonrpc/6::DEBUG::2017-02-03 08:32:27,707::task::599::
> storage.TaskManager.Task::(_updateState) (Task='9285329e-485b-4165-8bdc-4f411f394356')
> moving from state init -> state preparing
> jsonrpc/6::INFO::2017-02-03 08:32:27,707::logUtils::49::dispatcher::(wrapper)
> Run and protect: repoStats(options=None)
> jsonrpc/6::INFO::2017-02-03 08:32:27,707::logUtils::52::dispatcher::(wrapper)
> Run and protect: repoStats, Return response: {}
> jsonrpc/6::DEBUG::2017-02-03 08:32:27,707::task::1195::
> storage.TaskManager.Task::(prepare) (Task='9285329e-485b-4165-8bdc-4f411f394356')
> finished: {}
> jsonrpc/6::DEBUG::2017-02-03 08:32:27,707::task::599::
> storage.TaskManager.Task::(_updateState) (Task='9285329e-485b-4165-8bdc-4f411f394356')
> moving from state preparing -> state finished
> jsonrpc/6::DEBUG::2017-02-03 08:32:27,708::resourceManager:
> :910::storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll
> requests {} resources {}
> jsonrpc/6::DEBUG::2017-02-03 08:32:27,708::resourceManager:
> :947::storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests
> {}
> jsonrpc/6::DEBUG::2017-02-03 08:32:27,708::task::997::
> storage.TaskManager.Task::(_decref) (Task='9285329e-485b-4165-8bdc-4f411f394356')
> ref 0 aborting False
> jsonrpc/6::DEBUG::2017-02-03 08:32:27,712::__init__::557::
> jsonrpc.JsonRpcServer::(_handle_request) Return 'Host.getStats' in bridge
> with {'cpuStatistics': {'28': {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys':
> '0.00', 'cpuIdle': '99.87'}, '29': {'cpuUser': '0.00', 'nodeIndex': 0,
> 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '24': {'cpuUser': '0.00',
> 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '25': {'cpuUser':
> '0.13', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.80'}, '26':
> {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'},
> '27': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle':
> '100.00'}, '20': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00',
> 'cpuIdle': '100.00'}, '21': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys':
> '0.00', 'cpuIdle': '100.00'}, '22': {'cpuUser': '0.00', 'nodeIndex': 1,
> 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '23': {'cpuUser': '0.00',
> 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '46': {'cpuUser':
> '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '47':
> {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'},
> '44': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle':
> '100.00'}, '45': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00',
> 'cpuIdle': '100.00'}, '42': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys':
> '0.00', 'cpuIdle': '100.00'}, '43': {'cpuUser': '0.00', 'nodeIndex': 1,
> 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '40': {'cpuUser': '0.00',
> 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '41': {'cpuUser':
> '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '1':
> {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.80'},
> '0': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle':
> '100.00'}, '3': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.07',
> 'cpuIdle': '99.86'}, '2': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys':
> '0.00', 'cpuIdle': '99.93'}, '5': {'cpuUser': '0.00', 'nodeIndex': 0,
> 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '4': {'cpuUser': '0.00',
> 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.93'}, '7': {'cpuUser':
> '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '6':
> {'cpuUser': '0.20', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '99.80'},
> '9': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle':
> '100.00'}, '8': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00',
> 'cpuIdle': '100.00'}, '39': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys':
> '0.00', 'cpuIdle': '100.00'}, '38': {'cpuUser': '0.00', 'nodeIndex': 1,
> 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '11': {'cpuUser': '0.00',
> 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '10': {'cpuUser':
> '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '13':
> {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'},
> '12': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle':
> '100.00'}, '15': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00',
> 'cpuIdle': '100.00'}, '14': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys':
> '0.00', 'cpuIdle': '100.00'}, '17': {'cpuUser': '0.00', 'nodeIndex': 1,
> 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '16': {'cpuUser': '0.00',
> 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '19': {'cpuUser':
> '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '18':
> {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'},
> '31': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle':
> '100.00'}, '30': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00',
> 'cpuIdle': '100.00'}, '37': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys':
> '0.00', 'cpuIdle': '100.00'}, '36': {'cpuUser': '0.00', 'nodeIndex': 1,
> 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '35': {'cpuUser': '0.00',
> 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '34': {'cpuUser':
> '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '33':
> {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'},
> '32': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle':
> '100.00'}}, 'numaNodeMemFree': {'1': {'memPercent': 3, 'memFree': '63780'},
> '0': {'memPercent': 4, 'memFree': '63426'}}, 'memShared': 0, 'thpState':
> 'always', 'ksmMergeAcrossNodes': True, 'vmCount': 0, 'memUsed': '2',
> 'storageDomains': {}, 'incomingVmMigrations': 0, 'network': {'ib0':
> {'txErrors': '0', 'state': 'down', 'sampleTime': 1486090944.903051, 'name':
> 'ib0', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed':
> '1000', 'rxDropped': '0'}, 'lo': {'txErrors': '0', 'state': 'up',
> 'sampleTime': 1486090944.903051, 'name': 'lo', 'tx': '56174388',
> 'txDropped': '0', 'rx': '56174388', 'rxErrors': '0', 'speed': '1000',
> 'rxDropped': '0'}, 'eno2': {'txErrors': '0', 'state': 'down', 'sampleTime':
> 1486090944.903051, 'name': 'eno2', 'tx': '0', 'txDropped': '0', 'rx': '0',
> 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'eno1': {'txErrors':
> '0', 'state': 'up', 'sampleTime': 1486090944.903051, 'name': 'eno1', 'tx':
> '26032823', 'txDropped': '0', 'rx': '174722400', 'rxErrors': '0', 'speed':
> '1000', 'rxDropped': '5624'}, ';vdsmdummy;': {'txErrors': '0', 'state':
> 'down', 'sampleTime': 1486090944.903051, 'name': ';vdsmdummy;', 'tx': '0',
> 'txDropped': '0', 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped':
> '0'}}, 'txDropped': '0', 'anonHugePages': '34', 'ksmPages': 100,
> 'elapsedTime': '287.81', 'cpuLoad': '0.07', 'cpuSys': '0.00', 'diskStats':
> {'/var/log': {'free': '15072'}, '/var/log/core': {'free': '15072'},
> '/var/run/vdsm/': {'free': '64418'}, '/tmp': {'free': '914756'}},
> 'cpuUserVdsmd': '0.13', 'netConfigDirty': 'True', 'memCommitted': 0,
> 'ksmState': False, 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 127200,
> 'bootTime': '1486037670', 'haStats': {'active': False, 'configured': False,
> 'score': 0, 'localMaintenance': False, 'globalMaintenance': False},
> 'momStatus': 'active', 'rxDropped': '5624', 'outgoingVmMigrations': 0,
> 'swapTotal': 4095, 'swapFree': 4095, 'dateTime': '2017-02-03T03:02:27 GMT',
> 'cpuUser': '0.02', 'memFree': 127456, 'cpuIdle': '99.98', 'vmActive': 0,
> 'v2vJobs': {}, 'cpuSysVdsmd': '0.07'}
> jsonrpc/6::INFO::2017-02-03 08:32:27,713::__init__::515::
> jsonrpc.JsonRpcServer::(_serveRequest) RPC call Host.getStats succeeded
> in 0.00 seconds
> Reactor thread::INFO::2017-02-03 08:32:39,661::protocoldetector::72::
> ProtocolDetector.AcceptorImpl::(handle_accept) Accepted connection from
> ::1:53084
>
> supervdsm.log
>
>
> File "/usr/share/vdsm/supervdsmServer", line 95, in wrapper
> res = func(*args, **kwargs)
> File "/usr/share/vdsm/supervdsmServer", line 143, in umount
> timeout=timeout)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
> 281, in _umount
> _runcmd(cmd, timeout)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py", line
> 297, in _runcmd
> raise MountError(rc, ";".join((out, err)))
> MountError: (32, ';umount: /rhev/data-center/mnt/10.1.235.6:_iso:
> mountpoint not found\n')
> MainProcess|jsonrpc/5::DEBUG::2017-02-03 08:26:59,495::supervdsmServer:
> :93::SuperVdsm.ServerCallback::(wrapper) call hbaRescan with () {}
> MainProcess|jsonrpc/5::DEBUG::2017-02-03 08:26:59,495::commands::69::storage.HBA::(execCmd)
> /usr/bin/taskset --cpu-list 0-47 /usr/libexec/vdsm/fc-scan (cwd None)
> MainProcess|jsonrpc/5::DEBUG::2017-02-03 08:26:59,579::supervdsmServer:
> :100::SuperVdsm.ServerCallback::(wrapper) return hbaRescan with None
> MainThread::DEBUG::2017-02-03 08:27:26,367::supervdsmServer::311::SuperVdsm.Server::(main)
> Terminated normally
> MainThread::DEBUG::2017-02-03 08:27:36,148::__init__::47::
> blivet::(register_device_format) registered device format class
> DeviceFormat as None
> MainThread::DEBUG::2017-02-03 08:27:36,149::__init__::47::
> blivet::(register_device_format) registered device format class BIOSBoot
> as biosboot
> MainThread::DEBUG::2017-02-03 08:27:36,150::__init__::47::
> blivet::(register_device_format) registered device format class DiskLabel
> as disklabel
> MainThread::DEBUG::2017-02-03 08:27:36,150::__init__::47::
> blivet::(register_device_format) registered device format class
> DMRaidMember as dmraidmember
> MainThread::DEBUG::2017-02-03 08:27:36,151::__init__::47::
> blivet::(register_device_format) registered device format class Ext2FS as
> ext2
> MainThread::DEBUG::2017-02-03 08:27:36,152::__init__::47::
> blivet::(register_device_format) registered device format class Ext3FS as
> ext3
> MainThread::DEBUG::2017-02-03 08:27:36,152::__init__::47::
> blivet::(register_device_format) registered device format class Ext4FS as
> ext4
> MainThread::DEBUG::2017-02-03 08:27:36,152::__init__::47::
> blivet::(register_device_format) registered device format class FATFS as
> vfat
> MainThread::DEBUG::2017-02-03 08:27:36,152::__init__::47::
> blivet::(register_device_format) registered device format class EFIFS as
> efi
> MainThread::DEBUG::2017-02-03 08:27:36,152::__init__::47::
> blivet::(register_device_format) registered device format class BTRFS as
> btrfs
> MainThread::DEBUG::2017-02-03 08:27:36,153::__init__::47::
> blivet::(register_device_format) registered device format class GFS2 as
> gfs2
> MainThread::DEBUG::2017-02-03 08:27:36,153::__init__::47::
> blivet::(register_device_format) registered device format class JFS as jfs
> MainThread::DEBUG::2017-02-03 08:27:36,153::__init__::47::
> blivet::(register_device_format) registered device format class ReiserFS
> as reiserfs
> MainThread::DEBUG::2017-02-03 08:27:36,153::__init__::47::
> blivet::(register_device_format) registered device format class XFS as xfs
> MainThread::DEBUG::2017-02-03 08:27:36,153::__init__::47::
> blivet::(register_device_format) registered device format class HFS as hfs
> MainThread::DEBUG::2017-02-03 08:27:36,153::__init__::47::
> blivet::(register_device_format) registered device format class
> AppleBootstrapFS as appleboot
> MainThread::DEBUG::2017-02-03 08:27:36,154::__init__::47::
> blivet::(register_device_format) registered device format class HFSPlus
> as hfs+
> MainThread::DEBUG::2017-02-03 08:27:36,154::__init__::47::
> blivet::(register_device_format) registered device format class MacEFIFS
> as macefi
> MainThread::DEBUG::2017-02-03 08:27:36,154::__init__::47::
> blivet::(register_device_format) registered device format class NTFS as
> ntfs
> MainThread::DEBUG::2017-02-03 08:27:36,154::__init__::47::
> blivet::(register_device_format) registered device format class NFS as nfs
> MainThread::DEBUG::2017-02-03 08:27:36,154::__init__::47::
> blivet::(register_device_format) registered device format class NFSv4 as
> nfs4
> MainThread::DEBUG::2017-02-03 08:27:36,154::__init__::47::
> blivet::(register_device_format) registered device format class Iso9660FS
> as iso9660
> MainThread::DEBUG::2017-02-03 08:27:36,154::__init__::47::
> blivet::(register_device_format) registered device format class NoDevFS
> as nodev
> MainThread::DEBUG::2017-02-03 08:27:36,155::__init__::47::
> blivet::(register_device_format) registered device format class DevPtsFS
> as devpts
> MainThread::DEBUG::2017-02-03 08:27:36,155::__init__::47::
> blivet::(register_device_format) registered device format class ProcFS as
> proc
> MainThread::DEBUG::2017-02-03 08:27:36,155::__init__::47::
> blivet::(register_device_format) registered device format class SysFS as
> sysfs
> MainThread::DEBUG::2017-02-03 08:27:36,155::__init__::47::
> blivet::(register_device_format) registered device format class TmpFS as
> tmpfs
> MainThread::DEBUG::2017-02-03 08:27:36,155::__init__::47::
> blivet::(register_device_format) registered device format class BindFS as
> bind
> MainThread::DEBUG::2017-02-03 08:27:36,155::__init__::47::
> blivet::(register_device_format) registered device format class SELinuxFS
> as selinuxfs
> MainThread::DEBUG::2017-02-03 08:27:36,155::__init__::47::
> blivet::(register_device_format) registered device format class USBFS as
> usbfs
> MainThread::DEBUG::2017-02-03 08:27:36,155::__init__::47::
> blivet::(register_device_format) registered device format class EFIVarFS
> as efivarfs
> MainThread::DEBUG::2017-02-03 08:27:36,179::__init__::47::
> blivet::(register_device_format) registered device format class LUKS as
> luks
> MainThread::DEBUG::2017-02-03 08:27:36,181::__init__::47::
> blivet::(register_device_format) registered device format class
> LVMPhysicalVolume as lvmpv
> MainThread::DEBUG::2017-02-03 08:27:36,181::__init__::47::
> blivet::(register_device_format) registered device format class
> MDRaidMember as mdmember
> MainThread::DEBUG::2017-02-03 08:27:36,181::__init__::47::
> blivet::(register_device_format) registered device format class
> MultipathMember as multipath_member
> MainThread::DEBUG::2017-02-03 08:27:36,181::__init__::47::
> blivet::(register_device_format) registered device format class
> PPCPRePBoot as prepboot
> MainThread::DEBUG::2017-02-03 08:27:36,182::__init__::47::
> blivet::(register_device_format) registered device format class SwapSpace
> as swap
> MainThread::DEBUG::2017-02-03 08:27:36,216::storage_log::69::blivet::(log_exception_info)
> IGNORED: Caught exception, continuing.
> MainThread::DEBUG::2017-02-03 08:27:36,216::storage_log::72::blivet::(log_exception_info)
> IGNORED: Problem description: failed to get initiator name from
> iscsi firmware
> MainThread::DEBUG::2017-02-03 08:27:36,216::storage_log::73::blivet::(log_exception_info)
> IGNORED: Begin exception details.
> MainThread::DEBUG::2017-02-03 08:27:36,216::storage_log::76::blivet::(log_exception_info)
> IGNORED: Traceback (most recent call last):
> MainThread::DEBUG::2017-02-03 08:27:36,216::storage_log::76::blivet::(log_exception_info)
> IGNORED: File "/usr/lib/python2.7/site-packages/blivet/iscsi.py", line 146, in __init__
> MainThread::DEBUG::2017-02-03 08:27:36,216::storage_log::76::blivet::(log_exception_info)
> IGNORED: initiatorname = libiscsi.get_firmware_initiator_name()
> MainThread::DEBUG::2017-02-03 08:27:36,216::storage_log::76::blivet::(log_exception_info)
> IGNORED: IOError: Unknown error
> MainThread::DEBUG::2017-02-03 08:27:36,216::storage_log::77::blivet::(log_exception_info)
> IGNORED: End exception details.
> MainThread::DEBUG::2017-02-03 08:27:36,227::supervdsmServer::271::SuperVdsm.Server::(main)
> Making sure I'm root - SuperVdsm
> MainThread::DEBUG::2017-02-03 08:27:36,227::supervdsmServer::280::SuperVdsm.Server::(main)
> Parsing cmd args
> MainThread::DEBUG::2017-02-03 08:27:36,227::supervdsmServer::283::SuperVdsm.Server::(main)
> Cleaning old socket /var/run/vdsm/svdsm.sock
> MainThread::DEBUG::2017-02-03 08:27:36,227::supervdsmServer::287::SuperVdsm.Server::(main)
> Setting up keep alive thread
> MainThread::DEBUG::2017-02-03 08:27:36,228::supervdsmServer::293::SuperVdsm.Server::(main)
> Creating remote object manager
> MainThread::DEBUG::2017-02-03 08:27:36,228::fileUtils::246::storage.fileUtils::(chown)
> Changing owner for /var/run/vdsm/svdsm.sock, to (36:36)
> MainThread::DEBUG::2017-02-03 08:27:36,228::supervdsmServer::303::SuperVdsm.Server::(main)
> Started serving super vdsm object
> sourceRoute::DEBUG::2017-02-03 08:27:36,229::
> sourceroutethread::79::root::(_subscribeToInotifyLoop) sourceRouteThread.subscribeToInotifyLoop
> started
> restore-net::INFO::2017-02-03 08:27:37,160::vdsm-restore-
> net-config::465::root::(restore) networks already restored. doing nothing.
> MainProcess|hsm/init::DEBUG::2017-02-03 08:27:39,601::supervdsmServer:
> :93::SuperVdsm.ServerCallback::(wrapper) call hbaRescan with () {}
> MainProcess|hsm/init::DEBUG::2017-02-03 08:27:39,602::commands::69::storage.HBA::(execCmd)
> /usr/bin/taskset --cpu-list 0-47 /usr/libexec/vdsm/fc-scan (cwd None)
> MainProcess|hsm/init::DEBUG::2017-02-03 08:27:39,691::supervdsmServer:
> :100::SuperVdsm.ServerCallback::(wrapper) return hbaRescan with None
> MainProcess|jsonrpc/4::DEBUG::2017-02-03 08:27:48,355::supervdsmServer:
> :93::SuperVdsm.ServerCallback::(wrapper) call network_caps with () {}
> MainProcess|jsonrpc/4::DEBUG::2017-02-03 08:27:48,360::commands::69::root::(execCmd)
> /usr/bin/taskset --cpu-list 0-47 /sbin/ip route show to 0.0.0.0/0 table
> main (cwd None)
> MainProcess|jsonrpc/4::DEBUG::2017-02-03 08:27:48,364::commands::93::root::(execCmd)
> SUCCESS: <err> = ''; <rc> = 0
> MainProcess|jsonrpc/4::DEBUG::2017-02-03 08:27:48,408::
> libvirtconnection::160::root::(get) trying to connect libvirt
> MainProcess|jsonrpc/4::DEBUG::2017-02-03 08:27:48,450::commands::69::root::(execCmd)
> /usr/bin/taskset --cpu-list 0-47 /usr/sbin/tc qdisc show (cwd None)
> MainProcess|jsonrpc/4::DEBUG::2017-02-03 08:27:48,453::commands::93::root::(execCmd)
> SUCCESS: <err> = ''; <rc> = 0
> MainProcess|jsonrpc/4::DEBUG::2017-02-03 08:27:48,454::commands::69::root::(execCmd)
> /usr/bin/taskset --cpu-list 0-47 /bin/systemctl --no-pager list-unit-files (cwd None)
> MainProcess|jsonrpc/4::DEBUG::2017-02-03 08:27:48,530::commands::93::root::(execCmd)
> SUCCESS: <err> = ''; <rc> = 0
> MainProcess|jsonrpc/4::DEBUG::2017-02-03 08:27:48,530::commands::69::root::(execCmd)
> /usr/bin/taskset --cpu-list 0-47 /bin/systemctl status openvswitch.service (cwd None)
> MainProcess|jsonrpc/4::DEBUG::2017-02-03 08:27:48,561::commands::93::root::(execCmd)
> FAILED: <err> = ''; <rc> = 3
> MainProcess|jsonrpc/4::DEBUG::2017-02-03 08:27:48,561::supervdsmServer::100::SuperVdsm.ServerCallback::(wrapper)
> return network_caps with {'bridges': {}, 'bondings': {}, 'nameservers': ['10.1.1.8'],
> 'nics': {'ib0': {'ipv6autoconf': True, 'addr': '', 'ipv6gateway': '::', 'dhcpv6': False,
> 'ipv6addrs': [], 'mtu': '4092', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute': False,
> 'ipv4addrs': [], 'hwaddr': '80:00:02:08:fe:80:00:00:00:00:00:00:0c:c4:7a:ff:ff:0f:d8:85',
> 'speed': 0, 'gateway': ''}, 'eno1': {'ipv6autoconf': False, 'addr': '10.1.235.5',
> 'ipv6gateway': '::', 'dhcpv6': False, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False,
> 'netmask': '255.255.0.0', 'ipv4defaultroute': True, 'ipv4addrs': ['10.1.235.5/16'],
> 'hwaddr': '0c:c4:7a:0f:6b:2a', 'speed': 1000, 'gateway': '10.1.0.1'}, 'eno2':
> {'ipv6autoconf': True, 'addr': '', 'ipv6gateway': '::', 'dhcpv6': False, 'ipv6addrs': [],
> 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute': False, 'ipv4addrs': [],
> 'hwaddr': '0c:c4:7a:0f:6b:2b', 'speed': 0, 'gateway': ''}}, 'supportsIPv6': True,
> 'vlans': {}, 'networks': {}}
> MainProcess|jsonrpc/5::DEBUG::2017-02-03 08:27:53,690::supervdsmServer::93::SuperVdsm.ServerCallback::(wrapper)
> call setupNetworks with ({}, {u'connectivityCheck': u'true', u'connectivityTimeout': 120}) {}
> MainProcess|jsonrpc/5::DEBUG::2017-02-03 08:27:53,691::api::247::root::(setupNetworks)
> Setting up network according to configuration: networks:{u'ovirtmgmt':
> {u'ipv6autoconf': False, u'nic': u'eno1', u'ipaddr': u'10.1.235.5',
> u'netmask': u'255.255.0.0', u'mtu': 1500, u'switch': u'ovs', u'dhcpv6':
> False, u'STP': u'no', u'bridged': u'true', u'gateway': u'10.1.0.1',
> u'defaultRoute': True}}, bondings:{}, options:{u'connectivityCheck':
> u'true', u'connectivityTimeout': 120}
> MainProcess|jsonrpc/5::DEBUG::2017-02-03 08:27:53,691::api::252::root::(setupNetworks)
> Validating configuration
> MainProcess|jsonrpc/5::DEBUG::2017-02-03 08:27:53,694::commands::69::root::(execCmd)
> /usr/bin/taskset --cpu-list 0-47 /sbin/ip route show to 0.0.0.0/0 table
> main (cwd None)
> MainProcess|jsonrpc/5::DEBUG::2017-02-03 08:27:53,698::commands::93::root::(execCmd)
> SUCCESS: <err> = ''; <rc> = 0
> MainProcess|jsonrpc/5::DEBUG::2017-02-03 08:27:53,771::commands::69::root::(execCmd)
> /usr/bin/taskset --cpu-list 0-47 /usr/sbin/tc qdisc show (cwd None)
> MainProcess|jsonrpc/5::DEBUG::2017-02-03 08:27:53,774::commands::93::root::(execCmd)
> SUCCESS: <err> = ''; <rc> = 0
> MainProcess|jsonrpc/5::DEBUG::2017-02-03 08:27:53,774::vsctl::57::root::(commit)
> Executing commands: /usr/bin/ovs-vsctl --oneline --format=json -- list
> Bridge -- list Port -- list Interface
> MainProcess|jsonrpc/5::DEBUG::2017-02-03 08:27:53,774::commands::69::root::(execCmd)
> /usr/bin/taskset --cpu-list 0-47 /usr/bin/ovs-vsctl --oneline --format=json
> -- list Bridge -- list Port -- list Interface (cwd None)
> MainProcess|jsonrpc/5::DEBUG::2017-02-03 08:27:53,807::commands::93::root::(execCmd)
> FAILED: <err> = 'ovs-vsctl: unix:/var/run/openvswitch/db.sock: database
> connection failed (No such file or directory)\n'; <rc> = 1
> MainProcess|jsonrpc/5::ERROR::2017-02-03 08:27:53,808::supervdsmServer::97::SuperVdsm.ServerCallback::(wrapper)
> Error in setupNetworks
> Traceback (most recent call last):
> File "/usr/share/vdsm/supervdsmServer", line 95, in wrapper
> res = func(*args, **kwargs)
> File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 254,
> in setupNetworks
> netswitch.validate(networks, bondings)
> File "/usr/lib/python2.7/site-packages/vdsm/network/netswitch.py", line
> 121, in validate
> ovs_switch.validate_network_setup(ovs_nets, ovs_bonds)
> File "/usr/lib/python2.7/site-packages/vdsm/network/ovs/switch.py",
> line 40, in validate_network_setup
> ovs_networks = info.create_netinfo(info.OvsInfo())['networks']
> File "/usr/lib/python2.7/site-packages/vdsm/network/ovs/info.py", line
> 73, in __init__
> ovs_db = OvsDB(driver.create())
> File "/usr/lib/python2.7/site-packages/vdsm/network/ovs/info.py", line
> 64, in __init__
> transaction.add(ifaces_command)
> File "/usr/lib/python2.7/site-packages/vdsm/network/ovs/driver/__init__.py",
> line 54, in __exit__
> self.result = self.commit()
> File "/usr/lib/python2.7/site-packages/vdsm/network/ovs/driver/vsctl.py",
> line 63, in commit
> 'Executing commands failed: %s' % '\n'.join(err))
> ConfigNetworkError: (21, 'Executing commands failed: ovs-vsctl:
> unix:/var/run/openvswitch/db.sock: database connection failed (No such
> file or directory)')
> MainProcess|Thread-2::DEBUG::2017-02-03 08:27:54,440::supervdsmServer:
> :93::SuperVdsm.ServerCallback::(wrapper) call ksmTune with () {}
> MainProcess|Thread-2::DEBUG::2017-02-03 08:27:54,440::supervdsmServer:
> :100::SuperVdsm.ServerCallback::(wrapper) return ksmTune with None
> MainProcess|jsonrpc/0::DEBUG::2017-02-03 08:30:00,918::supervdsmServer:
> :93::SuperVdsm.ServerCallback::(wrapper) call network_caps with () {}
> MainProcess|jsonrpc/0::DEBUG::2017-02-03 08:30:00,922::commands::69::root::(execCmd)
> /usr/bin/taskset --cpu-list 0-47 /sbin/ip route show to 0.0.0.0/0 table
> main (cwd None)
> MainProcess|jsonrpc/0::DEBUG::2017-02-03 08:30:00,927::commands::93::root::(execCmd)
> SUCCESS: <err> = ''; <rc> = 0
> MainProcess|jsonrpc/0::DEBUG::2017-02-03 08:30:01,013::commands::69::root::(execCmd)
> /usr/bin/taskset --cpu-list 0-47 /usr/sbin/tc qdisc show (cwd None)
> MainProcess|jsonrpc/0::DEBUG::2017-02-03 08:30:01,016::commands::93::root::(execCmd)
> SUCCESS: <err> = ''; <rc> = 0
> MainProcess|jsonrpc/0::DEBUG::2017-02-03 08:30:01,017::supervdsmServer:
> :100::SuperVdsm.ServerCallback::(wrapper) return network_caps with
> {'bridges': {}, 'bondings': {}, 'nameservers': ['10.1.1.8'], 'nics':
> {'ib0': {'ipv6autoconf': True, 'addr': '', 'ipv6gateway': '::', 'dhcpv6':
> False, 'ipv6addrs': [], 'mtu': '4092', 'dhcpv4': False, 'netmask': '',
> 'ipv4defaultroute': False, 'ipv4addrs': [], 'hwaddr':
> '80:00:02:08:fe:80:00:00:00:00:00:00:0c:c4:7a:ff:ff:0f:d8:85', 'speed':
> 0, 'gateway': ''}, 'eno1': {'ipv6autoconf': False, 'addr': '10.1.235.5',
> 'ipv6gateway': '::', 'dhcpv6': False, 'ipv6addrs': [], 'mtu': '1500',
> 'dhcpv4': False, 'netmask': '255.255.0.0', 'ipv4defaultroute': True,
> 'ipv4addrs': ['10.1.235.5/16'], 'hwaddr': '0c:c4:7a:0f:6b:2a', 'speed':
> 1000, 'gateway': '10.1.0.1'}, 'eno2': {'ipv6autoconf': True, 'addr': '',
> 'ipv6gateway': '::', 'dhcpv6': False, 'ipv6addrs': [], 'mtu': '1500',
> 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute': False, 'ipv4addrs': [],
> 'hwaddr': '0c:c4:7a:0f:6b:2b', 'speed': 0, 'gateway': ''}}, 'supportsIPv6':
> True, 'vlans': {}, 'networks': {}}
> MainProcess|jsonrpc/1::DEBUG::2017-02-03 08:30:01,036::supervdsmServer:
> :93::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
> MainProcess|jsonrpc/1::DEBUG::2017-02-03 08:30:01,066::supervdsmServer:
> :100::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with
> {'systemProductName': 'X9DRT', 'systemSerialNumber': '0123456789',
> 'systemFamily': 'To be filled by O.E.M.', 'systemVersion': '0123456789',
> 'systemUUID': '00000000-0000-0000-0000-0CC47A0F6B2A',
> 'systemManufacturer': 'Supermicro'}
> (END)
>
>
>
> On Fri, Feb 3, 2017 at 12:05 AM, Edward Haas <ehaas(a)redhat.com> wrote:
>
>> Hello Shalabh,
>>
>> Please provide the logs from your node:
>> - messages
>> - vdsm/vdsm.log, vdsm/supervdsm.log
>>
>> It may be that you are missing openvswitch installed, although VDSM
>> should not require it for its operation.
>>
>> Thanks,
>> Edy.
>>
>>
>> On Thu, Feb 2, 2017 at 2:10 PM, Shalabh Goel <shalabhgoel13(a)gmail.com>
>> wrote:
>>
>>> HI,
>>>
>>> I am getting the following error on my node after rebooting it.
>>>
>>> VDSM ovirtnode2 command HostSetupNetworksVDS failed: Executing commands
>>> failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database
>>> connection failed (No such file or directory)
>>>
>>>
>>> To solve this, I am trying to restart ovsdb-server using the following
>>> command,
>>>
>>> ovsdb-server --remote=punix:/var/run/openvswitch/db.sock
>>> --remote=db:Open_vSwitch,Open_vSwitch,manager_options
>>> --private-key=db:Open_vSwitch,SSL,private_key--certificate=db:Open_vSwitch,SSL,certificate
>>> --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach
>>>
>>> But I am getting the following error.
>>>
>>> ovsdb-server: /var/run/openvswitch/ovsdb-server.pid.tmp: create failed
>>> (No such file or directory)
>>>
>>> How do I restart the ovsdb-server? Also, the ovirtmgmt network is missing
>>> from my node. This happened after I rebooted the node following its upgrade
>>> to oVirt 4.1.
>>>
>>> --
>>> Shalabh Goel
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>
>
>
> --
> Shalabh Goel
>
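The "database connection failed (No such file or directory)" failures in this thread all trace back to a missing ovsdb-server control socket, which usually means the openvswitch service is not running on the host. As an illustrative sketch only (the function name is invented here; the socket path is the one from the error message), a pre-check could look like:

```python
import os

# Socket path taken from the error message in the thread above.
OVS_DB_SOCK = "/var/run/openvswitch/db.sock"

def ovs_db_ready(sock_path=OVS_DB_SOCK):
    """Return True if the ovsdb-server control socket exists.

    ovs-vsctl reports 'database connection failed (No such file or
    directory)' when this socket is absent, which usually means the
    openvswitch service (ovsdb-server) is not running on the host.
    """
    return os.path.exists(sock_path)
```

Running such a check before any ovs-vsctl call makes the "service not started" case explicit instead of surfacing as a cryptic database error.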
7 years, 9 months
Hosted engine errors after 4.1 upgrade.
by Todd Punderson
I'm seeing issues with my hosted engine; it seems it's unable to extract
vm.conf from storage. My ovirt-hosted-engine-ha/agent.log is full of the
following, repeating over and over. This is happening on all 3 of my hosts. My
storage is GlusterFS on the hosts themselves.
Hopefully this is enough info to get started.
Thanks!
MainThread::INFO::2017-02-07
19:27:33,063::hosted_engine::612::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm)
Initializing VDSM
MainThread::INFO::2017-02-07
19:27:35,455::hosted_engine::639::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Connecting the storage
MainThread::INFO::2017-02-07
19:27:35,456::storage_server::219::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Connecting storage server
MainThread::INFO::2017-02-07
19:27:40,169::storage_server::226::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Connecting storage server
MainThread::INFO::2017-02-07
19:27:40,202::storage_server::233::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server)
Refreshing the storage domain
MainThread::INFO::2017-02-07
19:27:40,418::hosted_engine::666::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Preparing images
MainThread::INFO::2017-02-07
19:27:40,419::image::126::ovirt_hosted_engine_ha.lib.image.Image::(prepare_images)
Preparing images
MainThread::INFO::2017-02-07
19:27:43,370::hosted_engine::669::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_storage_images)
Reloading vm.conf from the shared storage domain
MainThread::INFO::2017-02-07
19:27:43,371::config::206::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
Trying to get a fresher copy of vm configuration from the OVF_STORE
MainThread::INFO::2017-02-07
19:27:45,968::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
Found OVF_STORE: imgUUID:3e14c1b5-5ade-4827-aad4-66c59824acd2,
volUUID:3cbeeb3b-f755-4d42-a654-8dab34213792
MainThread::INFO::2017-02-07
19:27:46,257::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan)
Found OVF_STORE: imgUUID:9b49968b-5a62-4ab2-a2c5-b94bc0b2d3ce,
volUUID:8f4d69c5-73a7-4e8c-a58f-909b55efec7d
MainThread::INFO::2017-02-07
19:27:46,355::ovf_store::112::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2017-02-07
19:27:46,366::ovf_store::119::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
OVF_STORE volume path:
/rhev/data-center/mnt/glusterSD/ovirt01-gluster.doonga.org:
_engine/536cd721-4396-4029-b1ea-8ce84738137e/images/9b49968b-5a62-4ab2-a2c5-b94bc0b2d3ce/8f4d69c5-73a7-4e8c-a58f-909b55efec7d
MainThread::ERROR::2017-02-07
19:27:46,389::ovf_store::124::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
Unable to extract HEVM OVF
MainThread::ERROR::2017-02-07
19:27:46,390::config::235::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file)
Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf
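The "Unable to extract HEVM OVF" error above occurs while reading the OVF_STORE volume whose path is printed just before it. As an illustrative sketch only — assuming, as the hosted-engine agent's own reader does, that the volume is a tar archive with '<uuid>.ovf' members; the function name is invented here — the extraction step amounts to:

```python
import tarfile

def extract_engine_ovf(volume_path, vm_uuid):
    """Return the engine VM's OVF text from an OVF_STORE volume, or None.

    Sketch under the assumption that the OVF_STORE volume is a tar
    archive whose members are named '<uuid>.ovf'.  A None result
    corresponds to the 'Unable to extract HEVM OVF' error in the log.
    """
    wanted = "%s.ovf" % vm_uuid
    try:
        with tarfile.open(volume_path, "r") as tar:
            for member in tar.getmembers():
                if member.name == wanted:
                    extracted = tar.extractfile(member)
                    if extracted is not None:
                        return extracted.read().decode("utf-8")
    except tarfile.ReadError:
        # Volume is not a readable tar archive (e.g. not yet populated).
        return None
    return None
```

A failure at this step therefore points either at an unreadable/empty OVF_STORE volume on the Gluster mount or at a missing member for the engine VM's UUID.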
Ovirt console ticket time threshold and password
by rightkicktech.gmail.com
Hi all,
Is there a standard, recommended way to alter the default 120-second ticket validity set on the SPICE console?
Also, can the password be disabled if needed? There are several hacks floating around, but none seems clean.
Thanx,
Alex
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
Re: [ovirt-users] How to update ovirt ng nodes
by Massimo Mad
Hi David,
I tried to use yum update, but I get this error:
yum update
Loaded plugins: fastestmirror, imgbased-warning
Warning: yum operations are not persisted across upgrades!
The problem is that on the server I have these repositories:
ovirt-4.0.repo
ovirt-4.0-dependencies.repo
With these repositories it is only possible to upgrade oVirt between minor
releases, for example from 4.0.1 to 4.0.6. I want to upgrade the host from
4.0 to 4.1.
ovirt-engine-notifier tuning?
by Nicolás
Hi,
Is there a way to configure how often the ovirt-engine-notifier
re-notifies about the same issue? We recently had a storage domain
running low on space, and an email was sent roughly every 15 minutes.
Is there a way to increase this notification interval?
This is oVirt 4.0.4-4.
Thanks.
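The thread does not name a per-issue repeat-interval setting. Purely as an illustration of the behaviour being asked for — not the notifier's actual configuration or code; class and key names are invented — the suppression logic amounts to:

```python
import time

class RepeatSuppressor:
    """Suppress repeat notifications for the same event key.

    Illustrative only: mimics raising the ~15-minute repeat interval
    for a recurring alert (e.g. a low-space storage domain) to a
    longer window; not the ovirt-engine-notifier's actual mechanism.
    """

    def __init__(self, min_interval_seconds, clock=time.time):
        self.min_interval = min_interval_seconds
        self.clock = clock
        self._last_sent = {}  # event key -> timestamp of last email

    def should_send(self, event_key):
        now = self.clock()
        last = self._last_sent.get(event_key)
        if last is not None and now - last < self.min_interval:
            return False  # same issue was notified too recently
        self._last_sent[event_key] = now
        return True
```

With a one-hour window, a "storage-low" event repeating every 15 minutes would produce one email per hour instead of four.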
UPDATED Invitation: Deep Dive: virt-sparsify @ Wed 2017-02-08 17:00 - 17:30 (IST)
by Shmuel Melamud
Hi!
I invite you to Deep Dive session dedicated to the new virt-sparsify
feature of oVirt 4.1.
When: Wed 2017-02-08 17:00 - 17:30 (IST)
Join here: https://youtu.be/ayseKlGLwHI (NOTE location changed)
Virt-sparsify is a new feature in oVirt 4.1. It allows removing unused
space from a disk image and returning it to the storage.
We will learn why this feature is needed and how to use it from a user's
perspective. After that, we will look deeper into the implementation and
explain the prerequisites of the virt-sparsify operation.
Shmuel
USB 2.0 compatibility
by Jonathan Woytek
Hello all. Please forgive me if this is covered officially somewhere, but
my google-fu has been unable to help me find much recent information. I
have a couple of devices (RTL-SDRs) that require USB 2.0, but passthrough
seems to only support a virtual 1.1 hub for devices. The devices
are correctly recognized and configured by the hypervisor host, but when
connected to a VM, they show up on a 1.1 hub, and the driver fails to load
as a result.
I have found some older information that only USB 1.1 devices were
supported via USB passthrough. I haven't found much newer information,
though, and also couldn't find newer problem reports.
I'm running 4.1, upgraded from a 4.0 installation, on Centos 7.
I was hoping to build a little RF capture and analysis cluster using
several of these devices, but this is a bit of a roadblock! Any ideas?
Jonathan
--
Sent from my Commodore64