Which log files exactly? I am new to oVirt, so it would be really
helpful if you could tell me which ones.
Thanks
On Mon, Feb 6, 2017 at 4:39 PM, Edward Haas <ehaas(a)redhat.com> wrote:
Please package the logs (tar or zip) and send them.
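One way to package them (a sketch only; the log paths below are the usual default vdsm and Open vSwitch locations, not something Edward specified, so adjust them to your install):

```shell
#!/bin/sh
# Collect the usual oVirt host logs into one tarball for the mailing list.
# Paths are the default locations on a stock host; change them if yours differ.
OUT=/tmp/ovirt-host-logs.tar.gz
TMP=$(mktemp -d)
for f in /var/log/vdsm/vdsm.log \
         /var/log/vdsm/supervdsm.log \
         /var/log/openvswitch/ovsdb-server.log \
         /var/log/openvswitch/ovs-vswitchd.log \
         /var/log/messages; do
    # Skip any log that does not exist on this host.
    [ -f "$f" ] && cp "$f" "$TMP"/
done
tar -czf "$OUT" -C "$TMP" .
echo "wrote $OUT"
```

Run it as root so the copies succeed, then attach the resulting tarball to the reply.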
On Mon, Feb 6, 2017 at 12:05 PM, Shalabh Goel <shalabhgoel13(a)gmail.com>
wrote:
> Yes, I am using OVS as the switch type, and I did not know that it was
> not officially supported.
>
> The output of ovs-vsctl show is as follows:
>
> f634d53e-4849-488b-8454-6b1fafa7c6ac
> ovs_version: "2.6.90"
>
> I am attaching OVS switch logs below:
>
> /var/log/openvswitch/ovsdb-server.log
>
>
> 2017-02-06T09:46:07.788Z|00001|vlog|INFO|opened log file
> /var/log/openvswitch/ovsdb-server.log
> 2017-02-06T09:46:07.791Z|00002|ovsdb_server|INFO|ovsdb-server (Open
> vSwitch) 2.6.90
> 2017-02-06T09:46:17.802Z|00003|memory|INFO|2296 kB peak resident set
> size after 10.0 seconds
> 2017-02-06T09:46:17.802Z|00004|memory|INFO|cells:16 json-caches:1
> monitors:1 sessions:1
>
> ovs-vswitchd.log
>
>
> 2017-02-06T09:46:07.999Z|00001|vlog|INFO|opened log file
> /var/log/openvswitch/ovs-vswitchd.log
> 2017-02-06T09:46:08.036Z|00002|ovs_numa|INFO|Discovered 24 CPU cores on
> NUMA node 0
> 2017-02-06T09:46:08.036Z|00003|ovs_numa|INFO|Discovered 24 CPU cores on
> NUMA node 1
> 2017-02-06T09:46:08.036Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and
> 48 CPU cores
> 2017-02-06T09:46:08.037Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock:
> connecting...
> 2017-02-06T09:46:08.037Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock:
> connected
> 2017-02-06T09:46:08.039Z|00007|bridge|INFO|ovs-vswitchd (Open vSwitch)
> 2.6.90
>
> What should I do now?
>
> The engine says that "Host host2 does not comply with the cluster Default
> networks, the following networks are missing on host: 'ovirtmgmt'"
>
> What other logs should I attach?
>
> Thanks
>
> Shalabh Goel
>
> On Sun, Feb 5, 2017 at 1:10 PM, Edward Haas <ehaas(a)redhat.com> wrote:
>
>> Based on what I can see, you used OVS as the switch type, and it seems
>> OVS (openvswitch) is not properly installed on your host.
>> Make sure OVS is operational by issuing "ovs-vsctl show".
>>
>> Note that OVS network support is not an official release feature; you
>> should use it only on versions 4.1 and up.
>> Fixes will probably be submitted to master (appearing in nightly builds).
>>
>> Next time, please include the mailing list in your replies and attach the
>> log files; it keeps the thread easier to follow.
>>
>> Thanks,
>> Edy.
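Acting on this advice can be scripted. A minimal sketch, assuming the default Open vSwitch database socket path that appears in the error messages in this thread:

```shell
#!/bin/sh
# Sanity-check the Open vSwitch installation before retrying setupNetworks.
SOCK=/var/run/openvswitch/db.sock
if command -v ovs-vsctl >/dev/null 2>&1; then
    echo "ovs-vsctl is installed"
else
    echo "ovs-vsctl is NOT installed"
fi
# vdsm talks to ovsdb-server through this unix socket; when it is absent,
# every ovs-vsctl call fails with "database connection failed".
if [ -S "$SOCK" ]; then
    echo "$SOCK exists - ovsdb-server is running"
else
    echo "$SOCK is missing - ovsdb-server is not running"
fi
```

If both checks pass, "ovs-vsctl show" should print the switch configuration instead of erroring out.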
>>
>> On Fri, Feb 3, 2017 at 5:07 AM, Shalabh Goel <shalabhgoel13(a)gmail.com>
>> wrote:
>>
>>> log from messages
>>>
>>> Feb 3 08:27:53 ovirtnode3 ovs-vsctl: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock:
>>> database connection failed (No such file or directory)
>>> Feb 3 08:27:53 ovirtnode3 journal: vdsm vds ERROR Executing commands
>>> failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock: database
>>> connection failed (No such file or directory)#012Traceback (most recent
>>> call last):#012  File "/usr/share/vdsm/API.py", line 1531, in
>>> setupNetworks#012    supervdsm.getProxy().setupNetworks(networks,
>>> bondings, options)#012  File
>>> "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 53, in
>>> __call__#012    return callMethod()#012  File
>>> "/usr/lib/python2.7/site-packages/vdsm/supervdsm.py", line 51, in
>>> <lambda>#012    **kwargs)#012  File "<string>", line 2, in
>>> setupNetworks#012  File
>>> "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
>>> _callmethod#012    raise convert_to_error(kind,
>>> result)#012ConfigNetworkError: (21, 'Executing commands failed:
>>> ovs-vsctl: unix:/var/run/openvswitch/db.sock: database connection failed
>>> (No such file or directory)')
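The "No such file or directory" on db.sock above means ovsdb-server never started. A hedged sketch of the usual remedy on a systemd host (the unit name `openvswitch` matches the `systemctl status openvswitch.service` call visible in supervdsm.log; run as root, and dry-run first):

```shell
#!/bin/sh
# ovsdb-server is not running, so its control socket is absent.
# On a systemd host the usual fix is to enable and start the unit.
# This sketch only prints the command unless DO_IT=1, so it is safe
# to inspect before actually running it.
FIX="systemctl enable --now openvswitch"
if [ "${DO_IT:-0}" = "1" ]; then
    $FIX
else
    echo "dry-run: $FIX"
fi
```

After the service is up, `ls -l /var/run/openvswitch/db.sock` should show the socket and `ovs-vsctl show` should succeed.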
>>>
>>> Log from vdsm.log
>>>
>>> Reactor thread::INFO::2017-02-03 08:32:24,638::protocoldetector
>>> ::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepted
>>> connection from ::1:53082
>>> Reactor thread::DEBUG::2017-02-03 08:32:24,643::protocoldetector
>>> ::91::ProtocolDetector.Detector::(__init__) Using required_size=11
>>> Reactor thread::INFO::2017-02-03 08:32:24,643::protocoldetector
>>> ::127::ProtocolDetector.Detector::(handle_read) Detected protocol xml
>>> from ::1:53082
>>> Reactor thread::DEBUG::2017-02-03 08:32:24,643::bindingxmlrpc::1
>>> 317::XmlDetector::(handle_socket) xml over http detected from
('::1',
>>> 53082)
>>> BindingXMLRPC::INFO::2017-02-03
08:32:24,643::xmlrpc::73::vds.XMLRPCServer::(handle_request)
>>> Starting request handler for ::1:53082
>>> Thread-20::INFO::2017-02-03 08:32:24,644::xmlrpc::83::vds.
>>> XMLRPCServer::(_process_requests) Request handler for ::1:53082 started
>>> Thread-20::DEBUG::2017-02-03
08:32:24,644::bindingxmlrpc::1263::vds::(wrapper)
>>> client [::1]::call getAllVmStats with () {}
>>> Thread-20::DEBUG::2017-02-03
08:32:24,644::bindingxmlrpc::1270::vds::(wrapper)
>>> return getAllVmStats with {'status': {'message': 'Done', 'code': 0},
>>> 'statsList': (suppressed)}
>>> Thread-20::INFO::2017-02-03
08:32:24,644::bindingxmlrpc::1297::vds::(wrapper)
>>> RPC call getAllVmStats finished (code=0) in 0.00 seconds
>>> Thread-20::INFO::2017-02-03 08:32:24,645::xmlrpc::91::vds.
>>> XMLRPCServer::(_process_requests) Request handler for ::1:53082 stopped
>>> periodic/0::DEBUG::2017-02-03 08:32:24,907::sampling::508::virt.sampling.VMBulkSampler::(__call__)
>>> sampled timestamp 4347942.15 elapsed 0.010 acquired True domains all
>>> periodic/2::DEBUG::2017-02-03 08:32:24,908::task::599::storage.TaskManager.Task::(_updateState)
>>> (Task='bb07f89c-6165-42bc-902f-1b7701b813fa') moving from state init -> state preparing
>>> periodic/2::INFO::2017-02-03
08:32:24,908::logUtils::49::dispatcher::(wrapper)
>>> Run and protect: repoStats(options=None)
>>> periodic/2::INFO::2017-02-03
08:32:24,908::logUtils::52::dispatcher::(wrapper)
>>> Run and protect: repoStats, Return response: {}
>>> periodic/2::DEBUG::2017-02-03 08:32:24,909::task::1195::storage.TaskManager.Task::(prepare)
>>> (Task='bb07f89c-6165-42bc-902f-1b7701b813fa') finished: {}
>>> periodic/2::DEBUG::2017-02-03 08:32:24,909::task::599::storage.TaskManager.Task::(_updateState)
>>> (Task='bb07f89c-6165-42bc-902f-1b7701b813fa') moving from state preparing -> state finished
>>> periodic/2::DEBUG::2017-02-03 08:32:24,909::resourceManager:
>>> :910::storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll
>>> requests {} resources {}
>>> periodic/2::DEBUG::2017-02-03 08:32:24,909::resourceManager:
>>> :947::storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll
>>> requests {}
>>> periodic/2::DEBUG::2017-02-03
08:32:24,909::task::997::storage.TaskManager.Task::(_decref)
>>> (Task='bb07f89c-6165-42bc-902f-1b7701b813fa') ref 0 aborting False
>>> jsonrpc/4::DEBUG::2017-02-03 08:32:26,653::__init__::532::jsonrpc.JsonRpcServer::(_handle_request)
>>> Calling 'Host.getAllVmStats' in bridge with {}
>>> jsonrpc/4::DEBUG::2017-02-03 08:32:26,653::__init__::557::jsonrpc.JsonRpcServer::(_handle_request)
>>> Return 'Host.getAllVmStats' in bridge with (suppressed)
>>> jsonrpc/4::INFO::2017-02-03 08:32:26,653::__init__::515::jsonrpc.JsonRpcServer::(_serveRequest)
>>> RPC call Host.getAllVmStats succeeded in 0.00 seconds
>>> jsonrpc/6::DEBUG::2017-02-03 08:32:27,706::__init__::532::jsonrpc.JsonRpcServer::(_handle_request)
>>> Calling 'Host.getStats' in bridge with {}
>>> jsonrpc/6::DEBUG::2017-02-03 08:32:27,707::task::599::storage.TaskManager.Task::(_updateState)
>>> (Task='9285329e-485b-4165-8bdc-4f411f394356') moving from state init -> state preparing
>>> jsonrpc/6::INFO::2017-02-03
08:32:27,707::logUtils::49::dispatcher::(wrapper)
>>> Run and protect: repoStats(options=None)
>>> jsonrpc/6::INFO::2017-02-03
08:32:27,707::logUtils::52::dispatcher::(wrapper)
>>> Run and protect: repoStats, Return response: {}
>>> jsonrpc/6::DEBUG::2017-02-03 08:32:27,707::task::1195::storage.TaskManager.Task::(prepare)
>>> (Task='9285329e-485b-4165-8bdc-4f411f394356') finished: {}
>>> jsonrpc/6::DEBUG::2017-02-03 08:32:27,707::task::599::storage.TaskManager.Task::(_updateState)
>>> (Task='9285329e-485b-4165-8bdc-4f411f394356') moving from state preparing -> state finished
>>> jsonrpc/6::DEBUG::2017-02-03 08:32:27,708::resourceManager:
>>> :910::storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll
>>> requests {} resources {}
>>> jsonrpc/6::DEBUG::2017-02-03 08:32:27,708::resourceManager:
>>> :947::storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll
>>> requests {}
>>> jsonrpc/6::DEBUG::2017-02-03
08:32:27,708::task::997::storage.TaskManager.Task::(_decref)
>>> (Task='9285329e-485b-4165-8bdc-4f411f394356') ref 0 aborting False
>>> jsonrpc/6::DEBUG::2017-02-03 08:32:27,712::__init__::557::jsonrpc.JsonRpcServer::(_handle_request)
>>> Return 'Host.getStats' in
>>> bridge with {'cpuStatistics': {'28': {'cpuUser':
'0.13', 'nodeIndex': 0,
>>> 'cpuSys': '0.00', 'cpuIdle': '99.87'},
'29': {'cpuUser': '0.00',
>>> 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle':
'100.00'}, '24': {'cpuUser':
>>> '0.00', 'nodeIndex': 0, 'cpuSys': '0.00',
'cpuIdle': '100.00'}, '25':
>>> {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys':
'0.07', 'cpuIdle': '99.80'},
>>> '26': {'cpuUser': '0.00', 'nodeIndex': 0,
'cpuSys': '0.00', 'cpuIdle':
>>> '100.00'}, '27': {'cpuUser': '0.00',
'nodeIndex': 0, 'cpuSys': '0.00',
>>> 'cpuIdle': '100.00'}, '20': {'cpuUser':
'0.00', 'nodeIndex': 1, 'cpuSys':
>>> '0.00', 'cpuIdle': '100.00'}, '21':
{'cpuUser': '0.00', 'nodeIndex': 1,
>>> 'cpuSys': '0.00', 'cpuIdle': '100.00'},
'22': {'cpuUser': '0.00',
>>> 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle':
'100.00'}, '23': {'cpuUser':
>>> '0.00', 'nodeIndex': 1, 'cpuSys': '0.00',
'cpuIdle': '100.00'}, '46':
>>> {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys':
'0.00', 'cpuIdle': '100.00'},
>>> '47': {'cpuUser': '0.00', 'nodeIndex': 1,
'cpuSys': '0.00', 'cpuIdle':
>>> '100.00'}, '44': {'cpuUser': '0.00',
'nodeIndex': 1, 'cpuSys': '0.00',
>>> 'cpuIdle': '100.00'}, '45': {'cpuUser':
'0.00', 'nodeIndex': 1, 'cpuSys':
>>> '0.00', 'cpuIdle': '100.00'}, '42':
{'cpuUser': '0.00', 'nodeIndex': 1,
>>> 'cpuSys': '0.00', 'cpuIdle': '100.00'},
'43': {'cpuUser': '0.00',
>>> 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle':
'100.00'}, '40': {'cpuUser':
>>> '0.00', 'nodeIndex': 1, 'cpuSys': '0.00',
'cpuIdle': '100.00'}, '41':
>>> {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys':
'0.00', 'cpuIdle': '100.00'},
>>> '1': {'cpuUser': '0.13', 'nodeIndex': 0,
'cpuSys': '0.07', 'cpuIdle':
>>> '99.80'}, '0': {'cpuUser': '0.00',
'nodeIndex': 0, 'cpuSys': '0.00',
>>> 'cpuIdle': '100.00'}, '3': {'cpuUser':
'0.07', 'nodeIndex': 0, 'cpuSys':
>>> '0.07', 'cpuIdle': '99.86'}, '2':
{'cpuUser': '0.07', 'nodeIndex': 0,
>>> 'cpuSys': '0.00', 'cpuIdle': '99.93'},
'5': {'cpuUser': '0.00',
>>> 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle':
'100.00'}, '4': {'cpuUser':
>>> '0.00', 'nodeIndex': 0, 'cpuSys': '0.07',
'cpuIdle': '99.93'}, '7':
>>> {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys':
'0.00', 'cpuIdle': '100.00'},
>>> '6': {'cpuUser': '0.20', 'nodeIndex': 0,
'cpuSys': '0.00', 'cpuIdle':
>>> '99.80'}, '9': {'cpuUser': '0.00',
'nodeIndex': 0, 'cpuSys': '0.00',
>>> 'cpuIdle': '100.00'}, '8': {'cpuUser':
'0.00', 'nodeIndex': 0, 'cpuSys':
>>> '0.00', 'cpuIdle': '100.00'}, '39':
{'cpuUser': '0.00', 'nodeIndex': 1,
>>> 'cpuSys': '0.00', 'cpuIdle': '100.00'},
'38': {'cpuUser': '0.00',
>>> 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle':
'100.00'}, '11': {'cpuUser':
>>> '0.00', 'nodeIndex': 0, 'cpuSys': '0.00',
'cpuIdle': '100.00'}, '10':
>>> {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys':
'0.00', 'cpuIdle': '100.00'},
>>> '13': {'cpuUser': '0.00', 'nodeIndex': 1,
'cpuSys': '0.00', 'cpuIdle':
>>> '100.00'}, '12': {'cpuUser': '0.00',
'nodeIndex': 1, 'cpuSys': '0.00',
>>> 'cpuIdle': '100.00'}, '15': {'cpuUser':
'0.00', 'nodeIndex': 1, 'cpuSys':
>>> '0.00', 'cpuIdle': '100.00'}, '14':
{'cpuUser': '0.00', 'nodeIndex': 1,
>>> 'cpuSys': '0.00', 'cpuIdle': '100.00'},
'17': {'cpuUser': '0.00',
>>> 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle':
'100.00'}, '16': {'cpuUser':
>>> '0.00', 'nodeIndex': 1, 'cpuSys': '0.00',
'cpuIdle': '100.00'}, '19':
>>> {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys':
'0.00', 'cpuIdle': '100.00'},
>>> '18': {'cpuUser': '0.00', 'nodeIndex': 1,
'cpuSys': '0.00', 'cpuIdle':
>>> '100.00'}, '31': {'cpuUser': '0.00',
'nodeIndex': 0, 'cpuSys': '0.00',
>>> 'cpuIdle': '100.00'}, '30': {'cpuUser':
'0.00', 'nodeIndex': 0, 'cpuSys':
>>> '0.00', 'cpuIdle': '100.00'}, '37':
{'cpuUser': '0.00', 'nodeIndex': 1,
>>> 'cpuSys': '0.00', 'cpuIdle': '100.00'},
'36': {'cpuUser': '0.00',
>>> 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle':
'100.00'}, '35': {'cpuUser':
>>> '0.00', 'nodeIndex': 0, 'cpuSys': '0.00',
'cpuIdle': '100.00'}, '34':
>>> {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys':
'0.00', 'cpuIdle': '100.00'},
>>> '33': {'cpuUser': '0.00', 'nodeIndex': 0,
'cpuSys': '0.00', 'cpuIdle':
>>> '100.00'}, '32': {'cpuUser': '0.00',
'nodeIndex': 0, 'cpuSys': '0.00',
>>> 'cpuIdle': '100.00'}}, 'numaNodeMemFree':
{'1': {'memPercent': 3,
>>> 'memFree': '63780'}, '0': {'memPercent': 4,
'memFree': '63426'}},
>>> 'memShared': 0, 'thpState': 'always',
'ksmMergeAcrossNodes': True,
>>> 'vmCount': 0, 'memUsed': '2',
'storageDomains': {}, 'incomingVmMigrations':
>>> 0, 'network': {'ib0': {'txErrors': '0',
'state': 'down', 'sampleTime':
>>> 1486090944.903051, 'name': 'ib0', 'tx': '0',
'txDropped': '0', 'rx': '0',
>>> 'rxErrors': '0', 'speed': '1000',
'rxDropped': '0'}, 'lo': {'txErrors':
>>> '0', 'state': 'up', 'sampleTime':
1486090944.903051, 'name': 'lo', 'tx':
>>> '56174388', 'txDropped': '0', 'rx':
'56174388', 'rxErrors': '0', 'speed':
>>> '1000', 'rxDropped': '0'}, 'eno2':
{'txErrors': '0', 'state': 'down',
>>> 'sampleTime': 1486090944.903051, 'name': 'eno2',
'tx': '0', 'txDropped':
>>> '0', 'rx': '0', 'rxErrors': '0',
'speed': '1000', 'rxDropped': '0'},
>>> 'eno1': {'txErrors': '0', 'state':
'up', 'sampleTime': 1486090944.903051,
>>> 'name': 'eno1', 'tx': '26032823',
'txDropped': '0', 'rx': '174722400',
>>> 'rxErrors': '0', 'speed': '1000',
'rxDropped': '5624'}, ';vdsmdummy;':
>>> {'txErrors': '0', 'state': 'down',
'sampleTime': 1486090944.903051, 'name':
>>> ';vdsmdummy;', 'tx': '0', 'txDropped':
'0', 'rx': '0', 'rxErrors': '0',
>>> 'speed': '1000', 'rxDropped': '0'}},
'txDropped': '0', 'anonHugePages':
>>> '34', 'ksmPages': 100, 'elapsedTime':
'287.81', 'cpuLoad': '0.07',
>>> 'cpuSys': '0.00', 'diskStats': {'/var/log':
{'free': '15072'},
>>> '/var/log/core': {'free': '15072'},
'/var/run/vdsm/': {'free': '64418'},
>>> '/tmp': {'free': '914756'}}, 'cpuUserVdsmd':
'0.13', 'netConfigDirty':
>>> 'True', 'memCommitted': 0, 'ksmState': False,
'vmMigrating': 0, 'ksmCpu':
>>> 0, 'memAvailable': 127200, 'bootTime': '1486037670',
'haStats': {'active':
>>> False, 'configured': False, 'score': 0,
'localMaintenance': False,
>>> 'globalMaintenance': False}, 'momStatus': 'active',
'rxDropped': '5624',
>>> 'outgoingVmMigrations': 0, 'swapTotal': 4095,
'swapFree': 4095, 'dateTime':
>>> '2017-02-03T03:02:27 GMT', 'cpuUser': '0.02',
'memFree': 127456, 'cpuIdle':
>>> '99.98', 'vmActive': 0, 'v2vJobs': {},
'cpuSysVdsmd': '0.07'}
>>> jsonrpc/6::INFO::2017-02-03 08:32:27,713::__init__::515::jsonrpc.JsonRpcServer::(_serveRequest)
>>> RPC call Host.getStats succeeded in 0.00 seconds
>>> Reactor thread::INFO::2017-02-03 08:32:39,661::protocoldetector
>>> ::72::ProtocolDetector.AcceptorImpl::(handle_accept) Accepted
>>> connection from ::1:53084
>>>
>>> supervdsm.log
>>>
>>>
>>> File "/usr/share/vdsm/supervdsmServer", line 95, in wrapper
>>> res = func(*args, **kwargs)
>>> File "/usr/share/vdsm/supervdsmServer", line 143, in umount
>>> timeout=timeout)
>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py",
line
>>> 281, in _umount
>>> _runcmd(cmd, timeout)
>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/mount.py",
line
>>> 297, in _runcmd
>>> raise MountError(rc, ";".join((out, err)))
>>> MountError: (32, ';umount: /rhev/data-center/mnt/10.1.235.6:_iso:
>>> mountpoint not found\n')
>>> MainProcess|jsonrpc/5::DEBUG::2017-02-03 08:26:59,495::supervdsmServer:
>>> :93::SuperVdsm.ServerCallback::(wrapper) call hbaRescan with () {}
>>> MainProcess|jsonrpc/5::DEBUG::2017-02-03
08:26:59,495::commands::69::storage.HBA::(execCmd)
>>> /usr/bin/taskset --cpu-list 0-47 /usr/libexec/vdsm/fc-scan (cwd None)
>>> MainProcess|jsonrpc/5::DEBUG::2017-02-03 08:26:59,579::supervdsmServer:
>>> :100::SuperVdsm.ServerCallback::(wrapper) return hbaRescan with None
>>> MainThread::DEBUG::2017-02-03
08:27:26,367::supervdsmServer::311::SuperVdsm.Server::(main)
>>> Terminated normally
>>> MainThread::DEBUG::2017-02-03
08:27:36,148::__init__::47::blivet::(register_device_format)
>>> registered device format class DeviceFormat as None
>>> MainThread::DEBUG::2017-02-03
08:27:36,149::__init__::47::blivet::(register_device_format)
>>> registered device format class BIOSBoot as biosboot
>>> MainThread::DEBUG::2017-02-03
08:27:36,150::__init__::47::blivet::(register_device_format)
>>> registered device format class DiskLabel as disklabel
>>> MainThread::DEBUG::2017-02-03
08:27:36,150::__init__::47::blivet::(register_device_format)
>>> registered device format class DMRaidMember as dmraidmember
>>> MainThread::DEBUG::2017-02-03
08:27:36,151::__init__::47::blivet::(register_device_format)
>>> registered device format class Ext2FS as ext2
>>> MainThread::DEBUG::2017-02-03
08:27:36,152::__init__::47::blivet::(register_device_format)
>>> registered device format class Ext3FS as ext3
>>> MainThread::DEBUG::2017-02-03
08:27:36,152::__init__::47::blivet::(register_device_format)
>>> registered device format class Ext4FS as ext4
>>> MainThread::DEBUG::2017-02-03
08:27:36,152::__init__::47::blivet::(register_device_format)
>>> registered device format class FATFS as vfat
>>> MainThread::DEBUG::2017-02-03
08:27:36,152::__init__::47::blivet::(register_device_format)
>>> registered device format class EFIFS as efi
>>> MainThread::DEBUG::2017-02-03
08:27:36,152::__init__::47::blivet::(register_device_format)
>>> registered device format class BTRFS as btrfs
>>> MainThread::DEBUG::2017-02-03
08:27:36,153::__init__::47::blivet::(register_device_format)
>>> registered device format class GFS2 as gfs2
>>> MainThread::DEBUG::2017-02-03
08:27:36,153::__init__::47::blivet::(register_device_format)
>>> registered device format class JFS as jfs
>>> MainThread::DEBUG::2017-02-03
08:27:36,153::__init__::47::blivet::(register_device_format)
>>> registered device format class ReiserFS as reiserfs
>>> MainThread::DEBUG::2017-02-03
08:27:36,153::__init__::47::blivet::(register_device_format)
>>> registered device format class XFS as xfs
>>> MainThread::DEBUG::2017-02-03
08:27:36,153::__init__::47::blivet::(register_device_format)
>>> registered device format class HFS as hfs
>>> MainThread::DEBUG::2017-02-03
08:27:36,153::__init__::47::blivet::(register_device_format)
>>> registered device format class AppleBootstrapFS as appleboot
>>> MainThread::DEBUG::2017-02-03
08:27:36,154::__init__::47::blivet::(register_device_format)
>>> registered device format class HFSPlus as hfs+
>>> MainThread::DEBUG::2017-02-03
08:27:36,154::__init__::47::blivet::(register_device_format)
>>> registered device format class MacEFIFS as macefi
>>> MainThread::DEBUG::2017-02-03
08:27:36,154::__init__::47::blivet::(register_device_format)
>>> registered device format class NTFS as ntfs
>>> MainThread::DEBUG::2017-02-03
08:27:36,154::__init__::47::blivet::(register_device_format)
>>> registered device format class NFS as nfs
>>> MainThread::DEBUG::2017-02-03
08:27:36,154::__init__::47::blivet::(register_device_format)
>>> registered device format class NFSv4 as nfs4
>>> MainThread::DEBUG::2017-02-03
08:27:36,154::__init__::47::blivet::(register_device_format)
>>> registered device format class Iso9660FS as iso9660
>>> MainThread::DEBUG::2017-02-03
08:27:36,154::__init__::47::blivet::(register_device_format)
>>> registered device format class NoDevFS as nodev
>>> MainThread::DEBUG::2017-02-03
08:27:36,155::__init__::47::blivet::(register_device_format)
>>> registered device format class DevPtsFS as devpts
>>> MainThread::DEBUG::2017-02-03
08:27:36,155::__init__::47::blivet::(register_device_format)
>>> registered device format class ProcFS as proc
>>> MainThread::DEBUG::2017-02-03
08:27:36,155::__init__::47::blivet::(register_device_format)
>>> registered device format class SysFS as sysfs
>>> MainThread::DEBUG::2017-02-03
08:27:36,155::__init__::47::blivet::(register_device_format)
>>> registered device format class TmpFS as tmpfs
>>> MainThread::DEBUG::2017-02-03
08:27:36,155::__init__::47::blivet::(register_device_format)
>>> registered device format class BindFS as bind
>>> MainThread::DEBUG::2017-02-03
08:27:36,155::__init__::47::blivet::(register_device_format)
>>> registered device format class SELinuxFS as selinuxfs
>>> MainThread::DEBUG::2017-02-03
08:27:36,155::__init__::47::blivet::(register_device_format)
>>> registered device format class USBFS as usbfs
>>> MainThread::DEBUG::2017-02-03
08:27:36,155::__init__::47::blivet::(register_device_format)
>>> registered device format class EFIVarFS as efivarfs
>>> MainThread::DEBUG::2017-02-03
08:27:36,179::__init__::47::blivet::(register_device_format)
>>> registered device format class LUKS as luks
>>> MainThread::DEBUG::2017-02-03
08:27:36,181::__init__::47::blivet::(register_device_format)
>>> registered device format class LVMPhysicalVolume as lvmpv
>>> MainThread::DEBUG::2017-02-03
08:27:36,181::__init__::47::blivet::(register_device_format)
>>> registered device format class MDRaidMember as mdmember
>>> MainThread::DEBUG::2017-02-03
08:27:36,181::__init__::47::blivet::(register_device_format)
>>> registered device format class MultipathMember as multipath_member
>>> MainThread::DEBUG::2017-02-03
08:27:36,181::__init__::47::blivet::(register_device_format)
>>> registered device format class PPCPRePBoot as prepboot
>>> MainThread::DEBUG::2017-02-03
08:27:36,182::__init__::47::blivet::(register_device_format)
>>> registered device format class SwapSpace as swap
>>> MainThread::DEBUG::2017-02-03
08:27:36,216::storage_log::69::blivet::(log_exception_info)
>>> IGNORED: Caught exception, continuing.
>>> MainThread::DEBUG::2017-02-03
08:27:36,216::storage_log::72::blivet::(log_exception_info)
>>> IGNORED: Problem description: failed to get initiator name from
>>> iscsi firmware
>>> MainThread::DEBUG::2017-02-03
08:27:36,216::storage_log::73::blivet::(log_exception_info)
>>> IGNORED: Begin exception details.
>>> MainThread::DEBUG::2017-02-03
08:27:36,216::storage_log::76::blivet::(log_exception_info)
>>> IGNORED: Traceback (most recent call last):
>>> MainThread::DEBUG::2017-02-03
08:27:36,216::storage_log::76::blivet::(log_exception_info)
>>> IGNORED: File "/usr/lib/python2.7/site-packages/blivet
>>> /iscsi.py", line 146, in __init__
>>> MainThread::DEBUG::2017-02-03
08:27:36,216::storage_log::76::blivet::(log_exception_info)
>>> IGNORED: initiatorname = libiscsi.get_firmware_initiator_name()
>>> MainThread::DEBUG::2017-02-03
08:27:36,216::storage_log::76::blivet::(log_exception_info)
>>> IGNORED: IOError: Unknown error
>>> MainThread::DEBUG::2017-02-03
08:27:36,216::storage_log::77::blivet::(log_exception_info)
>>> IGNORED: End exception details.
>>> MainThread::DEBUG::2017-02-03
08:27:36,227::supervdsmServer::271::SuperVdsm.Server::(main)
>>> Making sure I'm root - SuperVdsm
>>> MainThread::DEBUG::2017-02-03
08:27:36,227::supervdsmServer::280::SuperVdsm.Server::(main)
>>> Parsing cmd args
>>> MainThread::DEBUG::2017-02-03
08:27:36,227::supervdsmServer::283::SuperVdsm.Server::(main)
>>> Cleaning old socket /var/run/vdsm/svdsm.sock
>>> MainThread::DEBUG::2017-02-03
08:27:36,227::supervdsmServer::287::SuperVdsm.Server::(main)
>>> Setting up keep alive thread
>>> MainThread::DEBUG::2017-02-03
08:27:36,228::supervdsmServer::293::SuperVdsm.Server::(main)
>>> Creating remote object manager
>>> MainThread::DEBUG::2017-02-03
08:27:36,228::fileUtils::246::storage.fileUtils::(chown)
>>> Changing owner for /var/run/vdsm/svdsm.sock, to (36:36)
>>> MainThread::DEBUG::2017-02-03
08:27:36,228::supervdsmServer::303::SuperVdsm.Server::(main)
>>> Started serving super vdsm object
>>> sourceRoute::DEBUG::2017-02-03 08:27:36,229::sourceroutethread::79::root::(_subscribeToInotifyLoop)
>>> sourceRouteThread.subscribeToInotifyLoop started
>>> restore-net::INFO::2017-02-03
08:27:37,160::vdsm-restore-net-config::465::root::(restore)
>>> networks already restored. doing nothing.
>>> MainProcess|hsm/init::DEBUG::2017-02-03 08:27:39,601::supervdsmServer:
>>> :93::SuperVdsm.ServerCallback::(wrapper) call hbaRescan with () {}
>>> MainProcess|hsm/init::DEBUG::2017-02-03
08:27:39,602::commands::69::storage.HBA::(execCmd)
>>> /usr/bin/taskset --cpu-list 0-47 /usr/libexec/vdsm/fc-scan (cwd None)
>>> MainProcess|hsm/init::DEBUG::2017-02-03 08:27:39,691::supervdsmServer:
>>> :100::SuperVdsm.ServerCallback::(wrapper) return hbaRescan with None
>>> MainProcess|jsonrpc/4::DEBUG::2017-02-03 08:27:48,355::supervdsmServer:
>>> :93::SuperVdsm.ServerCallback::(wrapper) call network_caps with () {}
>>> MainProcess|jsonrpc/4::DEBUG::2017-02-03
08:27:48,360::commands::69::root::(execCmd)
>>> /usr/bin/taskset --cpu-list 0-47 /sbin/ip route show to 0.0.0.0/0
>>> table
>>> main (cwd None)
>>> MainProcess|jsonrpc/4::DEBUG::2017-02-03
08:27:48,364::commands::93::root::(execCmd)
>>> SUCCESS: <err> = ''; <rc> = 0
>>> MainProcess|jsonrpc/4::DEBUG::2017-02-03
08:27:48,408::libvirtconnection::160::root::(get)
>>> trying to connect libvirt
>>> MainProcess|jsonrpc/4::DEBUG::2017-02-03
08:27:48,450::commands::69::root::(execCmd)
>>> /usr/bin/taskset --cpu-list 0-47 /usr/sbin/tc qdisc show (cwd None)
>>> MainProcess|jsonrpc/4::DEBUG::2017-02-03
08:27:48,453::commands::93::root::(execCmd)
>>> SUCCESS: <err> = ''; <rc> = 0
>>> MainProcess|jsonrpc/4::DEBUG::2017-02-03
08:27:48,454::commands::69::root::(execCmd)
>>> /usr/bin/taskset --cpu-list 0-47 /bin/systemctl --no-pager list-unit-files (cwd None)
>>> MainProcess|jsonrpc/4::DEBUG::2017-02-03
08:27:48,530::commands::93::root::(execCmd)
>>> SUCCESS: <err> = ''; <rc> = 0
>>> MainProcess|jsonrpc/4::DEBUG::2017-02-03
08:27:48,530::commands::69::root::(execCmd)
>>> /usr/bin/taskset --cpu-list 0-47 /bin/systemctl status openvswitch.service (cwd None)
>>> MainProcess|jsonrpc/4::DEBUG::2017-02-03
08:27:48,561::commands::93::root::(execCmd)
>>> FAILED: <err> = ''; <rc> = 3
>>> MainProcess|jsonrpc/4::DEBUG::2017-02-03 08:27:48,561::supervdsmServer:
>>> :100::SuperVdsm.ServerCallback::(wrapper) return network_caps with
{'bridges': {}, 'bondings': {}, 'nameservers': ['10.1.1.8'], 'nics':
>>> {'ib0': {'ipv6autoconf': True, 'addr': '', 'ipv6gateway': '::',
>>> 'dhcpv6': False, 'ipv6addrs': [], 'mtu': '4092', 'dhcpv4': False,
>>> 'netmask': '', 'ipv4defaultroute': False, 'ipv4addrs': [], 'hwaddr':
>>> '80:00:02:08:fe:80:00:00:00:00:00:00:0c:c4:7a:ff:ff:0f:d8:85',
>>> 'speed': 0, 'gateway': ''}, 'eno1': {'ipv6autoconf': False, 'addr':
>>> '10.1.235.5', 'ipv6gateway': '::', 'dhcpv6': False, 'ipv6addrs': [],
>>> 'mtu': '1500', 'dhcpv4': False, 'netmask': '255.255.0.0',
>>> 'ipv4defaultroute': True, 'ipv4addrs': ['10.1.235.5/16'], 'hwaddr':
>>> '0c:c4:7a:0f:6b:2a', 'speed': 1000, 'gateway': '10.1.0.1'}, 'eno2':
>>> {'ipv6autoconf': True, 'addr': '', 'ipv6gateway': '::', 'dhcpv6':
>>> False, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '',
>>> 'ipv4defaultroute': False, 'ipv4addrs': [], 'hwaddr':
>>> '0c:c4:7a:0f:6b:2b', 'speed': 0, 'gateway': ''}}, 'supportsIPv6':
>>> True, 'vlans': {}, 'networks': {}}
>>> MainProcess|jsonrpc/5::DEBUG::2017-02-03 08:27:53,690::supervdsmServer:
>>> :93::SuperVdsm.ServerCallback::(wrapper) call setupNetworks with ({},
>>> {u'connectivityCheck': u'true', u'connectivityTimeout': 120}) {}
>>> MainProcess|jsonrpc/5::DEBUG::2017-02-03
08:27:53,691::api::247::root::(setupNetworks)
>>> Setting up network according to configuration:
networks:{u'ovirtmgmt':
>>> {u'ipv6autoconf': False, u'nic': u'eno1',
u'ipaddr': u'10.1.235.5',
>>> u'netmask': u'255.255.0.0', u'mtu': 1500,
u'switch': u'ovs', u'dhcpv6':
>>> False, u'STP': u'no', u'bridged': u'true',
u'gateway': u'10.1.0.1',
>>> u'defaultRoute': True}}, bondings:{},
options:{u'connectivityCheck':
>>> u'true', u'connectivityTimeout': 120}
>>> MainProcess|jsonrpc/5::DEBUG::2017-02-03
08:27:53,691::api::252::root::(setupNetworks)
>>> Validating configuration
>>> MainProcess|jsonrpc/5::DEBUG::2017-02-03
08:27:53,694::commands::69::root::(execCmd)
>>> /usr/bin/taskset --cpu-list 0-47 /sbin/ip route show to 0.0.0.0/0
>>> table main (cwd None)
>>> MainProcess|jsonrpc/5::DEBUG::2017-02-03
08:27:53,698::commands::93::root::(execCmd)
>>> SUCCESS: <err> = ''; <rc> = 0
>>> MainProcess|jsonrpc/5::DEBUG::2017-02-03
08:27:53,771::commands::69::root::(execCmd)
>>> /usr/bin/taskset --cpu-list 0-47 /usr/sbin/tc qdisc show (cwd None)
>>> MainProcess|jsonrpc/5::DEBUG::2017-02-03
08:27:53,774::commands::93::root::(execCmd)
>>> SUCCESS: <err> = ''; <rc> = 0
>>> MainProcess|jsonrpc/5::DEBUG::2017-02-03
08:27:53,774::vsctl::57::root::(commit)
>>> Executing commands: /usr/bin/ovs-vsctl --oneline --format=json -- list
>>> Bridge -- list Port -- list Interface
>>> MainProcess|jsonrpc/5::DEBUG::2017-02-03
08:27:53,774::commands::69::root::(execCmd)
>>> /usr/bin/taskset --cpu-list 0-47 /usr/bin/ovs-vsctl --oneline --format=json
>>> -- list Bridge -- list Port -- list Interface (cwd None)
>>> MainProcess|jsonrpc/5::DEBUG::2017-02-03
08:27:53,807::commands::93::root::(execCmd)
>>> FAILED: <err> = 'ovs-vsctl: unix:/var/run/openvswitch/db.sock:
>>> database connection failed (No such file or directory)\n'; <rc> = 1
>>> MainProcess|jsonrpc/5::ERROR::2017-02-03 08:27:53,808::supervdsmServer:
>>> :97::SuperVdsm.ServerCallback::(wrapper) Error in setupNetworks
>>> Traceback (most recent call last):
>>> File "/usr/share/vdsm/supervdsmServer", line 95, in wrapper
>>> res = func(*args, **kwargs)
>>> File "/usr/lib/python2.7/site-packages/vdsm/network/api.py",
line
>>> 254, in setupNetworks
>>> netswitch.validate(networks, bondings)
>>> File
"/usr/lib/python2.7/site-packages/vdsm/network/netswitch.py",
>>> line 121, in validate
>>> ovs_switch.validate_network_setup(ovs_nets, ovs_bonds)
>>> File
"/usr/lib/python2.7/site-packages/vdsm/network/ovs/switch.py",
>>> line 40, in validate_network_setup
>>> ovs_networks = info.create_netinfo(info.OvsInfo())['networks']
>>> File
"/usr/lib/python2.7/site-packages/vdsm/network/ovs/info.py",
>>> line 73, in __init__
>>> ovs_db = OvsDB(driver.create())
>>> File
"/usr/lib/python2.7/site-packages/vdsm/network/ovs/info.py",
>>> line 64, in __init__
>>> transaction.add(ifaces_command)
>>> File
"/usr/lib/python2.7/site-packages/vdsm/network/ovs/driver/__init__.py",
>>> line 54, in __exit__
>>> self.result = self.commit()
>>> File
"/usr/lib/python2.7/site-packages/vdsm/network/ovs/driver/vsctl.py",
>>> line 63, in commit
>>> 'Executing commands failed: %s' % '\n'.join(err))
>>> ConfigNetworkError: (21, 'Executing commands failed: ovs-vsctl:
>>> unix:/var/run/openvswitch/db.sock: database connection failed (No such
>>> file or directory)')
>>> MainProcess|Thread-2::DEBUG::2017-02-03 08:27:54,440::supervdsmServer::93::SuperVdsm.ServerCallback::(wrapper) call ksmTune with () {}
>>> MainProcess|Thread-2::DEBUG::2017-02-03 08:27:54,440::supervdsmServer::100::SuperVdsm.ServerCallback::(wrapper) return ksmTune with None
>>> MainProcess|jsonrpc/0::DEBUG::2017-02-03 08:30:00,918::supervdsmServer::93::SuperVdsm.ServerCallback::(wrapper) call network_caps with () {}
>>> MainProcess|jsonrpc/0::DEBUG::2017-02-03 08:30:00,922::commands::69::root::(execCmd) /usr/bin/taskset --cpu-list 0-47 /sbin/ip route show to 0.0.0.0/0 table main (cwd None)
>>> MainProcess|jsonrpc/0::DEBUG::2017-02-03 08:30:00,927::commands::93::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
>>> MainProcess|jsonrpc/0::DEBUG::2017-02-03 08:30:01,013::commands::69::root::(execCmd) /usr/bin/taskset --cpu-list 0-47 /usr/sbin/tc qdisc show (cwd None)
>>> MainProcess|jsonrpc/0::DEBUG::2017-02-03 08:30:01,016::commands::93::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
>>> MainProcess|jsonrpc/0::DEBUG::2017-02-03 08:30:01,017::supervdsmServer::100::SuperVdsm.ServerCallback::(wrapper) return network_caps with
>>> {'bridges': {}, 'bondings': {}, 'nameservers': ['10.1.1.8'], 'nics':
>>> {'ib0': {'ipv6autoconf': True, 'addr': '', 'ipv6gateway': '::', 'dhcpv6': False, 'ipv6addrs': [], 'mtu': '4092', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute': False, 'ipv4addrs': [], 'hwaddr': '80:00:02:08:fe:80:00:00:00:00:00:00:0c:c4:7a:ff:ff:0f:d8:85', 'speed': 0, 'gateway': ''},
>>> 'eno1': {'ipv6autoconf': False, 'addr': '10.1.235.5', 'ipv6gateway': '::', 'dhcpv6': False, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '255.255.0.0', 'ipv4defaultroute': True, 'ipv4addrs': ['10.1.235.5/16'], 'hwaddr': '0c:c4:7a:0f:6b:2a', 'speed': 1000, 'gateway': '10.1.0.1'},
>>> 'eno2': {'ipv6autoconf': True, 'addr': '', 'ipv6gateway': '::', 'dhcpv6': False, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'ipv4defaultroute': False, 'ipv4addrs': [], 'hwaddr': '0c:c4:7a:0f:6b:2b', 'speed': 0, 'gateway': ''}},
>>> 'supportsIPv6': True, 'vlans': {}, 'networks': {}}
>>> MainProcess|jsonrpc/1::DEBUG::2017-02-03 08:30:01,036::supervdsmServer::93::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {}
>>> MainProcess|jsonrpc/1::DEBUG::2017-02-03 08:30:01,066::supervdsmServer::100::SuperVdsm.ServerCallback::(wrapper) return getHardwareInfo with
>>> {'systemProductName': 'X9DRT', 'systemSerialNumber': '0123456789', 'systemFamily': 'To be filled by O.E.M.', 'systemVersion': '0123456789', 'systemUUID': '00000000-0000-0000-0000-0CC47A0F6B2A', 'systemManufacturer': 'Supermicro'}
>>>
>>>
>>>
>>> On Fri, Feb 3, 2017 at 12:05 AM, Edward Haas <ehaas(a)redhat.com> wrote:
>>>
>>>> Hello Shalabh,
>>>>
>>>> Please provide the logs from your node:
>>>> - messages
>>>> - vdsm/vdsm.log, vdsm/supervdsm.log
>>>>
>>>> It may be that openvswitch is not installed on the host, although
>>>> VDSM should not require it for its operation.
>>>>
>>>> Thanks,
>>>> Edy.
>>>>
>>>>
>>>> On Thu, Feb 2, 2017 at 2:10 PM, Shalabh Goel
<shalabhgoel13(a)gmail.com>
>>>> wrote:
>>>>
>>>>> HI,
>>>>>
>>>>> I am getting the following error on my node after rebooting it.
>>>>>
>>>>> VDSM ovirtnode2 command HostSetupNetworksVDS failed: Executing
>>>>> commands failed: ovs-vsctl: unix:/var/run/openvswitch/db.sock:
>>>>> database connection failed (No such file or directory)
>>>>>
>>>>>
>>>>> To solve this, I am trying to restart ovsdb-server with the
>>>>> following command:
>>>>>
>>>>> ovsdb-server --remote=punix:/var/run/openvswitch/db.sock \
>>>>>     --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
>>>>>     --private-key=db:Open_vSwitch,SSL,private_key \
>>>>>     --certificate=db:Open_vSwitch,SSL,certificate \
>>>>>     --bootstrap-ca-cert=db:Open_vSwitch,SSL,ca_cert --pidfile --detach
>>>>>
>>>>> But I am getting the following error.
>>>>>
>>>>> ovsdb-server: /var/run/openvswitch/ovsdb-server.pid.tmp: create
>>>>> failed (No such file or directory)
>>>>>
>>>>> How can I restart the ovsdb-server? Also, the ovirtmgmt network is
>>>>> missing from my node. This happened after I rebooted the node
>>>>> following its upgrade to oVirt 4.1.
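[Editor's note: the `ovsdb-server.pid.tmp: create failed (No such file or directory)` error usually means the runtime directory /var/run/openvswitch itself is gone; on EL7-family hosts /var/run is tmpfs, so the directory vanishes across a reboot. Rather than launching ovsdb-server by hand, a safer recovery sketch is below. Assumptions not in the original: a systemd host with a service unit named `openvswitch`; verify the unit name locally with `systemctl list-unit-files | grep -i openvswitch`.]

```shell
# Recovery sketch for the pidfile error. The run directory lives on tmpfs
# and is lost at reboot, so ovsdb-server cannot create its pidfile there.
# This script only prints the suggested steps; run them as root.
RUNDIR=/var/run/openvswitch
if [ -d "$RUNDIR" ]; then
    echo "$RUNDIR exists"
else
    echo "as root: mkdir -p $RUNDIR"
fi
echo "then: systemctl restart openvswitch"
```

Starting OVS through the service unit is preferable to the manual command: the init scripts normally recreate the run directory and start ovs-vswitchd alongside ovsdb-server, which the hand-rolled invocation does not.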
>>>>>
>>>>> --
>>>>> Shalabh Goel
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users(a)ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> Shalabh Goel
>>>
>>
>>
>
>
> --
> Shalabh Goel
>