Users
May 2017
- 159 participants
- 231 discussions

Hello,
I didn't find any way to easily list all my VMs with the Ansible
modules...
I tried the ovirt4.py script, which is able to list all the facts,
including the VM list, when the number of VMs is small in a test
datacenter, but in a production datacenter I get this issue:
   File "./ovirt4.py", line 262, in <module>
     main()
   File "./ovirt4.py", line 254, in main
     vm_name=args.host,
   File "./ovirt4.py", line 213, in get_data
     vms[name] = get_dict_of_struct(connection, vm)
   File "./ovirt4.py", line 185, in get_dict_of_struct
     (device.name, [ip.address for ip in device.ips]) for device in devices
   File "./ovirt4.py", line 185, in <genexpr>
     (device.name, [ip.address for ip in device.ips]) for device in devices
TypeError: 'NoneType' object is not iterable
What is the simplest way to get this basic information with SDK4?
(With SDK3 it was: ovirt-shell -E "list vms".)
--
Nathanaël Blanchet
Network supervision
IT Infrastructure Division
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tel. +33 (0)4 67 54 84 55
Fax +33 (0)4 67 54 84 14
blanchet(a)abes.fr
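
A minimal sketch of the direct SDK4 route, assuming placeholder connection details; it lists VM names without walking all the facts, and shows the kind of None guard in the inventory script's genexpr that would avoid the traceback above:

    import ovirtsdk4 as sdk

    # Placeholder URL and credentials; replace with your engine's values.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        insecure=True,  # or ca_file='ca.pem' to verify the engine certificate
    )

    # List every VM and print its name.
    vms_service = connection.system_service().vms_service()
    for vm in vms_service.list():
        print(vm.name)

    # In ovirt4.py itself, devices with no reported IPs return None, so a
    # guard such as the following would avoid the TypeError:
    #   (device.name, [ip.address for ip in (device.ips or [])])
    connection.close()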
                    
Thank you for replying. I must've hit Reply instead of Reply All.
I've copied the mailing list on this email. Your answer may be helpful to
other people too.
Thanks,
-- Peter
On Thu, May 11, 2017 at 10:37 PM, Sandro Bonazzola <sbonazzo(a)redhat.com>
wrote:
>
>
> On Thu, May 11, 2017 at 11:33 PM, Peter Wood <peterwood.sd(a)gmail.com>
> wrote:
>
>> I believe the question was about Citrix Xen. The URL that was pointed out
>> talks about Xen on RHEL.
>>
>> Are you saying that Citrix Xen VMs conversion to oVirt is also supported?
>>
>
> I would suggest keeping users(a)ovirt.org in CC next time :-)
> I'm not sure about direct conversion from Citrix Xen; according to
> http://libguestfs.org/virt-v2v.1.html it has not been tested recently, but
> it's supposed to work.
> In the past there were issues, but according to
> https://access.redhat.com/solutions/54076 a workaround is to use p2v
> instead of v2v.
>
>
>
>
>>
>> On Thu, May 11, 2017 at 8:14 AM, Sandro Bonazzola <sbonazzo(a)redhat.com>
>> wrote:
>>
>>>
>>>
>>> On Tue, May 9, 2017 at 12:29 PM, santosh bhabal <sdbhabal(a)gmail.com>
>>> wrote:
>>>
>>>> Hello Experts,
>>>>
>>>> I am new to Ovirt community.
>>>>
>>>
>>> Hi, welcome to the oVirt community!
>>>
>>>
>>>
>>>> Apologies if this question has been asked earlier.
>>>> I just wanted to know whether oVirt supports Citrix XenServer or not.
>>>>
>>>
>>> Sadly oVirt doesn't support Xen. There was an attempt in the past to
>>> support it, see http://www.ovirt.org/documentation/how-to/xen/ but it
>>> has been abandoned.
>>>
>>> However, we support conversion of VMs from Xen to oVirt (see
>>> http://www.ovirt.org/develop/release-management/features/virt/virt-v2v-integration/ ),
>>> so you can set up an oVirt datacenter next to the Xen one and move VMs
>>> to oVirt / KVM.
>>>
>>>
>>>
>>>>
>>>> Regards
>>>> Santosh.
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> SANDRO BONAZZOLA
>>>
>>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>>
>>> Red Hat EMEA <https://www.redhat.com/>
>>> <https://red.ht/sig>
>>> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>>>
>>>
>>>
>>
>
>
> --
>
> SANDRO BONAZZOLA
>
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>
> Red Hat EMEA <https://www.redhat.com/>
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>
                    
Hello Simone and all others,
I have attached the requested files to the mail. If you have any other
inquiry, just ask; the failed installation drive will be kept safe until
this problem is solved.
Thanks for all in advance
Manuel
2017-05-02 10:55 GMT+01:00 Simone Tiraboschi <stirabos(a)redhat.com>:
> Sure, but first we need to understand what is happening: in our CI
> process everything is fine, so I think it's something specific to your env.
> Could you please share your:
> /var/log/libvirt/qemu/HostedEngine.log
> /var/log/messages
>
> thanks,
> Simone
>
>
> On Tue, May 2, 2017 at 11:38 AM, Manuel Luis Aznar <
> manuel.luis.aznar(a)gmail.com> wrote:
>
>> OK, thank you.
>>
>> I suppose this problem will probably be solved in a future release.
>>
>> Thanks,
>> Manuel
>>
>> 2017-05-02 10:35 GMT+01:00 Simone Tiraboschi <stirabos(a)redhat.com>:
>>
>>>
>>>
>>> On Tue, May 2, 2017 at 11:30 AM, Manuel Luis Aznar <
>>> manuel.luis.aznar(a)gmail.com> wrote:
>>>
>>>> Hello there again,
>>>>
>>>> Yes, as I said, I have done several clean installations and the engine VM
>>>> sometimes starts without any problem. So, Simone, any recommendation to
>>>> make the engine VM start properly?
>>>>
>>>> While it is installing, the HA agent and HA broker are down; would I get
>>>> a good result by starting the services myself?
>>>>
>>>> Any help from Simone or somebody would be appreciated
>>>> Thanks for all in advance
>>>> Manuel Luis Aznar
>>>>
>>>
>>> I suggest checking the libvirt logs.
>>>
>>>
>>>>
>>>> 2017-05-02 7:54 GMT+01:00 Simone Tiraboschi <stirabos(a)redhat.com>:
>>>>
>>>>>
>>>>>
>>>>> On Mon, May 1, 2017 at 3:14 PM, Manuel Luis Aznar <
>>>>> manuel.luis.aznar(a)gmail.com> wrote:
>>>>>
>>>>>> Hello there,
>>>>>>
>>>>>> I have been searching the internet with Google to find out why my
>>>>>> installation of ovirt-hosted-engine is failing.
>>>>>>
>>>>>> I have found this link:
>>>>>>
>>>>>>      https://www.mail-archive.com/users@ovirt.org/msg40864.html
>>>>>> (Hosted engine install failed; vdsm upset about broker)
>>>>>>
>>>>>> It seems to be the same error...
>>>>>>
>>>>>> So to knarra and Jamie Lawrence my question is:
>>>>>>
>>>>>>     Did you manage to discover the problem? In my installation I am
>>>>>> using NFS and not Gluster...
>>>>>>
>>>>>> I have read the error and it is the same error "BrokerConnectionError:
>>>>>> ...". The ovirt-ha-agent and ovirt-ha-broker did not start while the
>>>>>> installation was creating the engine VM...
>>>>>>
>>>>>
>>>>> This is just a false positive: the HA agent and the HA broker are
>>>>> still down, so vdsm is complaining, but at that point that is absolutely
>>>>> fine by itself since the engine VM doesn't exist yet.
>>>>> We already have an open bug to reduce the impact of that message.
>>>>>
>>>>> The real issue is that for some reason the engine VM could not start
>>>>> on your system.
>>>>>
>>>>>
>>>>>>
>>>>>> As I have said before, any help would be very appreciated... no matter
>>>>>> who gives it.
>>>>>> Thanks for all in advance
>>>>>> Manuel Luis Aznar
>>>>>>
>>>>>>
>>>>>>
>>>>>> 2017-05-01 12:21 GMT+01:00 Manuel Luis Aznar <
>>>>>> manuel.luis.aznar(a)gmail.com>:
>>>>>>
>>>>>>> Hello Simone and all the community,
>>>>>>>
>>>>>>> I have been doing the installation of oVirt hosted engine again and
>>>>>>> it fails; the libvirtd and vdsmd services are failing, with
>>>>>>> the following errors:
>>>>>>>
>>>>>>>
>>>>>>> libvirt daemon
>>>>>>>
>>>>>>>   libvirtd.service - Virtualization daemon
>>>>>>>    Loaded: loaded (/usr/lib/systemd/system/libvirtd.service;
>>>>>>> enabled; vendor preset: enabled)
>>>>>>>   Drop-In: /etc/systemd/system/libvirtd.service.d
>>>>>>>            └─unlimited-core.conf
>>>>>>>    Active: active (running) since lun 2017-05-01 11:43:49 WEST;
>>>>>>> 14min ago
>>>>>>>      Docs: man:libvirtd(8)
>>>>>>>            http://libvirt.org
>>>>>>>  Main PID: 21993 (libvirtd)
>>>>>>>    CGroup: /system.slice/libvirtd.service
>>>>>>>            └─21993 /usr/sbin/libvirtd --listen
>>>>>>>
>>>>>>> may 01 11:43:49 host1.bajada.es systemd[1]: Starting Virtualization
>>>>>>> daemon...
>>>>>>> may 01 11:43:49 host1.bajada.es systemd[1]: Started Virtualization
>>>>>>> daemon.
>>>>>>> may 01 11:47:45 host1.bajada.es libvirtd[21993]: libvirt version:
>>>>>>> 2.0.0, package: 10.el7_3.5 (CentOS BuildSystem <
>>>>>>> http://bugs.centos.org>, 2017-03-03-02:09:45, c1bm.rdu2.centos.org)
>>>>>>> may 01 11:47:45 host1.bajada.es libvirtd[21993]: hostname:
>>>>>>> host1.bajada.es
>>>>>>> may 01 11:47:45 host1.bajada.es libvirtd[21993]: Failed to connect
>>>>>>> to the monitor socket: No such process (Falló al conectar con el
>>>>>>> socket de monitor: No existe el proceso)
>>>>>>> may 01 11:47:45 host1.bajada.es libvirtd[21993]: internal error:
>>>>>>> process exited while connecting to monitor: /dev/random -device
>>>>>>> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x7 -msg
>>>>>>> timestamp=on
>>>>>>>                                                  Could not access
>>>>>>> KVM kernel module: Permission denied
>>>>>>>                                                  failed to
>>>>>>> initialize KVM: Permission denied
>>>>>>>
>>>>>>> vdsm daemon
>>>>>>>
>>>>>>>   vdsmd.service - Virtual Desktop Server Manager
>>>>>>>    Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled;
>>>>>>> vendor preset: enabled)
>>>>>>>    Active: active (running) since lun 2017-05-01 11:43:51 WEST;
>>>>>>> 15min ago
>>>>>>>  Main PID: 22119 (vdsm)
>>>>>>>    CGroup: /system.slice/vdsmd.service
>>>>>>>            ├─22119 /usr/bin/python2 /usr/share/vdsm/vdsm
>>>>>>>            ├─22612 /usr/libexec/ioprocess --read-pipe-fd 68
>>>>>>> --write-pipe-fd 67 --max-threads 10 --max-queued-requests 10
>>>>>>>            ├─22630 /usr/libexec/ioprocess --read-pipe-fd 76
>>>>>>> --write-pipe-fd 75 --max-threads 10 --max-queued-requests 10
>>>>>>>            ├─22887 /usr/libexec/ioprocess --read-pipe-fd 44
>>>>>>> --write-pipe-fd 43 --max-threads 10 --max-queued-requests 10
>>>>>>>            └─22893 /usr/libexec/ioprocess --read-pipe-fd 52
>>>>>>> --write-pipe-fd 50 --max-threads 10 --max-queued-requests 10
>>>>>>>
>>>>>>> may 01 11:58:37 host1.bajada.es vdsm[22119]: vdsm
>>>>>>> ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR Failed to
>>>>>>> connect to broker, the number of errors has exceeded the limit (1)
>>>>>>> may 01 11:58:37 host1.bajada.es vdsm[22119]: vdsm root ERROR failed
>>>>>>> to retrieve Hosted Engine HA info
>>>>>>>                                              Traceback (most recent
>>>>>>> call last):
>>>>>>>                                                File
>>>>>>> "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in
>>>>>>> _getHaInfo
>>>>>>>                                                  stats =
>>>>>>> instance.get_all_stats()
>>>>>>>                                                File
>>>>>>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py",
>>>>>>> line 102, in get_all_stats
>>>>>>>                                                  with
>>>>>>> broker.connection(self._retries, self._wait):
>>>>>>>                                                File
>>>>>>> "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
>>>>>>>                                                  return
>>>>>>> self.gen.next()
>>>>>>>                                                File
>>>>>>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
>>>>>>> line 99, in connection
>>>>>>>
>>>>>>>  self.connect(retries, wait)
>>>>>>>                                                File
>>>>>>> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",
>>>>>>> line 78, in connect
>>>>>>>                                                  raise
>>>>>>> BrokerConnectionError(error_msg)
>>>>>>>                                              BrokerConnectionError:
>>>>>>> Failed to connect to broker, the number of errors has exceeded the limit (1)
>>>>>>> may 01 11:58:52 host1.bajada.es vdsm[22119]: vdsm
>>>>>>> ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR Failed to
>>>>>>> connect to broker, the number of errors has exceeded the limit (1)
>>>>>>>
>>>>>>> I have been looking through the oVirt mailing list (and also
>>>>>>> searching the internet with Google) but I don't see what the problem is.
>>>>>>>
>>>>>>> I have attached the vdsm log, the ovirt-hosted-engine-setup log and
>>>>>>> the installation answers to the mail. In vdsm.log I got the following error:
>>>>>>>
>>>>>>> libvirtError: internal error: process exited while connecting to
>>>>>>> monitor: /dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x7
>>>>>>> -msg timestamp=on
>>>>>>> Could not access KVM kernel module: Permission denied
>>>>>>> failed to initialize KVM: Permission denied
>>>>>>>
>>>>>>> I have been looking into that error but haven't found anything clear,
>>>>>>> so I would greatly appreciate somebody's help...
>>>>>>>
>>>>>>> The KVM modules are loaded, because if I run "lsmod | grep
>>>>>>> kvm" I get the following:
>>>>>>>
>>>>>>> kvm_intel             170181  0
>>>>>>> kvm                   554609  1 kvm_intel
>>>>>>> irqbypass              13503  1 kvm
>>>>>>>
>>>>>>> Also the group owner of /dev/kvm is:
>>>>>>>
>>>>>>> crw-rw-rw-+ 1 root kvm 10, 232 may  1 01:26 /dev/kvm
>>>>>>>
>>>>>>> Hope somebody could help
>>>>>>> Thanks for all in advance
>>>>>>> Manuel Luis Aznar
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> 2017-03-15 11:58 GMT+00:00 Manuel Luis Aznar <
>>>>>>> manuel.luis.aznar(a)gmail.com>:
>>>>>>>
>>>>>>>> Hello there again,
>>>>>>>>
>>>>>>>> Yes, that is correct. I interrupted the setup with Ctrl+C. That was
>>>>>>>> because while I was answering I was, at the same time, looking at this
>>>>>>>> file, and I saw this:
>>>>>>>>
>>>>>>>> FAILED: conflicting vdsm and libvirt-qemu tls configuration.
>>>>>>>> vdsm.conf with ssl=True requires the following changes:
>>>>>>>> libvirtd.conf: listen_tcp=0, auth_tcp="sasl", listen_tls=1
>>>>>>>> qemu.conf: spice_tls=1.
>>>>>>>>
>>>>>>>> So, because of the error, I decided to interrupt the installation,
>>>>>>>> edit the files (vdsm.conf and qemu.conf), and then I executed the
>>>>>>>> installation again and it was successful. It seems that changing those
>>>>>>>> values in those files, in my case, produced a successful installation.
>>>>>>>>
>>>>>>>> Sorry if my English is hard to understand; now you know what
>>>>>>>> I did.
>>>>>>>>
>>>>>>>> Any question or remark, just go ahead.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Manuel
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> 2017-03-15 11:22 GMT+00:00 Simone Tiraboschi <stirabos(a)redhat.com>:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Wed, Mar 15, 2017 at 12:17 PM, Manuel Luis Aznar <
>>>>>>>>> manuel.luis.aznar(a)gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hello Simone,
>>>>>>>>>>
>>>>>>>>>> The quoted lines on your last message are on lines 1238-1245 on
>>>>>>>>>> the attached log ovirt-hosted-engine-setup file.
>>>>>>>>>>
>>>>>>>>>> That file is from the first hosted-engine setup. But this log file
>>>>>>>>>> is not the result of a complete hosted-engine-setup run. I started
>>>>>>>>>> the installation, and while I was answering the questions I was
>>>>>>>>>> watching this log file for errors; when I found these errors I
>>>>>>>>>> stopped and did this:
>>>>>>>>>>
>>>>>>>>>> FAILED: conflicting vdsm and libvirt-qemu tls configuration.
>>>>>>>>>> vdsm.conf with ssl=True requires the following changes:
>>>>>>>>>> libvirtd.conf: listen_tcp=0, auth_tcp="sasl", listen_tls=1
>>>>>>>>>> qemu.conf: spice_tls=1.
>>>>>>>>>>
>>>>>>>>>> Before this installation I had done several installations without
>>>>>>>>>> reviewing this log file, always getting failed installations.
>>>>>>>>>>
>>>>>>>>>> Please note that:
>>>>>>>>>>
>>>>>>>>>>      The setup we are currently talking about was using the
>>>>>>>>>> repo "ovirt-release41-pre.rpm". After correcting those two files
>>>>>>>>>> I ran the installation and in the end it completed successfully.
>>>>>>>>>>
>>>>>>>>>> When I have some time I will try to install again using the
>>>>>>>>>> release repo "ovirt-release41.rpm"
>>>>>>>>>>
>>>>>>>>>> If you have any explanation, question or remark, please go
>>>>>>>>>> ahead...
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> From the attached logs it seems that you voluntarily interrupted
>>>>>>>>> the setup from the keyboard here:
>>>>>>>>> 2017-03-07 11:23:17 DEBUG otopi.plugins.otopi.dialog.human
>>>>>>>>> dialog.__logString:204 DIALOG:SEND                 iptables was detected on
>>>>>>>>> your computer, do you wish setup to configure it? (Yes, No)[Yes]:
>>>>>>>>> 2017-03-07 12:06:15 DEBUG otopi.context context._executeMethod:142
>>>>>>>>> method exception
>>>>>>>>> Traceback (most recent call last):
>>>>>>>>>   File "/usr/lib/python2.7/site-packages/otopi/context.py", line
>>>>>>>>> 132, in _executeMethod
>>>>>>>>>     method['method']()
>>>>>>>>>   File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-setup/network/firewall_manager.py",
>>>>>>>>> line 157, in _customization
>>>>>>>>>     default=_('Yes'),
>>>>>>>>>   File "/usr/share/otopi/plugins/otopi/dialog/human.py", line
>>>>>>>>> 177, in queryString
>>>>>>>>>     value = self._readline(hidden=hidden)
>>>>>>>>>   File "/usr/lib/python2.7/site-packages/otopi/dialog.py", line
>>>>>>>>> 246, in _readline
>>>>>>>>>     value = self.__input.readline()
>>>>>>>>>   File "/usr/lib/python2.7/site-packages/otopi/main.py", line 53,
>>>>>>>>> in _signal
>>>>>>>>>     raise RuntimeError("SIG%s" % signum)
>>>>>>>>> RuntimeError: SIG2
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I will report back.
>>>>>>>>>> Thanks for all in advance
>>>>>>>>>> Manuel
>>>>>>>>>>
>>>>>>>>>> 2017-03-13 17:29 GMT+00:00 Simone Tiraboschi <stirabos(a)redhat.com
>>>>>>>>>> >:
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Mon, Mar 13, 2017 at 4:08 PM, Manuel Luis Aznar <
>>>>>>>>>>> manuel.luis.aznar(a)gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hello to all there again,
>>>>>>>>>>>>
>>>>>>>>>>>> I was having some trouble while installing the oVirt hosted
>>>>>>>>>>>> engine. I took a look at the hosted-engine setup logs while I was
>>>>>>>>>>>> running hosted-engine --deploy, and I found the following in the
>>>>>>>>>>>> ovirt hosted engine setup logs:
>>>>>>>>>>>>
>>>>>>>>>>>> lvm requires configuration
>>>>>>>>>>>> libvirt is not configured for vdsm yet
>>>>>>>>>>>> FAILED: conflicting vdsm and libvirt-qemu tls configuration.
>>>>>>>>>>>> vdsm.conf with ssl=True requires the following changes:
>>>>>>>>>>>> libvirtd.conf: listen_tcp=0, auth_tcp="sasl", listen_tls=1
>>>>>>>>>>>> qemu.conf: spice_tls=1.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>> hosted-engine setup already runs vdsm-tool configure
>>>>>>>>>>> --force, so it should configure libvirt and qemu for you; not sure
>>>>>>>>>>> why it failed.
>>>>>>>>>>> Could you please attach the logs from the failed
>>>>>>>>>>> hosted-engine-setup run?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> When I saw this I stopped the setup, edited these two files
>>>>>>>>>>>> (vdsm.conf and qemu.conf), set the stated configurations and ran
>>>>>>>>>>>> the deploy again. All was fine; I didn't have any trouble and the
>>>>>>>>>>>> installation finished successfully. This was using the
>>>>>>>>>>>> ovirt-release41-pre.rpm repo.
>>>>>>>>>>>>
>>>>>>>>>>>> I will try the same installation with ovirt-release41.rpm
>>>>>>>>>>>> (when I have time) and I will report back what happens.
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks for all
>>>>>>>>>>>> Manuel Luis Aznar
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> 2017-03-06 1:31 GMT+00:00 Manuel Luis Aznar <
>>>>>>>>>>>> manuel.luis.aznar(a)gmail.com>:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hey there,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I have been looking around; of course, what I am going to say
>>>>>>>>>>>>> now is, I suppose, nothing new to you:
>>>>>>>>>>>>>
>>>>>>>>>>>>> This is the status of libvirtd:
>>>>>>>>>>>>>
>>>>>>>>>>>>> ● libvirtd.service - Virtualization daemon
>>>>>>>>>>>>>    Loaded: loaded (/usr/lib/systemd/system/libvirtd.service;
>>>>>>>>>>>>> enabled; vendor preset: enabled)
>>>>>>>>>>>>>   Drop-In: /etc/systemd/system/libvirtd.service.d
>>>>>>>>>>>>>            └─unlimited-core.conf
>>>>>>>>>>>>>    Active: active (running) since lun 2017-03-06 01:25:05 WET;
>>>>>>>>>>>>> 1min 37s ago
>>>>>>>>>>>>>      Docs: man:libvirtd(8)
>>>>>>>>>>>>>            http://libvirt.org
>>>>>>>>>>>>>  Main PID: 24350 (libvirtd)
>>>>>>>>>>>>>    CGroup: /system.slice/libvirtd.service
>>>>>>>>>>>>>            └─24350 /usr/sbin/libvirtd --listen
>>>>>>>>>>>>>
>>>>>>>>>>>>> mar 06 01:25:05 host1.bajada.es systemd[1]: Starting
>>>>>>>>>>>>> Virtualization daemon...
>>>>>>>>>>>>> mar 06 01:25:05 host1.bajada.es systemd[1]: Started
>>>>>>>>>>>>> Virtualization daemon.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> After looking at the state, I fired up the engine VM with the
>>>>>>>>>>>>> command "hosted-engine --vm-start" and I got the following:
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> VM exists and is down, destroying it
>>>>>>>>>>>>> Machine destroyed
>>>>>>>>>>>>>
>>>>>>>>>>>>> ed786811-0321-431e-be4b-2d03764c1b02
>>>>>>>>>>>>>         Status = WaitForLaunch
>>>>>>>>>>>>>         nicModel = rtl8139,pv
>>>>>>>>>>>>>         statusTime = 4374100040
>>>>>>>>>>>>>         emulatedMachine = pc
>>>>>>>>>>>>>         pid = 0
>>>>>>>>>>>>>         vmName = HostedEngine
>>>>>>>>>>>>>         devices = [{'index': '2', 'iface': 'ide',
>>>>>>>>>>>>> 'specParams': {}, 'readonly': 'true', 'deviceId':
>>>>>>>>>>>>> '506df4eb-e783-4451-a8a6-993fa4dbb381', 'address': {'bus':
>>>>>>>>>>>>> '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'},
>>>>>>>>>>>>> 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'},
>>>>>>>>>>>>> {'index': '0', 'iface': 'virtio', 'format': 'raw', 'bootOrder': '1',
>>>>>>>>>>>>> 'poolID': '00000000-0000-0000-0000-000000000000', 'volumeID':
>>>>>>>>>>>>> '2bc39472-1a4b-4c7d-8ef9-1212182ad802', 'imageID':
>>>>>>>>>>>>> '08288fcf-6b12-4bd1-84d3-259992e7aa6d', 'specParams': {},
>>>>>>>>>>>>> 'readonly': 'false', 'domainID': 'f44afe8d-56f9-4e1e-beee-4daa548dbad8',
>>>>>>>>>>>>> 'optional': 'false', 'deviceId': '08288fcf-6b12-4bd1-84d3-259992e7aa6d',
>>>>>>>>>>>>> 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type':
>>>>>>>>>>>>> 'pci', 'function': '0x0'}, 'device': 'disk', 'shared': 'exclusive',
>>>>>>>>>>>>> 'propagateErrors': 'off', 'type': 'disk'}, {'device': 'scsi', 'model':
>>>>>>>>>>>>> 'virtio-scsi', 'type': 'controller'}, {'nicModel': 'pv', 'macAddr':
>>>>>>>>>>>>> '00:16:3e:65:a6:4e', 'linkActive': 'true', 'network': 'ovirtmgmt',
>>>>>>>>>>>>> 'specParams': {}, 'deviceId': '84b82c6c-bcca-4983-82d5-8d1e3ab3811a',
>>>>>>>>>>>>> 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type':
>>>>>>>>>>>>> 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface'},
>>>>>>>>>>>>> {'device': 'console', 'specParams': {}, 'type': 'console', 'deviceId':
>>>>>>>>>>>>> '6236af73-8dab-4d14-b950-fb4ad01d4420', 'alias': 'console0'},
>>>>>>>>>>>>> {'device': 'vga', 'alias': 'video0', 'type': 'video'}, {'device': 'virtio',
>>>>>>>>>>>>> 'specParams': {'source': 'random'}, 'model': 'virtio', 'type': 'rng'}]
>>>>>>>>>>>>>         guestDiskMapping = {}
>>>>>>>>>>>>>         vmType = kvm
>>>>>>>>>>>>>         clientIp =
>>>>>>>>>>>>>         displaySecurePort = -1
>>>>>>>>>>>>>         memSize = 4096
>>>>>>>>>>>>>         displayPort = -1
>>>>>>>>>>>>>         cpuType = Broadwell
>>>>>>>>>>>>>         spiceSecureChannels = smain,sdisplay,sinputs,scursor
>>>>>>>>>>>>> ,splayback,srecord,ssmartcard,susbredir
>>>>>>>>>>>>>         smp = 2
>>>>>>>>>>>>>         displayIp = 0
>>>>>>>>>>>>>         display = vnc
>>>>>>>>>>>>>         maxVCpus = 6
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> After that if I look again at the status of libvirtd I obtain:
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> ● libvirtd.service - Virtualization daemon
>>>>>>>>>>>>>    Loaded: loaded (/usr/lib/systemd/system/libvirtd.service;
>>>>>>>>>>>>> enabled; vendor preset: enabled)
>>>>>>>>>>>>>   Drop-In: /etc/systemd/system/libvirtd.service.d
>>>>>>>>>>>>>            └─unlimited-core.conf
>>>>>>>>>>>>>    Active: active (running) since lun 2017-03-06 01:25:05 WET;
>>>>>>>>>>>>> 5min ago
>>>>>>>>>>>>>      Docs: man:libvirtd(8)
>>>>>>>>>>>>>            http://libvirt.org
>>>>>>>>>>>>>  Main PID: 24350 (libvirtd)
>>>>>>>>>>>>>    CGroup: /system.slice/libvirtd.service
>>>>>>>>>>>>>            └─24350 /usr/sbin/libvirtd --listen
>>>>>>>>>>>>>
>>>>>>>>>>>>> mar 06 01:25:05 host1.bajada.es systemd[1]: Starting
>>>>>>>>>>>>> Virtualization daemon...
>>>>>>>>>>>>> mar 06 01:25:05 host1.bajada.es systemd[1]: Started
>>>>>>>>>>>>> Virtualization daemon.
>>>>>>>>>>>>> mar 06 01:29:39 host1.bajada.es libvirtd[24350]: libvirt
>>>>>>>>>>>>> version: 2.0.0, package: 10.el7_3.5 (CentOS BuildSystem <
>>>>>>>>>>>>> http://bugs.centos.org>, 2017-03-03-02:09:45,
>>>>>>>>>>>>> c1bm.rdu2.centos.org)
>>>>>>>>>>>>> mar 06 01:29:39 host1.bajada.es libvirtd[24350]: hostname:
>>>>>>>>>>>>> host1.bajada.es
>>>>>>>>>>>>> mar 06 01:29:39 host1.bajada.es libvirtd[24350]: Failed to
>>>>>>>>>>>>> connect to the monitor socket: No such process (Falló al
>>>>>>>>>>>>> conectar con el socket de monitor: No existe el proceso)
>>>>>>>>>>>>> mar 06 01:29:39 host1.bajada.es libvirtd[24350]: internal
>>>>>>>>>>>>> error: process exited while connecting to monitor: Could not access KVM
>>>>>>>>>>>>> kernel module: Permission denied
>>>>>>>>>>>>>                                                  failed to
>>>>>>>>>>>>> initialize KVM: Permission denied
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> So libvirtd is the problem; as I said, this is nothing new
>>>>>>>>>>>>> to you, of course...
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks again for any help
>>>>>>>>>>>>> Manuel
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> 2017-03-05 18:51 GMT+00:00 Manuel Luis Aznar <
>>>>>>>>>>>>> manuel.luis.aznar(a)gmail.com>:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hey there again,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Can you check if you have KVM modules loaded?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>     In order to check that, I ran the following command:
>>>>>>>>>>>>>> "lsmod | grep kvm"
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>     Result was:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>            kvm_intel              170181  0
>>>>>>>>>>>>>>            kvm                     554609  1 kvm_intel
>>>>>>>>>>>>>>            irqbypass               13503  1 kvm
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Also check the group owner of "/dev/kvm". I ran "ls -la
>>>>>>>>>>>>>> /dev/kvm". The result was:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>       crw-rw-rw-+ 1 root kvm 10, 232 mar  5 03:35 /dev/kvm
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I also checked whether there were some remaining packages
>>>>>>>>>>>>>> pending installation for KVM and QEMU, and I got:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>     yum install \*kvm\*
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The result is that the system needs to install the following:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Installing:
>>>>>>>>>>>>>>  centos-release-qemu-ev      noarch  1.0-1.el7              extras      11 k
>>>>>>>>>>>>>>  qemu-guest-agent            x86_64  10:2.5.0-3.el7         base       133 k
>>>>>>>>>>>>>>  qemu-kvm-ev-debuginfo       x86_64  10:2.6.0-28.el7_3.3.1  ovirt-4.0   12 M
>>>>>>>>>>>>>>  vdsm-hook-faqemu            noarch  4.18.21-1.el7.centos   ovirt-4.0   15 k
>>>>>>>>>>>>>>  vdsm-hook-qemucmdline       noarch  4.18.21-1.el7.centos   ovirt-4.0   11 k
>>>>>>>>>>>>>> Installing for dependencies:
>>>>>>>>>>>>>>  centos-release-virt-common  noarch  1-1.el7.centos         extras     4.5 k
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Checking the libvirtd service status I got:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ● libvirtd.service - Virtualization daemon
>>>>>>>>>>>>>>    Loaded: loaded (/usr/lib/systemd/system/libvirtd.service;
>>>>>>>>>>>>>> enabled; vendor preset: enabled)
>>>>>>>>>>>>>>   Drop-In: /etc/systemd/system/libvirtd.service.d
>>>>>>>>>>>>>>            └─unlimited-core.conf
>>>>>>>>>>>>>>    Active: active (running) since dom 2017-03-05 15:56:11
>>>>>>>>>>>>>> WET; 2h 51min ago
>>>>>>>>>>>>>>      Docs: man:libvirtd(8)
>>>>>>>>>>>>>>            http://libvirt.org
>>>>>>>>>>>>>>  Main PID: 19415 (libvirtd)
>>>>>>>>>>>>>>    CGroup: /system.slice/libvirtd.service
>>>>>>>>>>>>>>            └─19415 /usr/sbin/libvirtd --listen
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> mar 05 15:56:10 host1.bajada.es systemd[1]: Starting
>>>>>>>>>>>>>> Virtualization daemon...
>>>>>>>>>>>>>> mar 05 15:56:11 host1.bajada.es systemd[1]: Started
>>>>>>>>>>>>>> Virtualization daemon.
>>>>>>>>>>>>>> mar 05 16:00:04 host1.bajada.es libvirtd[19415]: libvirt
>>>>>>>>>>>>>> version: 2.0.0, package: 10.el7_3.5 (CentOS BuildSystem <
>>>>>>>>>>>>>> http://bugs.centos.org>, 2017-03-03-02:09:45,
>>>>>>>>>>>>>> c1bm.rdu2.centos.org)
>>>>>>>>>>>>>> mar 05 16:00:04 host1.bajada.es libvirtd[19415]: hostname:
>>>>>>>>>>>>>> host1.bajada.es
>>>>>>>>>>>>>> mar 05 16:00:04 host1.bajada.es libvirtd[19415]: Failed to
>>>>>>>>>>>>>> connect to the monitor socket: No such process
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>          (Falló al conectar con el socket de monitor: No existe el proceso)
>>>>>>>>>>>>>> mar 05 16:00:04 host1.bajada.es libvirtd[19415]: internal
>>>>>>>>>>>>>> error: process exited while connecting to monitor: Could not access KVM
>>>>>>>>>>>>>> kernel module: Permission denied
>>>>>>>>>>>>>>                                                  failed to
>>>>>>>>>>>>>> initialize KVM: Permission denied
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks for all in advance
>>>>>>>>>>>>>> I will be waiting for you. Any help appreciated
>>>>>>>>>>>>>> Manuel
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 2017-03-05 17:33 GMT+00:00 Artyom Lukianov <
>>>>>>>>>>>>>> alukiano(a)redhat.com>:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I found this one under the vdsm log:
>>>>>>>>>>>>>>> libvirtError: internal error: process exited while
>>>>>>>>>>>>>>> connecting to monitor: Could not access KVM kernel module: Permission denied
>>>>>>>>>>>>>>> failed to initialize KVM: Permission denied
>>>>>>>>>>>>>>> Thread-70::INFO::2017-03-05 16:00:04,325::vm::1330::virt.vm::(setDownStatus)
>>>>>>>>>>>>>>> vmId=`ed786811-0321-431e-be4b-2d03764c1b02`::Changed state
>>>>>>>>>>>>>>> to Down: internal error: process exited while connecting to monitor: Could
>>>>>>>>>>>>>>> not access KVM kernel module: Permission denied
>>>>>>>>>>>>>>> failed to initialize KVM: Permission denied (code=1)
>>>>>>>>>>>>>>> Thread-70::INFO::2017-03-05 16:00:04,325::guestagent::430::virt.vm::(stop)
>>>>>>>>>>>>>>> vmId=`ed786811-0321-431e-be4b-2d03764c1b02`::Stopping
>>>>>>>>>>>>>>> connection
>>>>>>>>>>>>>>> Thread-70::DEBUG::2017-03-05 16:00:04,325::vmchannels::238::vds::(unregister)
>>>>>>>>>>>>>>> Delete fileno 52 from listener.
>>>>>>>>>>>>>>> Thread-70::DEBUG::2017-03-05 16:00:04,325::vmchannels::66::vds::(_unregister_fd)
>>>>>>>>>>>>>>> Failed to unregister FD from epoll (ENOENT): 52
>>>>>>>>>>>>>>> Thread-70::DEBUG::2017-03-05 16:00:04,326::__init__::209::jsonrpc.Notification::(emit)
>>>>>>>>>>>>>>> Sending event {"params": {"ed786811-0321-431e-be4b-2d03764c1b02":
>>>>>>>>>>>>>>> {"status": "Down", "exitReason": 1, "exitMessage": "internal error: process
>>>>>>>>>>>>>>> exited while connecting to monitor: Could not access KVM kernel module:
>>>>>>>>>>>>>>> Permission denied\nfailed to initialize KVM: Permission denied",
>>>>>>>>>>>>>>> "exitCode": 1}, "notify_time": 4339924730}, "jsonrpc": "2.0", "method":
>>>>>>>>>>>>>>> "|virt|VM_status|ed786811-0321-431e-be4b-2d03764c1b02"}
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Can you check if you have KVM modules loaded? Also, check
>>>>>>>>>>>>>>> group owner for "/dev/kvm".
>>>>>>>>>>>>>>> Best Regards
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Sat, Mar 4, 2017 at 4:24 PM, Manuel Luis Aznar <
>>>>>>>>>>>>>>> manuel.luis.aznar(a)gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hello there again,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The error in the first email was with the repo
>>>>>>>>>>>>>>>> ovirt-release41.rpm (http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm),
>>>>>>>>>>>>>>>> so, as I was getting the same error again and again, I am
>>>>>>>>>>>>>>>> currently trying ovirt-release41-snapshot.rpm
>>>>>>>>>>>>>>>> (http://resources.ovirt.org/pub/yum-repo/ovirt-release41-snapshot.rpm),
>>>>>>>>>>>>>>>> and the result is nearly the same.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> After creating the VM during the installation, I got the same
>>>>>>>>>>>>>>>> error with the command "systemctl status vdsmd":
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> mar 04 14:10:19 host1.bajada.es vdsm[20443]: vdsm root
>>>>>>>>>>>>>>>> ERROR failed to retrieve Hosted Engine HA info
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>         Traceback (most recent call last):
>>>>>>>>>>>>>>>>            File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in _getHaInfo
>>>>>>>>>>>>>>>>               stats = instance.get_all_stats()
>>>>>>>>>>>>>>>>            File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 102, in get_all_stats
>>>>>>>>>>>>>>>>               with broker.connection(self._retries, self._wait):
>>>>>>>>>>>>>>>>            File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
>>>>>>>>>>>>>>>>               return self.gen.next()
>>>>>>>>>>>>>>>>            File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 99, in connection
>>>>>>>>>>>>>>>>               self.connect(retries, wait)
>>>>>>>>>>>>>>>>            File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 78, in connect
>>>>>>>>>>>>>>>>               raise BrokerConnectionError(error_msg)
>>>>>>>>>>>>>>>>          BrokerConnectionError: Failed to connect to
>>>>>>>>>>>>>>>> broker, the number of errors has exceeded the limit (1)
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> mar 04 14:10:34 host1.bajada.es vdsm[20443]: vdsm
>>>>>>>>>>>>>>>> ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink ERROR
>>>>>>>>>>>>>>>> Failed to connect to broker, the number of errors has exceeded the limit (1)
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I have noticed that the ovirt-ha-agent and ovirt-ha-broker
>>>>>>>>>>>>>>>> services were not running. I wonder whether this has something
>>>>>>>>>>>>>>>> to do with the error in the vdsmd service log.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> But in this case the ovirt-hosted-engine installation prints
>>>>>>>>>>>>>>>> the VNC connection details and I can connect to the engine VM.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Thanks for all in advance
>>>>>>>>>>>>>>>> Any help would be appreciated
>>>>>>>>>>>>>>>> Manuel Luis Aznar
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> 2017-03-03 21:48 GMT+00:00 Manuel Luis Aznar <
>>>>>>>>>>>>>>>> manuel.luis.aznar(a)gmail.com>:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Hello there,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I am having some trouble deploying an oVirt 4.1
>>>>>>>>>>>>>>>>> hosted-engine installation.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> When I am about to finish the installation and the
>>>>>>>>>>>>>>>>> hosted-engine setup script is about to start the engine VM
>>>>>>>>>>>>>>>>> (appliance), it fails, saying "The VM is not powering up".
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> If I double-check the vdsmd service I get this error all
>>>>>>>>>>>>>>>>> the time:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> vdsm root ERROR failed to retrieve Hosted Engine HA info
>>>>>>>>>>>>>>>>>  Traceback (most recent call last):
>>>>>>>>>>>>>>>>>      File "/usr/lib/python2.7/site-packages/vdsm/host/api.py", line 231, in _getHaInfo
>>>>>>>>>>>>>>>>>          stats = instance.get_all_stats()
>>>>>>>>>>>>>>>>>      File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/client/client.py", line 102, in get_all_stats
>>>>>>>>>>>>>>>>>          with broker.connection(self._retries, self._wait):
>>>>>>>>>>>>>>>>>      File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
>>>>>>>>>>>>>>>>>          return self.gen.next()
>>>>>>>>>>>>>>>>>      File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 99, in connection
>>>>>>>>>>>>>>>>>          self.connect(retries, wait)
>>>>>>>>>>>>>>>>>      File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 78, in connect
>>>>>>>>>>>>>>>>>          raise BrokerConnectionError(error_msg)
>>>>>>>>>>>>>>>>> BrokerConnectionError: Failed to connect to broker, the
>>>>>>>>>>>>>>>>> number of errors has exceeded the limit (1)
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Has anyone experienced the same problem? Any hint on how
>>>>>>>>>>>>>>>>> to solve it? I have tried several times with clean
>>>>>>>>>>>>>>>>> installations and always get the same result...
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> The host where I am trying to do the installation runs
>>>>>>>>>>>>>>>>> CentOS 7...
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Thanks for all in advance
>>>>>>>>>>>>>>>>> Will be waiting for any hint to see what I am doing
>>>>>>>>>>>>>>>>> wrong...
>>>>>>>>>>>>>>>>> Manuel Luis Aznar
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
                    
                        When I export a user I find values like:
  <department></department>
  <domain_entry_id>39323336363566612D373333622D346532612D396530632D316630396536643634636432</domain_entry_id>
  <email></email>
  <last_name></last_name>
  <name>admin</name>
  <namespace>*</namespace>
  <principal>admin</principal>
  <user_name>admin@internal-authz</user_name>
  <domain href="/ovirt-engine/api/domains/696E7465726E616C2D617574687A" id="696E7465726E616C2D617574687A">
    <name>internal-authz</name>
  </domain>
  <permissions href="/ovirt-engine/api/users/0000001a-001a-001a-001a-0000000003b4/permissions"/>
  <roles href="/ovirt-engine/api/users/0000001a-001a-001a-001a-0000000003b4/roles"/>
  <ssh_public_keys href="/ovirt-engine/api/users/0000001a-001a-001a-001a-0000000003b4/sshpublickeys"/>
  <tags href="/ovirt-engine/api/users/0000001a-001a-001a-001a-0000000003b4/tags"/>
They are the same as the ones defined by the type in the SDK (http://ovirt.github.io/ovirt-engine-sdk/master/types.m.html#ovirtsdk4.types…).
If I look at http://www.ovirt.org/documentation/admin-guide/appe-Using_Search_Bookmarks_…, I see fields like pool and group that I can't map to fields in the type.
In the search bar in the UI, I also see fields like login, directory or type. The mapping is less obvious, even if I can guess that login maps to principal.
But I wonder why such a naming discrepancy exists and whether it is documented somewhere.
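
A minimal sketch of inspecting those fields with SDK4, assuming placeholder connection details; it prints each user's user_name and principal so they can be compared against what the search bar accepts:

    import ovirtsdk4 as sdk

    # Placeholder URL and credentials.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        insecure=True,
    )

    users_service = connection.system_service().users_service()
    for user in users_service.list():
        # user_name carries the authz suffix; principal is the bare login.
        print(user.user_name, user.principal)

    connection.close()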
                    
                    
I'm using Kerberos authentication in oVirt for the URL /sso/oauth/token-http-auth, but Kerberos is done in Apache using auth_gssapi_module and it's quite slow, about 6 s per request.
I'm trying to understand whether it's Apache or ovirt-engine that is slow. Is there a way to measure response times for HTTP requests inside oVirt, rather than as seen from Apache?
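
A minimal sketch of timing the endpoint end-to-end from a client, assuming the requests and requests-kerberos packages, a valid Kerberos ticket, and a placeholder engine host (the real endpoint also expects the SSO grant parameters); comparing this figure with Apache's %D (request duration) LogFormat field can help isolate where the time goes:

    import time
    import requests
    from requests_kerberos import HTTPKerberosAuth  # assumes a kinit'ed session

    # Placeholder URL; substitute your engine host.
    url = 'https://engine.example.com/ovirt-engine/sso/oauth/token-http-auth'

    start = time.time()
    resp = requests.get(url, auth=HTTPKerberosAuth(), verify=False)
    # Total wall-clock time, including the GSSAPI negotiation round trip.
    print(resp.status_code, time.time() - start)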
                    
Hi,
Just updated our oVirt hosts last week from 4.0.6 to 4.1.1, and I noticed
high ovs-vswitchd CPU usage, around 100%.
My cluster switch type is in legacy mode, so I decided to stop the
ovs-vswitchd process on each oVirt host.
This morning, when trying to run a new VM, I got the following error
message:
Exit message: (21, 'Executing commands failed: ovs-vsctl:
unix:/var/run/openvswitch/db.sock: database connection failed (No such
file or directory)')
Even if the cluster is in legacy mode, it seems that ovs-vswitchd needs
to be up and running for vdsm.
So, how can I reduce the CPU usage of the ovs-vswitchd process?
Regards,
Arnaud
                    
                    12 May '17
                    
The oVirt Project is pleased to announce the availability of the Second
Release Candidate of oVirt 4.1.2 for testing, as of May 12th, 2017.
This is pre-release software. Please take a look at our community page[1]
to learn how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.
This update is the second release candidate of the second in a series of
stabilization updates to the 4.1 series.
4.1.2 brings more than 25 enhancements and more than 220 bugfixes,
including more than 90 high or urgent severity fixes, on top of the
oVirt 4.1 series.
This release is available now for:
* Fedora 24 (tech preview)
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* oVirt Node 4.1
* Fedora 24 (tech preview)
See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Live has been already built [4]
- oVirt Node has been already built [4]
Additional Resources:
* Read more about the oVirt 4.1.2 release highlights:
http://www.ovirt.org/release/4.1.2/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.1.2/
[4] http://resources.ovirt.org/pub/ovirt-4.1-pre/iso/
-- 
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
                    
                  
                    
Hi,
Using Python SDK4, is there a way to shut down a machine with a specific
message?
In the code I just see this definition:
     def shutdown(
         self,
         async=None,
         headers=None,
         query=None,
     ):
I wonder if some header allows specifying the message here.
Thanks.
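
For reference, a minimal sketch of invoking shutdown through the VM service, assuming placeholder connection details and a hypothetical VM name; nothing in the signature above exposes a message parameter, so this only shows the plain call:

    import ovirtsdk4 as sdk

    # Placeholder URL and credentials.
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',
        insecure=True,
    )

    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=myvm')[0]  # hypothetical VM name
    vm_service = vms_service.vm_service(vm.id)
    vm_service.shutdown()  # no documented way to attach a message here
    connection.close()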
                    
                  
                    
Hi,
I am running an oVirt engine and an oVirt host on a single machine for
testing purposes. I am trying to convert a Debian VirtualBox VM, created
using Vagrant, and move it into the oVirt engine as a template.
I tried to convert it using virt-v2v as described at
http://libguestfs.org/virt-v2v.1.html. However, I get an error saying the
Debian/Linux guest cannot be converted.
I have exported the VM from VirtualBox and have the image as an .ova. Is
there any way to migrate it?
Thanks
                    
                  
                    
Hi,
    First of all, sorry for the rookie question. Because of my network
setup, my compute nodes (CentOS 7.3) are in a network segment without
IPv6 routing; however, IPv6 is enabled on the hosts (just the default
setup in CentOS 7).
    This leads to problems when I try to install or update packages from
the repos, since some of them are already on IPv6 and the compute nodes
try to access them over IPv6 and fail.
    I am considering disabling IPv6 on the compute nodes. Right now I am
using legacy (Linux bridge) networking. Will disabling IPv6 on the
compute nodes prevent using IPv6 on the guest VMs?
    Thanks!
--
Eduardo Mayoral Jimeno (emayoral(a)arsys.es)
Systems administrator. Platforms department. Arsys Internet.
+34 941 620 145 ext. 5153
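
One possible middle ground, as a sketch rather than a tested recipe: force yum to resolve repository hosts over IPv4 only, instead of disabling IPv6 host-wide. The ip_resolve option in /etc/yum.conf does this:

    # /etc/yum.conf -- prefer IPv4 when contacting repositories
    [main]
    ip_resolve=4

Disabling IPv6 via sysctl (net.ipv6.conf.all.disable_ipv6 = 1) affects the host's own stack; guests attached through a Linux bridge run their own IPv6 stacks and their frames are forwarded at layer 2, so they should generally be unaffected.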
                    
                  
                    11 May '17
                    
After rebooting the manager VM, hosts are connecting/non responsive and data domains inactive. Here are the engine and vdsmd logs. Any ideas?
Engine logs:
2017-05-11 17:28:09,302 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) [55f1aab5] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Failed to verify Power Management configuration for Host rhvserv-05.
2017-05-11 17:28:09,346 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (DefaultQuartzScheduler5) [48bc69cd] Running command: HandleVdsVersionCommand internal: true. Entities affected : ID: 04565f10-9abf-4709-9445-9dc6ed97e136 Type: VDS
2017-05-11 17:28:09,349 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (org.ovirt.thread.pool-6-thread-27) [639977e4] Host 'rhvserv-05' is not responding.
2017-05-11 17:28:09,364 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-27) [639977e4] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Host rhvserv-05 is not responding. Host cannot be fenced automatically because power management for the host is disabled.
2017-05-11 17:28:11,299 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (DefaultQuartzScheduler3) [c0e6a2e] Command 'GetCapabilitiesVDSCommand(HostName = rhvserv-03, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', hostId='4036f027-8e90-49c0-8ca5-3ddb8d586916', vds='Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' execution failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection failed
2017-05-11 17:28:11,299 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (DefaultQuartzScheduler3) [c0e6a2e] Failure to refresh host 'rhvserv-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection failed
2017-05-11 17:28:11,327 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.168.93.214
2017-05-11 17:28:12,484 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler3) [c0e6a2e] START, GetHardwareInfoVDSCommand(HostName = rhvserv-05, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', hostId='04565f10-9abf-4709-9445-9dc6ed97e136', vds='Host[rhvserv-05,04565f10-9abf-4709-9445-9dc6ed97e136]'}), log id: f807ece
2017-05-11 17:28:12,487 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler3) [c0e6a2e] FINISH, GetHardwareInfoVDSCommand, log id: f807ece
2017-05-11 17:28:12,532 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (DefaultQuartzScheduler3) [4e882ea0] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: 04565f10-9abf-4709-9445-9dc6ed97e136 Type: VDS
2017-05-11 17:28:12,539 INFO [org.ovirt.engine.core.bll.InitVdsOnUpCommand] (DefaultQuartzScheduler3) [75f25b35] Running command: InitVdsOnUpCommand internal: true. Entities affected : ID: 58f8df36-019f-02bc-00e7-000000000023 Type: StoragePool
2017-05-11 17:28:12,545 INFO [org.ovirt.engine.core.bll.storage.pool.ConnectHostToStoragePoolServersCommand] (DefaultQuartzScheduler3) [46cc3f58] Running command: ConnectHostToStoragePoolServersCommand internal: true. Entities affected : ID: 58f8df36-019f-02bc-00e7-000000000023 Type: StoragePool
2017-05-11 17:28:12,556 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (DefaultQuartzScheduler3) [46cc3f58] START, ConnectStorageServerVDSCommand(HostName = rhvserv-05, StorageServerConnectionManagementVDSParameters:{runAsync='true', hostId='04565f10-9abf-4709-9445-9dc6ed97e136', storagePoolId='58f8df36-019f-02bc-00e7-000000000023', storageType='ISCSI', connectionList='[StorageServerConnections:{id='10c0528b-f08d-4d1d-8c63-8a05fd9d58b9', connection='10.35.21.1', iqn='iqn.1984-05.com.dell:powervault.md3200i.6782bcb00073e332000000004edde164', vfsType='null', mountOptions='null', nfsVersion='null', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 1beb27b6
2017-05-11 17:28:13,031 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (DefaultQuartzScheduler3) [46cc3f58] FINISH, ConnectStorageServerVDSCommand, return: {10c0528b-f08d-4d1d-8c63-8a05fd9d58b9=0}, log id: 1beb27b6
2017-05-11 17:28:13,032 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (DefaultQuartzScheduler3) [46cc3f58] START, ConnectStorageServerVDSCommand(HostName = rhvserv-05, StorageServerConnectionManagementVDSParameters:{runAsync='true', hostId='04565f10-9abf-4709-9445-9dc6ed97e136', storagePoolId='58f8df36-019f-02bc-00e7-000000000023', storageType='NFS', connectionList='[StorageServerConnections:{id='e604d0d2-0810-4c25-b9ed-610f9923cb1a', connection='nfsserv-01:/nfs/export', iqn='null', vfsType='null', mountOptions='null', nfsVersion='V3', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}, StorageServerConnections:{id='2c1aba4b-786b-4c76-a02f-d9e09e8afff4', connection='nfsserv-01:/nfs/iso', iqn='null', vfsType='null', mountOptions='null', nfsVersion='V3', nfsRetrans='null', nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 15c54cd7
2017-05-11 17:28:14,301 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.168.93.213
2017-05-11 17:28:14,304 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (DefaultQuartzScheduler1) [62006eaa] Command 'GetCapabilitiesVDSCommand(HostName = rhvserv-03, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', hostId='4036f027-8e90-49c0-8ca5-3ddb8d586916', vds='Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' execution failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection failed
2017-05-11 17:28:14,304 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (DefaultQuartzScheduler1) [62006eaa] Failure to refresh host 'rhvserv-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection failed
2017-05-11 17:28:14,334 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (DefaultQuartzScheduler7) [1efe65a7] Command 'GetCapabilitiesVDSCommand(HostName = rhvserv-04, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', hostId='0d0cd690-b64a-42bb-a167-43fcedd634e4', vds='Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' execution failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection failed
2017-05-11 17:28:14,334 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (DefaultQuartzScheduler7) [1efe65a7] Failure to refresh host 'rhvserv-04' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection failed
2017-05-11 17:28:17,308 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.168.93.213
2017-05-11 17:28:17,336 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.168.93.214
2017-05-11 17:28:17,340 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge
tCapabilitiesVDSCommand] (DefaultQuartzScheduler4) [] Command 'GetCapabilit=
iesVDSCommand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParametersBase:=
{runAsync=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4', vds=3D=
'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' execution failed=
: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection faile=
d=20
2017-05-11 17:28:17,340 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler4) [] Failure to refresh host 'rhvser=
v-04' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException=
: Connection failed=20
2017-05-11 17:28:20,315 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler1) [62006eaa] Command 'GetC=
apabilitiesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParamet=
ersBase:{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916'=
, vds=3D'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' executio=
n failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connecti=
on failed=20
2017-05-11 17:28:20,315 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler1) [62006eaa] Failure to refresh host=
 'rhvserv-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionE=
xception: Connection failed=20
2017-05-11 17:28:20,343 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.1=
68.93.214=20
2017-05-11 17:28:23,316 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.1=
68.93.213=20
2017-05-11 17:28:23,325 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler4) [] Command 'GetCapabilit=
iesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParametersBase:=
{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916', vds=3D=
'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' execution failed=
: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection faile=
d=20
2017-05-11 17:28:23,325 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler4) [] Failure to refresh host 'rhvser=
v-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException=
: Connection failed=20
2017-05-11 17:28:23,348 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler7) [1efe65a7] Command 'GetC=
apabilitiesVDSCommand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParamet=
ersBase:{runAsync=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4'=
, vds=3D'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' executio=
n failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connecti=
on failed=20
2017-05-11 17:28:23,348 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler7) [1efe65a7] Failure to refresh host=
 'rhvserv-04' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionE=
xception: Connection failed=20
2017-05-11 17:28:26,329 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.1=
68.93.213=20
2017-05-11 17:28:26,350 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.1=
68.93.214=20
2017-05-11 17:28:26,354 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler7) [1efe65a7] Command 'GetC=
apabilitiesVDSCommand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParamet=
ersBase:{runAsync=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4'=
, vds=3D'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' executio=
n failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connecti=
on failed=20
2017-05-11 17:28:26,354 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler7) [1efe65a7] Failure to refresh host=
 'rhvserv-04' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionE=
xception: Connection failed=20
2017-05-11 17:28:29,335 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler4) [] Command 'GetCapabilit=
iesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParametersBase:=
{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916', vds=3D=
'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' execution failed=
: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection faile=
d=20
2017-05-11 17:28:29,335 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler4) [] Failure to refresh host 'rhvser=
v-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException=
: Connection failed=20
2017-05-11 17:28:29,356 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.1=
68.93.214=20
2017-05-11 17:28:32,337 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.1=
68.93.213=20
2017-05-11 17:28:32,340 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler10) [] Command 'GetCapabili=
tiesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParametersBase=
:{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916', vds=
=3D'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' execution fai=
led: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection fa=
iled=20
2017-05-11 17:28:32,340 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler10) [] Failure to refresh host 'rhvse=
rv-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionExceptio=
n: Connection failed=20
2017-05-11 17:28:32,362 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler9) [4cf3eed0] Command 'GetC=
apabilitiesVDSCommand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParamet=
ersBase:{runAsync=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4'=
, vds=3D'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' executio=
n failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connecti=
on failed=20
2017-05-11 17:28:32,362 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler9) [4cf3eed0] Failure to refresh host=
 'rhvserv-04' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionE=
xception: Connection failed=20
2017-05-11 17:28:35,342 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.1=
68.93.213=20
2017-05-11 17:28:35,370 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.1=
68.93.214=20
2017-05-11 17:28:38,349 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler7) [1efe65a7] Command 'GetC=
apabilitiesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParamet=
ersBase:{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916'=
, vds=3D'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' executio=
n failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connecti=
on failed=20
2017-05-11 17:28:38,349 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler7) [1efe65a7] Failure to refresh host=
 'rhvserv-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionE=
xception: Connection failed=20
2017-05-11 17:28:38,374 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler4) [] Command 'GetCapabilit=
iesVDSCommand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParametersBase:=
{runAsync=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4', vds=3D=
'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' execution failed=
: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection faile=
d=20
2017-05-11 17:28:38,375 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler4) [] Failure to refresh host 'rhvser=
v-04' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException=
: Connection failed=20
2017-05-11 17:28:41,352 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.1=
68.93.213=20
2017-05-11 17:28:41,354 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler9) [4cf3eed0] Command 'GetC=
apabilitiesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParamet=
ersBase:{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916'=
, vds=3D'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' executio=
n failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connecti=
on failed=20
2017-05-11 17:28:41,354 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler9) [4cf3eed0] Failure to refresh host=
 'rhvserv-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionE=
xception: Connection failed=20
2017-05-11 17:28:41,377 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.1=
68.93.214=20
2017-05-11 17:28:41,380 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler7) [1efe65a7] Command 'GetC=
apabilitiesVDSCommand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParamet=
ersBase:{runAsync=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4'=
, vds=3D'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' executio=
n failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connecti=
on failed=20
2017-05-11 17:28:41,380 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler7) [1efe65a7] Failure to refresh host=
 'rhvserv-04' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionE=
xception: Connection failed=20
2017-05-11 17:28:44,358 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.1=
68.93.213=20
2017-05-11 17:28:44,382 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.1=
68.93.214=20
2017-05-11 17:28:47,362 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler1) [62006eaa] Command 'GetC=
apabilitiesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParamet=
ersBase:{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916'=
, vds=3D'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' executio=
n failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connecti=
on failed=20
2017-05-11 17:28:47,363 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler1) [62006eaa] Failure to refresh host=
 'rhvserv-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionE=
xception: Connection failed=20
2017-05-11 17:28:47,388 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler7) [1efe65a7] Command 'GetC=
apabilitiesVDSCommand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParamet=
ersBase:{runAsync=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4'=
, vds=3D'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' executio=
n failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connecti=
on failed=20
2017-05-11 17:28:47,388 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler7) [1efe65a7] Failure to refresh host=
 'rhvserv-04' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionE=
xception: Connection failed=20
2017-05-11 17:28:50,365 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.1=
68.93.213=20
2017-05-11 17:28:50,368 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler5) [48bc69cd] Command 'GetC=
apabilitiesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParamet=
ersBase:{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916'=
, vds=3D'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' executio=
n failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connecti=
on failed=20
2017-05-11 17:28:50,368 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler5) [48bc69cd] Failure to refresh host=
 'rhvserv-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionE=
xception: Connection failed=20
2017-05-11 17:28:50,390 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.1=
68.93.214=20
2017-05-11 17:28:50,394 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler2) [590af365] Command 'GetC=
apabilitiesVDSCommand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParamet=
ersBase:{runAsync=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4'=
, vds=3D'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' executio=
n failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connecti=
on failed=20
2017-05-11 17:28:50,394 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler2) [590af365] Failure to refresh host=
 'rhvserv-04' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionE=
xception: Connection failed=20
2017-05-11 17:28:53,371 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.1=
68.93.213=20
2017-05-11 17:28:53,396 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.1=
68.93.214=20
2017-05-11 17:28:56,379 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler10) [] Command 'GetCapabili=
tiesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParametersBase=
:{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916', vds=
=3D'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' execution fai=
led: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection fa=
iled=20
2017-05-11 17:28:56,379 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler10) [] Failure to refresh host 'rhvse=
rv-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionExceptio=
n: Connection failed=20
2017-05-11 17:28:56,402 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler5) [48bc69cd] Command 'GetC=
apabilitiesVDSCommand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParamet=
ersBase:{runAsync=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4'=
, vds=3D'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' executio=
n failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connecti=
on failed=20
2017-05-11 17:28:56,403 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler5) [48bc69cd] Failure to refresh host=
 'rhvserv-04' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionE=
xception: Connection failed=20
2017-05-11 17:28:56,488 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tAllVmStatsVDSCommand] (DefaultQuartzScheduler6) [] Command 'GetAllVmStatsV=
DSCommand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParametersBase:{run=
Async=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4', vds=3D'Hos=
t[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' execution failed: VD=
SGenericException: VDSNetworkException: Vds timeout occured=20
2017-05-11 17:28:56,488 INFO [org.ovirt.engine.core.vdsbroker.monitoring.Po=
llVmStatsRefresher] (DefaultQuartzScheduler6) [] Failed to fetch vms info f=
or host 'rhvserv-04' - skipping VMs monitoring.=20
2017-05-11 17:28:59,382 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.1=
68.93.213=20
2017-05-11 17:28:59,384 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler10) [] Command 'GetCapabili=
tiesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParametersBase=
:{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916', vds=
=3D'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' execution fai=
led: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection fa=
iled=20
2017-05-11 17:28:59,384 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler10) [] Failure to refresh host 'rhvse=
rv-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionExceptio=
n: Connection failed=20
2017-05-11 17:28:59,404 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.1=
68.93.214=20
2017-05-11 17:28:59,408 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler6) [] Command 'GetCapabilit=
iesVDSCommand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParametersBase:=
{runAsync=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4', vds=3D=
'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' execution failed=
: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection faile=
d=20
2017-05-11 17:28:59,408 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler6) [] Failure to refresh host 'rhvser=
v-04' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException=
: Connection failed=20
2017-05-11 17:29:02,386 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.1=
68.93.213=20
2017-05-11 17:29:02,409 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.1=
68.93.214=20
2017-05-11 17:29:02,490 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tAllVmStatsVDSCommand] (DefaultQuartzScheduler8) [56fb4c82] Command 'GetAll=
VmStatsVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParametersB=
ase:{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916', vd=
s=3D'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' execution fa=
iled: VDSGenericException: VDSNetworkException: Vds timeout occured=20
2017-05-11 17:29:02,490 INFO [org.ovirt.engine.core.vdsbroker.monitoring.Po=
llVmStatsRefresher] (DefaultQuartzScheduler8) [56fb4c82] Failed to fetch vm=
s info for host 'rhvserv-03' - skipping VMs monitoring.=20
2017-05-11 17:29:05,393 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler4) [] Command 'GetCapabilit=
iesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParametersBase:=
{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916', vds=3D=
'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' execution failed=
: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection faile=
d=20
2017-05-11 17:29:05,393 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler4) [] Failure to refresh host 'rhvser=
v-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException=
: Connection failed=20
2017-05-11 17:29:05,415 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler10) [] Command 'GetCapabili=
tiesVDSCommand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParametersBase=
:{runAsync=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4', vds=
=3D'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' execution fai=
led: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection fa=
iled=20
2017-05-11 17:29:05,415 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler10) [] Failure to refresh host 'rhvse=
rv-04' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionExceptio=
n: Connection failed=20
2017-05-11 17:29:08,395 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.1=
68.93.213=20
2017-05-11 17:29:08,398 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler8) [56fb4c82] Command 'GetC=
apabilitiesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParamet=
ersBase:{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916'=
, vds=3D'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' executio=
n failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connecti=
on failed=20
2017-05-11 17:29:08,398 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler8) [56fb4c82] Failure to refresh host=
 'rhvserv-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionE=
xception: Connection failed=20
2017-05-11 17:29:08,417 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.1=
68.93.214=20
2017-05-11 17:29:08,420 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler2) [590af365] Command 'GetC=
apabilitiesVDSCommand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParamet=
ersBase:{runAsync=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4'=
, vds=3D'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' executio=
n failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connecti=
on failed=20
2017-05-11 17:29:08,420 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler2) [590af365] Failure to refresh host=
 'rhvserv-04' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionE=
xception: Connection failed=20
2017-05-11 17:29:11,402 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.1=
68.93.213=20
2017-05-11 17:29:11,423 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.1=
68.93.214=20
2017-05-11 17:29:14,409 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler4) [] Command 'GetCapabilit=
iesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParametersBase:=
{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916', vds=3D=
'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' execution failed=
: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection faile=
d=20
2017-05-11 17:29:14,409 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler4) [] Failure to refresh host 'rhvser=
v-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException=
: Connection failed=20
2017-05-11 17:29:14,430 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler5) [48bc69cd] Command 'GetC=
apabilitiesVDSCommand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParamet=
ersBase:{runAsync=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4'=
, vds=3D'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' executio=
n failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connecti=
on failed=20
2017-05-11 17:29:14,430 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler5) [48bc69cd] Failure to refresh host=
 'rhvserv-04' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionE=
xception: Connection failed=20
2017-05-11 17:29:17,411 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.1=
68.93.213=20
2017-05-11 17:29:17,414 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler6) [] Command 'GetCapabilit=
iesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParametersBase:=
{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916', vds=3D=
'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' execution failed=
: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection faile=
d=20
2017-05-11 17:29:17,414 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler6) [] Failure to refresh host 'rhvser=
v-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException=
: Connection failed=20
2017-05-11 17:29:17,432 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.1=
68.93.214=20
2017-05-11 17:29:17,436 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.Ge=
tCapabilitiesVDSCommand] (DefaultQuartzScheduler4) [] Command 'GetCapabilit=
iesVDSCommand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParametersBase:=
{runAsync=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4', vds=3D=
'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' execution failed=
: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection faile=
d=20
2017-05-11 17:29:17,436 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.H=
ostMonitoring] (DefaultQuartzScheduler4) [] Failure to refresh host 'rhvser=
v-04' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException=
: Connection failed=20
2017-05-11 17:29:17,491 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.Reacto=
rClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.1=
68.93.213=20
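The engine is stuck in a retry loop here: it cannot open the vdsm channel to rhvserv-03 or rhvserv-04 at all, only to rhvserv-05. As a first check, a minimal sketch (assuming the default vdsm management port 54321 and the hostnames from the log above) to test raw TCP reachability of vdsmd from the engine machine:

import socket

# Hostnames taken from the engine log above; 54321 is the default
# vdsm management port.
HOSTS = ["rhvserv-03.mydomain.com", "rhvserv-04.mydomain.com"]
VDSM_PORT = 54321

for host in HOSTS:
    try:
        # Plain TCP connect only; no TLS handshake is attempted.
        sock = socket.create_connection((host, VDSM_PORT), timeout=5)
        sock.close()
        print("%s:%d reachable" % (host, VDSM_PORT))
    except (socket.error, socket.timeout) as exc:
        # A refused or timed-out connect here matches the repeated
        # ClientConnectionException in the engine log.
        print("%s:%d NOT reachable: %s" % (host, VDSM_PORT, exc))

If the connect succeeds but the engine still logs ClientConnectionException, the problem is more likely at the TLS/certificate layer (or vdsmd itself) than in the network path.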
vdsmd logs :

Thread-113::ERROR::2017-05-11 17:41:33,734::sdc::146::Storage.StorageDomainCache::(_findDomain) domain 5b978dda-d1ef-46fe-9996-20aee42cf303 not found
Traceback (most recent call last):
File "/usr/share/vdsm/storage/sdc.py", line 144, in _findDomain
dom = findMethod(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 174, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'5b978dda-d1ef-46fe-9996-20aee42cf303',)
Thread-113::ERROR::2017-05-11 17:41:33,735::monitor::328::Storage.Monitor::(_setupLoop) Setting up monitor for 5b978dda-d1ef-46fe-9996-20aee42cf303 failed
Traceback (most recent call last):
File "/usr/share/vdsm/storage/monitor.py", line 325, in _setupLoop
self._setupMonitor()
File "/usr/share/vdsm/storage/monitor.py", line 348, in _setupMonitor
self._produceDomain()
File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 405, in wrapper
value = meth(self, *a, **kw)
File "/usr/share/vdsm/storage/monitor.py", line 366, in _produceDomain
self.domain = sdCache.produce(self.sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 101, in produce
domain.getRealDomain()
File "/usr/share/vdsm/storage/sdc.py", line 53, in getRealDomain
return self._cache._realProduce(self._sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 125, in _realProduce
domain = self._findDomain(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 144, in _findDomain
dom = findMethod(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 174, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'5b978dda-d1ef-46fe-9996-20aee42cf303',)
Thread-112::DEBUG::2017-05-11 17:41:33,783::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
Thread-112::INFO::2017-05-11 17:41:33,783::sd::604::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace edd229cb-b72f-4988-8c10-d83c84ef4a8a_imageNS already registered
Thread-112::INFO::2017-05-11 17:41:33,783::sd::612::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace edd229cb-b72f-4988-8c10-d83c84ef4a8a_volumeNS already registered
Thread-112::INFO::2017-05-11 17:41:33,783::blockSD::846::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace edd229cb-b72f-4988-8c10-d83c84ef4a8a_lvmActivationNS already registered
Thread-112::DEBUG::2017-05-11 17:41:33,784::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-15 /usr/bin/sudo -n /usr/sbin/lvm vgck --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/36782bcb00073e33200002d3858fa1a81|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' edd229cb-b72f-4988-8c10-d83c84ef4a8a (cwd None)
jsonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,792::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
jsonrpc.Executor/6::INFO::2017-05-11 17:41:33,792::logUtils::52::dispatcher::(wrapper) Run and protect: getStorageDomainStats, Return response: {'stats': {'mdasize': '134217728', 'mdathreshold': True, 'mdavalid': True, 'diskfree': '498484641792', 'disktotal': '557943095296', 'mdafree': '67102208'}}
jsonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,793::task::1193::Storage.TaskManager.Task::(prepare) Task=`234cbec4-a422-47c1-a50a-243526aeddc7`::finished: {'stats': {'mdasize': '134217728', 'mdathreshold': True, 'mdavalid': True, 'diskfree': '498484641792', 'disktotal': '557943095296', 'mdafree': '67102208'}}
jsonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,793::task::597::Storage.TaskManager.Task::(_updateState) Task=`234cbec4-a422-47c1-a50a-243526aeddc7`::moving from state preparing -> state finished
jsonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,793::resourceManager::952::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {u'Storage.595de2cf-89ba-407b-aae5-d0a7a0656ba1': < ResourceRef 'Storage.595de2cf-89ba-407b-aae5-d0a7a0656ba1', isValid: 'True' obj: 'None'>}
jsonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,793::resourceManager::989::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
jsonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,793::resourceManager::628::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.595de2cf-89ba-407b-aae5-d0a7a0656ba1'
jsonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,793::resourceManager::647::Storage.ResourceManager::(releaseResource) Released resource 'Storage.595de2cf-89ba-407b-aae5-d0a7a0656ba1' (0 active users)
jsonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,793::resourceManager::653::Storage.ResourceManager::(releaseResource) Resource 'Storage.595de2cf-89ba-407b-aae5-d0a7a0656ba1' is free, finding out if anyone is waiting for it.
jsonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,793::resourceManager::661::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.595de2cf-89ba-407b-aae5-d0a7a0656ba1', Clearing records.
jsonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,793::task::995::Storage.TaskManager.Task::(_decref) Task=`234cbec4-a422-47c1-a50a-243526aeddc7`::ref 0 aborting False
jsonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,794::__init__::555::jsonrpc.JsonRpcServer::(_handle_request) Return 'StorageDomain.getStats' in bridge with {'mdasize': '134217728', 'mdathreshold': True, 'mdavalid': True, 'diskfree': '498484641792', 'disktotal': '557943095296', 'mdafree': '67102208'}
jsonrpc.Executor/6::INFO::2017-05-11 17:41:33,794::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call StorageDomain.getStats succeeded in 1.05 seconds
JsonRpc (StompReactor)::ERROR::2017-05-11 17:41:33,816::betterAsyncore::113::vds.dispatcher::(recv) SSL error during reading data: unexpected eof
Thread-112::DEBUG::2017-05-11 17:41:33,846::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ''; <rc> = 0
Thread-12::DEBUG::2017-05-11 17:41:34,913::check::296::storage.check::(_start_process) START check '/dev/595de2cf-89ba-407b-aae5-d0a7a0656ba1/metadata' cmd=['/usr/bin/taskset', '--cpu-list', '0-15', '/usr/bin/dd', 'if=/dev/595de2cf-89ba-407b-aae5-d0a7a0656ba1/metadata', 'of=/dev/null', 'bs=4096', 'count=1', 'iflag=direct'] delay=0.00
Thread-12::DEBUG::2017-05-11 17:41:34,954::asyncevent::564::storage.asyncevent::(reap) Process <cpopen.CPopen object at 0x1b15b90> terminated (count=1)
Thread-12::DEBUG::2017-05-11 17:41:34,954::check::327::storage.check::(_check_completed) FINISH check '/dev/595de2cf-89ba-407b-aae5-d0a7a0656ba1/metadata' rc=0 err=bytearray(b'1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000537649 s, 7.6 MB/s\n') elapsed=0.05
Reactor thread::INFO::2017-05-11 17:41:35,823::protocoldetector::76::ProtocolDetector.AcceptorImpl::(handle_accept) Accepted connection from ::1:49348
Reactor thread::DEBUG::2017-05-11 17:41:35,827::protocoldetector::92::ProtocolDetector.Detector::(__init__) Using required_size=11
Reactor thread::INFO::2017-05-11 17:41:35,828::protocoldetector::128::ProtocolDetector.Detector::(handle_read) Detected protocol stomp from ::1:49348
Reactor thread::INFO::2017-05-11 17:41:35,828::stompreactor::101::Broker.StompAdapter::(_cmd_connect) Processing CONNECT request
Reactor thread::DEBUG::2017-05-11 17:41:35,829::stompreactor::492::protocoldetector.StompDetector::(handle_socket) Stomp detected from ('::1', 49348)
JsonRpc (StompReactor)::INFO::2017-05-11 17:41:35,829::stompreactor::128::Broker.StompAdapter::(_cmd_subscribe) Subscribe command received
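On the vdsm side, the storage monitor keeps failing because domain 5b978dda-d1ef-46fe-9996-20aee42cf303 cannot be produced. If that UUID is one of the block (iSCSI) domains, vdsm backs it with an LVM volume group named after the domain UUID, so a quick check on the host is whether that VG is visible at all. A minimal sketch wrapping vgs (UUID copied from the traceback above; for an NFS domain this check does not apply):

import subprocess

# Storage domain UUID from the vdsm traceback above.
SD_UUID = "5b978dda-d1ef-46fe-9996-20aee42cf303"

# Block storage domains are backed by an LVM volume group named after
# the domain UUID; a non-zero exit from 'vgs' means the VG is not
# visible on this host, which is consistent with StorageDomainDoesNotExist.
rc = subprocess.call(["vgs", "--noheadings", SD_UUID])
print("VG visible" if rc == 0 else "VG not visible on this host (rc=%d)" % rc)

If the VG is missing, the next things to look at are the iSCSI sessions on the host and the multipath devices backing the domain.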
ync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916', vds=3D'Host[=
rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' execution failed: org.=
ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection failed<br>2=
017-05-11 17:28:56,379 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.Ho=
stMonitoring] (DefaultQuartzScheduler10) [] Failure to refresh host 'rhvser=
v-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException=
: Connection failed<br>2017-05-11 17:28:56,402 ERROR [org.ovirt.engine.core=
.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (DefaultQuartzScheduler5) [=
48bc69cd] Command 'GetCapabilitiesVDSCommand(HostName =3D rhvserv-04, VdsId=
AndVdsVDSCommandParametersBase:{runAsync=3D'true', hostId=3D'0d0cd690-b64a-=
42bb-a167-43fcedd634e4', vds=3D'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43f=
cedd634e4]'})' execution failed: org.ovirt.vdsm.jsonrpc.client.ClientConnec=
tionException: Connection failed<br>2017-05-11 17:28:56,403 ERROR [org.ovir=
t.engine.core.vdsbroker.monitoring.HostMonitoring] (DefaultQuartzScheduler5=
) [48bc69cd] Failure to refresh host 'rhvserv-04' runtime info: org.ovirt.v=
dsm.jsonrpc.client.ClientConnectionException: Connection failed<br>2017-05-=
11 17:28:56,488 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmSt=
atsVDSCommand] (DefaultQuartzScheduler6) [] Command 'GetAllVmStatsVDSComman=
d(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParametersBase:{runAsync=3D=
'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4', vds=3D'Host[rhvser=
v-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' execution failed: VDSGeneric=
Exception: VDSNetworkException: Vds timeout occured<br>2017-05-11 17:28:56,=
488 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefre=
sher] (DefaultQuartzScheduler6) [] Failed to fetch vms info for host 'rhvse=
rv-04' - skipping VMs monitoring.<br>2017-05-11 17:28:59,382 INFO  [or=
g.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] =
Connecting to rhvserv-03.mydomain.com/192.168.93.213<br>2017-05-11 17:28:59=
,384 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCom=
mand] (DefaultQuartzScheduler10) [] Command 'GetCapabilitiesVDSCommand(Host=
Name =3D rhvserv-03, VdsIdAndVdsVDSCommandParametersBase:{runAsync=3D'true'=
, hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916', vds=3D'Host[rhvserv-03,4=
036f027-8e90-49c0-8ca5-3ddb8d586916]'})' execution failed: org.ovirt.vdsm.j=
sonrpc.client.ClientConnectionException: Connection failed<br>2017-05-11 17=
:28:59,384 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring=
] (DefaultQuartzScheduler10) [] Failure to refresh host 'rhvserv-03' runtim=
e info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection=
 failed<br>2017-05-11 17:28:59,404 INFO  [org.ovirt.vdsm.jsonrpc.clien=
t.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.m=
ydomain.com/192.168.93.214<br>2017-05-11 17:28:59,408 ERROR [org.ovirt.engi=
ne.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (DefaultQuartzSchedu=
ler6) [] Command 'GetCapabilitiesVDSCommand(HostName =3D rhvserv-04, VdsIdA=
ndVdsVDSCommandParametersBase:{runAsync=3D'true', hostId=3D'0d0cd690-b64a-4=
2bb-a167-43fcedd634e4', vds=3D'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fc=
edd634e4]'})' execution failed: org.ovirt.vdsm.jsonrpc.client.ClientConnect=
ionException: Connection failed<br>2017-05-11 17:28:59,408 ERROR [org.ovirt=
.engine.core.vdsbroker.monitoring.HostMonitoring] (DefaultQuartzScheduler6)=
 [] Failure to refresh host 'rhvserv-04' runtime info: org.ovirt.vdsm.jsonr=
pc.client.ClientConnectionException: Connection failed<br>2017-05-11 17:29:=
02,386 INFO  [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (S=
SL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.168.93.213<b=
r>2017-05-11 17:29:02,409 INFO  [org.ovirt.vdsm.jsonrpc.client.reactor=
s.ReactorClient] (SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.c=
om/192.168.93.214<br>2017-05-11 17:29:02,490 ERROR [org.ovirt.engine.core.v=
dsbroker.vdsbroker.GetAllVmStatsVDSCommand] (DefaultQuartzScheduler8) [56fb=
4c82] Command 'GetAllVmStatsVDSCommand(HostName =3D rhvserv-03, VdsIdAndVds=
VDSCommandParametersBase:{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8=
ca5-3ddb8d586916', vds=3D'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586=
916]'})' execution failed: VDSGenericException: VDSNetworkException: Vds ti=
meout occured<br>2017-05-11 17:29:02,490 INFO  [org.ovirt.engine.core.=
vdsbroker.monitoring.PollVmStatsRefresher] (DefaultQuartzScheduler8) [56fb4=
c82] Failed to fetch vms info for host 'rhvserv-03' - skipping VMs monitori=
ng.<br>2017-05-11 17:29:05,393 ERROR [org.ovirt.engine.core.vdsbroker.vdsbr=
oker.GetCapabilitiesVDSCommand] (DefaultQuartzScheduler4) [] Command 'GetCa=
pabilitiesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParamete=
rsBase:{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916',=
 vds=3D'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' execution=
 failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connectio=
n failed<br>2017-05-11 17:29:05,393 ERROR [org.ovirt.engine.core.vdsbroker.=
monitoring.HostMonitoring] (DefaultQuartzScheduler4) [] Failure to refresh =
host 'rhvserv-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnect=
ionException: Connection failed<br>2017-05-11 17:29:05,415 ERROR [org.ovirt=
.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (DefaultQuartzS=
cheduler10) [] Command 'GetCapabilitiesVDSCommand(HostName =3D rhvserv-04, =
VdsIdAndVdsVDSCommandParametersBase:{runAsync=3D'true', hostId=3D'0d0cd690-=
b64a-42bb-a167-43fcedd634e4', vds=3D'Host[rhvserv-04,0d0cd690-b64a-42bb-a16=
7-43fcedd634e4]'})' execution failed: org.ovirt.vdsm.jsonrpc.client.ClientC=
onnectionException: Connection failed<br>2017-05-11 17:29:05,415 ERROR [org=
.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (DefaultQuartzSched=
uler10) [] Failure to refresh host 'rhvserv-04' runtime info: org.ovirt.vds=
m.jsonrpc.client.ClientConnectionException: Connection failed<br>2017-05-11=
 17:29:08,395 INFO  [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorCli=
ent] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.com/192.168.9=
3.213<br>2017-05-11 17:29:08,398 ERROR [org.ovirt.engine.core.vdsbroker.vds=
broker.GetCapabilitiesVDSCommand] (DefaultQuartzScheduler8) [56fb4c82] Comm=
and 'GetCapabilitiesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSComma=
ndParametersBase:{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb=
8d586916', vds=3D'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})'=
 execution failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException:=
 Connection failed<br>2017-05-11 17:29:08,398 ERROR [org.ovirt.engine.core.=
vdsbroker.monitoring.HostMonitoring] (DefaultQuartzScheduler8) [56fb4c82] F=
ailure to refresh host 'rhvserv-03' runtime info: org.ovirt.vdsm.jsonrpc.cl=
ient.ClientConnectionException: Connection failed<br>2017-05-11 17:29:08,41=
7 INFO  [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL St=
omp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.168.93.214<br>201=
7-05-11 17:29:08,420 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCa=
pabilitiesVDSCommand] (DefaultQuartzScheduler2) [590af365] Command 'GetCapa=
bilitiesVDSCommand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParameters=
Base:{runAsync=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4', v=
ds=3D'Host[rhvserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' execution f=
ailed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection =
failed<br>2017-05-11 17:29:08,420 ERROR [org.ovirt.engine.core.vdsbroker.mo=
nitoring.HostMonitoring] (DefaultQuartzScheduler2) [590af365] Failure to re=
fresh host 'rhvserv-04' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientC=
onnectionException: Connection failed<br>2017-05-11 17:29:11,402 INFO =
 [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor)=
 [] Connecting to rhvserv-03.mydomain.com/192.168.93.213<br>2017-05-11 17:2=
9:11,423 INFO  [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] =
(SSL Stomp Reactor) [] Connecting to rhvserv-04.mydomain.com/192.168.93.214=
<br>2017-05-11 17:29:14,409 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroke=
r.GetCapabilitiesVDSCommand] (DefaultQuartzScheduler4) [] Command 'GetCapab=
ilitiesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCommandParametersB=
ase:{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3ddb8d586916', vd=
s=3D'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'})' execution fa=
iled: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection f=
ailed<br>2017-05-11 17:29:14,409 ERROR [org.ovirt.engine.core.vdsbroker.mon=
itoring.HostMonitoring] (DefaultQuartzScheduler4) [] Failure to refresh hos=
t 'rhvserv-03' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnection=
Exception: Connection failed<br>2017-05-11 17:29:14,430 ERROR [org.ovirt.en=
gine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (DefaultQuartzSche=
duler5) [48bc69cd] Command 'GetCapabilitiesVDSCommand(HostName =3D rhvserv-=
04, VdsIdAndVdsVDSCommandParametersBase:{runAsync=3D'true', hostId=3D'0d0cd=
690-b64a-42bb-a167-43fcedd634e4', vds=3D'Host[rhvserv-04,0d0cd690-b64a-42bb=
-a167-43fcedd634e4]'})' execution failed: org.ovirt.vdsm.jsonrpc.client.Cli=
entConnectionException: Connection failed<br>2017-05-11 17:29:14,430 ERROR =
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (DefaultQuartzS=
cheduler5) [48bc69cd] Failure to refresh host 'rhvserv-04' runtime info: or=
g.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection failed<br=
>2017-05-11 17:29:17,411 INFO  [org.ovirt.vdsm.jsonrpc.client.reactors=
.ReactorClient] (SSL Stomp Reactor) [] Connecting to rhvserv-03.mydomain.co=
m/192.168.93.213<br>2017-05-11 17:29:17,414 ERROR [org.ovirt.engine.core.vd=
sbroker.vdsbroker.GetCapabilitiesVDSCommand] (DefaultQuartzScheduler6) [] C=
ommand 'GetCapabilitiesVDSCommand(HostName =3D rhvserv-03, VdsIdAndVdsVDSCo=
mmandParametersBase:{runAsync=3D'true', hostId=3D'4036f027-8e90-49c0-8ca5-3=
ddb8d586916', vds=3D'Host[rhvserv-03,4036f027-8e90-49c0-8ca5-3ddb8d586916]'=
})' execution failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionExcepti=
on: Connection failed<br>2017-05-11 17:29:17,414 ERROR [org.ovirt.engine.co=
re.vdsbroker.monitoring.HostMonitoring] (DefaultQuartzScheduler6) [] Failur=
e to refresh host 'rhvserv-03' runtime info: org.ovirt.vdsm.jsonrpc.client.=
ClientConnectionException: Connection failed<br>2017-05-11 17:29:17,432 INF=
O  [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp R=
eactor) [] Connecting to rhvserv-04.mydomain.com/192.168.93.214<br>2017-05-=
11 17:29:17,436 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabil=
itiesVDSCommand] (DefaultQuartzScheduler4) [] Command 'GetCapabilitiesVDSCo=
mmand(HostName =3D rhvserv-04, VdsIdAndVdsVDSCommandParametersBase:{runAsyn=
c=3D'true', hostId=3D'0d0cd690-b64a-42bb-a167-43fcedd634e4', vds=3D'Host[rh=
vserv-04,0d0cd690-b64a-42bb-a167-43fcedd634e4]'})' execution failed: org.ov=
irt.vdsm.jsonrpc.client.ClientConnectionException: Connection failed<br>201=
7-05-11 17:29:17,436 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.Host=
Monitoring] (DefaultQuartzScheduler4) [] Failure to refresh host 'rhvserv-0=
4' runtime info: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: C=
onnection failed<br>2017-05-11 17:29:17,491 INFO  [org.ovirt.vdsm.json=
rpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to rhv=
serv-03.mydomain.com/192.168.93.213</div><div><br></div><div><br></div><div=
>vdsmd logs<br></div><div><br></div><div><br></div><div>Thread-113::ERROR::=
2017-05-11 17:41:33,734::sdc::146::Storage.StorageDomainCache::(_findDomain=
) domain 5b978dda-d1ef-46fe-9996-20aee42cf303 not found<br>Traceback (most =
recent call last):<br>  File "/usr/share/vdsm/storage/sdc.py", line 14=
4, in _findDomain<br>    dom =3D findMethod(sdUUID)<br> =
; File "/usr/share/vdsm/storage/sdc.py", line 174, in _findUnfetchedDomain<=
br>    raise se.StorageDomainDoesNotExist(sdUUID)<br>Storage=
DomainDoesNotExist: Storage domain does not exist: (u'5b978dda-d1ef-46fe-99=
96-20aee42cf303',)<br>Thread-113::ERROR::2017-05-11 17:41:33,735::monitor::=
328::Storage.Monitor::(_setupLoop) Setting up monitor for 5b978dda-d1ef-46f=
e-9996-20aee42cf303 failed<br>Traceback (most recent call last):<br>  =
File "/usr/share/vdsm/storage/monitor.py", line 325, in _setupLoop<br> =
;   self._setupMonitor()<br>  File "/usr/share/vdsm/storage/=
monitor.py", line 348, in _setupMonitor<br>    self._produce=
Domain()<br>  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", l=
ine 405, in wrapper<br>    value =3D meth(self, *a, **kw)<br=
>  File "/usr/share/vdsm/storage/monitor.py", line 366, in _produceDom=
ain<br>    self.domain =3D sdCache.produce(self.sdUUID)<br>&=
nbsp; File "/usr/share/vdsm/storage/sdc.py", line 101, in produce<br> =
   domain.getRealDomain()<br>  File "/usr/share/vdsm/storage=
/sdc.py", line 53, in getRealDomain<br>    return self._cach=
e._realProduce(self._sdUUID)<br>  File "/usr/share/vdsm/storage/sdc.py=
", line 125, in _realProduce<br>    domain =3D self._findDom=
ain(sdUUID)<br>  File "/usr/share/vdsm/storage/sdc.py", line 144, in _=
findDomain<br>    dom =3D findMethod(sdUUID)<br>  File =
"/usr/share/vdsm/storage/sdc.py", line 174, in _findUnfetchedDomain<br>&nbs=
p;   raise se.StorageDomainDoesNotExist(sdUUID)<br>StorageDomainD=
oesNotExist: Storage domain does not exist: (u'5b978dda-d1ef-46fe-9996-20ae=
e42cf303',)<br>Thread-112::DEBUG::2017-05-11 17:41:33,783::lvm::288::Storag=
e.Misc.excCmd::(cmd) SUCCESS: <err> =3D ''; <rc> =3D 0<br>Threa=
d-112::INFO::2017-05-11 17:41:33,783::sd::604::Storage.StorageDomain::(_reg=
isterResourceNamespaces) Resource namespace edd229cb-b72f-4988-8c10-d83c84e=
f4a8a_imageNS already registered<br>Thread-112::INFO::2017-05-11 17:41:33,7=
83::sd::612::Storage.StorageDomain::(_registerResourceNamespaces) Resource =
namespace edd229cb-b72f-4988-8c10-d83c84ef4a8a_volumeNS already registered<=
br>Thread-112::INFO::2017-05-11 17:41:33,783::blockSD::846::Storage.Storage=
Domain::(_registerResourceNamespaces) Resource namespace edd229cb-b72f-4988=
-8c10-d83c84ef4a8a_lvmActivationNS already registered<br>Thread-112::DEBUG:=
:2017-05-11 17:41:33,784::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/tas=
kset --cpu-list 0-15 /usr/bin/sudo -n /usr/sbin/lvm vgck --config ' devices=
 { preferred_names =3D ["^/dev/mapper/"] ignore_suspended_devices=3D1 write=
_cache_state=3D0 disable_after_error_count=3D3 filter =3D [ '\''a|/dev/mapp=
er/36782bcb00073e33200002d3858fa1a81|'\'', '\''r|.*|'\'' ] }  global {=
  locking_type=3D1  prioritise_write_locks=3D1  wait_for_loc=
ks=3D1  use_lvmetad=3D0 }  backup {  retain_min =3D 50 =
 retain_days =3D 0 } ' edd229cb-b72f-4988-8c10-d83c84ef4a8a (cwd None)<br>j=
sonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,792::lvm::288::Storage.Misc.e=
xcCmd::(cmd) SUCCESS: <err> =3D ''; <rc> =3D 0<br>jsonrpc.Execu=
tor/6::INFO::2017-05-11 17:41:33,792::logUtils::52::dispatcher::(wrapper) R=
un and protect: getStorageDomainStats, Return response: {'stats': {'mdasize=
': '134217728', 'mdathreshold': True, 'mdavalid': True, 'diskfree': '498484=
641792', 'disktotal': '557943095296', 'mdafree': '67102208'}}<br>jsonrpc.Ex=
ecutor/6::DEBUG::2017-05-11 17:41:33,793::task::1193::Storage.TaskManager.T=
ask::(prepare) Task=3D`234cbec4-a422-47c1-a50a-243526aeddc7`::finished: {'s=
tats': {'mdasize': '134217728', 'mdathreshold': True, 'mdavalid': True, 'di=
skfree': '498484641792', 'disktotal': '557943095296', 'mdafree': '67102208'=
}}<br>jsonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,793::task::597::Storag=
e.TaskManager.Task::(_updateState) Task=3D`234cbec4-a422-47c1-a50a-243526ae=
ddc7`::moving from state preparing -> state finished<br>jsonrpc.Executor=
/6::DEBUG::2017-05-11 17:41:33,793::resourceManager::952::Storage.ResourceM=
anager.Owner::(releaseAll) Owner.releaseAll requests {} resources {u'Storag=
e.595de2cf-89ba-407b-aae5-d0a7a0656ba1': < ResourceRef 'Storage.595de2cf=
-89ba-407b-aae5-d0a7a0656ba1', isValid: 'True' obj: 'None'>}<br>jsonrpc.=
Executor/6::DEBUG::2017-05-11 17:41:33,793::resourceManager::989::Storage.R=
esourceManager.Owner::(cancelAll) Owner.cancelAll requests {}<br>jsonrpc.Ex=
ecutor/6::DEBUG::2017-05-11 17:41:33,793::resourceManager::628::Storage.Res=
ourceManager::(releaseResource) Trying to release resource 'Storage.595de2c=
f-89ba-407b-aae5-d0a7a0656ba1'<br>jsonrpc.Executor/6::DEBUG::2017-05-11 17:=
41:33,793::resourceManager::647::Storage.ResourceManager::(releaseResource)=
 Released resource 'Storage.595de2cf-89ba-407b-aae5-d0a7a0656ba1' (0 active=
 users)<br>jsonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,793::resourceMana=
ger::653::Storage.ResourceManager::(releaseResource) Resource 'Storage.595d=
e2cf-89ba-407b-aae5-d0a7a0656ba1' is free, finding out if anyone is waiting=
 for it.<br>jsonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,793::resourceMan=
ager::661::Storage.ResourceManager::(releaseResource) No one is waiting for=
 resource 'Storage.595de2cf-89ba-407b-aae5-d0a7a0656ba1', Clearing records.=
<br>jsonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,793::task::995::Storage.=
TaskManager.Task::(_decref) Task=3D`234cbec4-a422-47c1-a50a-243526aeddc7`::=
ref 0 aborting False<br>jsonrpc.Executor/6::DEBUG::2017-05-11 17:41:33,794:=
:__init__::555::jsonrpc.JsonRpcServer::(_handle_request) Return 'StorageDom=
ain.getStats' in bridge with {'mdasize': '134217728', 'mdathreshold': True,=
 'mdavalid': True, 'diskfree': '498484641792', 'disktotal': '557943095296',=
 'mdafree': '67102208'}<br>jsonrpc.Executor/6::INFO::2017-05-11 17:41:33,79=
4::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call StorageDo=
main.getStats succeeded in 1.05 seconds<br>JsonRpc (StompReactor)::ERROR::2=
017-05-11 17:41:33,816::betterAsyncore::113::vds.dispatcher::(recv) SSL err=
or during reading data: unexpected eof<br>Thread-112::DEBUG::2017-05-11 17:=
41:33,846::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> =3D ''=
; <rc> =3D 0<br>Thread-12::DEBUG::2017-05-11 17:41:34,913::check::296=
::storage.check::(_start_process) START check '/dev/595de2cf-89ba-407b-aae5=
-d0a7a0656ba1/metadata' cmd=3D['/usr/bin/taskset', '--cpu-list', '0-15', '/=
usr/bin/dd', 'if=3D/dev/595de2cf-89ba-407b-aae5-d0a7a0656ba1/metadata', 'of=
=3D/dev/null', 'bs=3D4096', 'count=3D1', 'iflag=3Ddirect'] delay=3D0.00<br>=
Thread-12::DEBUG::2017-05-11 17:41:34,954::asyncevent::564::storage.asyncev=
ent::(reap) Process <cpopen.CPopen object at 0x1b15b90> terminated (c=
ount=3D1)<br>Thread-12::DEBUG::2017-05-11 17:41:34,954::check::327::storage=
.check::(_check_completed) FINISH check '/dev/595de2cf-89ba-407b-aae5-d0a7a=
0656ba1/metadata' rc=3D0 err=3Dbytearray(b'1+0 records in\n1+0 records out\=
n4096 bytes (4.1 kB) copied, 0.000537649 s, 7.6 MB/s\n') elapsed=3D0.05<br>=
Reactor thread::INFO::2017-05-11 17:41:35,823::protocoldetector::76::Protoc=
olDetector.AcceptorImpl::(handle_accept) Accepted connection from ::1:49348=
<br>Reactor thread::DEBUG::2017-05-11 17:41:35,827::protocoldetector::92::P=
rotocolDetector.Detector::(__init__) Using required_size=3D11<br>Reactor th=
read::INFO::2017-05-11 17:41:35,828::protocoldetector::128::ProtocolDetecto=
r.Detector::(handle_read) Detected protocol stomp from ::1:49348<br>Reactor=
 thread::INFO::2017-05-11 17:41:35,828::stompreactor::101::Broker.StompAdap=
ter::(_cmd_connect) Processing CONNECT request<br>Reactor thread::DEBUG::20=
17-05-11 17:41:35,829::stompreactor::492::protocoldetector.StompDetector::(=
handle_socket) Stomp detected from ('::1', 49348)<br>JsonRpc (StompReactor)=
::INFO::2017-05-11 17:41:35,829::stompreactor::128::Broker.StompAdapter::(_=
cmd_subscribe) Subscribe command received</div></div></body></html>
                    
                  
                  
                          
                            
                            3
                            
                          
                          
                            
                            2
                            
                          
                          
                            
    
                          
                        
                    
                    
                        Hi,
I am trying to do an unattended install of the hosted engine through a
script. All the required parameters are read from the answer file, and
that works fine. Is there a way to also provide the passwords for the oVirt
engine Linux box and the admin console through the answer file?
Thanks and Regards,
Ram
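For reference, the ovirt-hosted-engine-setup answer file is a plain
key=value file, and password keys can be stored in it like any other entry.
A sketch, assuming the appliance-based flow and the
OVEHOSTED_ENGINE/adminPassword and OVEHOSTED_VM/cloudinitRootPwd keys
(check the answer file written by an interactive run for the exact key
names your version uses):

[environment:default]
OVEHOSTED_ENGINE/adminPassword=str:MyAdminPortalPassword
OVEHOSTED_VM/cloudinitRootPwd=str:MyEngineVmRootPassword

The file is then passed to the deploy with:

hosted-engine --deploy --config-append=/root/answers.conf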
                    
                  
                  
                          
                            
                            3
                            
                          
                          
                            
                            3
                            
                          
                          
                            
    
                          
                        
                    
                    
                        # sudo yum install -y ovirt-engine
The ovirt-engine package is not found.
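A likely cause is that the oVirt release repository is not configured on
the machine, since ovirt-engine is not in the stock CentOS repositories.
Assuming the 4.1 release package, something like the following should make
the package resolvable:

sudo yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
sudo yum install -y ovirt-engine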
                    
                  
                  
                          
                            
                            3
                            
                          
                          
                            
                            2
                            
                          
                          
                            
    
                          
                        
                    
                    
                        sudo yum install -y ovirt-engine
It does not want to work.
                    
                  
                  
                          
                            
                            3
                            
                          
                          
                            
                            2
                            
                          
                          
                            
    
                          
                        
                    
                    
                        My environment:
A for Engine (also the NFS server)
B for node1
C for node2
oVirt 3.6.7

Everything is ready (including IPMI). But:

1. Hot migration between B and C: it works, but after a migration B or C
would change its hostname, which caused the next migration to fail. (I
don't know whether the firewall has an effect.)

2. If I chose to put B or C into maintenance, it worked well.

3. If I try to shut down B, the VM cannot start up on C automatically, and
after B is started by power management, I found it cannot mount the data
centre.

What is wrong?

Best regards

Get Outlook for Android <https://aka.ms/ghei36>
                    
                  
                  
                          
                            
                            1
                            
                          
                          
                            
                            0
                            
                          
                          
                            
    
                          
                        
                    
                    
                        I thought I would try something different, so I removed the key group and
value virt when creating the volumes. I was able to create the volumes. I
stopped the data volume. I then tried to set the volume group by issuing
"gluster volume set data group virt".
I still get "unable to open file '/var/lib/glusterd/groups/virt'. Error: No such file
or directory."
I don't know if I should continue with the setup until I can set this value.
Please let me know if I can provide some logs to identify the issue.
Thanks for your help,
Joel
On May 10, 2017 6:57 AM, "Joel Diaz" <mrjoeldiaz(a)gmail.com> wrote:
Thanks for the reply.
The file is empty. I created the file by issuing a "touch
/var/lib/glusterd/groups/virt"
The first time I attempted to set the group volume, the error was that the
file was missing. I read a bug report that advised to remove and reinstall
the gluster package in order to properly recreate the file. Since that did
not work, I created it manually.
Thank you,
Joel
On Wed, May 10, 2017 at 2:12 AM, knarra <knarra(a)redhat.com> wrote:
> On 05/10/2017 06:37 AM, Joel Diaz wrote:
>
> Hello ovirt users,
>
> First off all, thanks for your work. I've been using the software for a
> few months and the experience has been great.
>
> I'm having a hard time trying to set the group on a glusterfs volume
>
> PLAY [master] ************************************************************
> ******
>
> TASK [Sets options for volume] ******************************
> *******************
> failed: [192.168.170.141] (item={u'key': u'group', u'value': u'virt'}) =>
> {"failed": true, "item": {"key": "group", "value": "virt"}, "msg":
> "'/var/lib/glusterd/groups/virt' file format not valid.\n"}
>
> From this error it looks like the virt file format is not valid? Can you
> please paste the contents of this file?
>
> changed: [192.168.170.141] => (item={u'key': u'storage.owner-uid',
> u'value': u'36'})
> changed: [192.168.170.141] => (item={u'key': u'storage.owner-gid',
> u'value': u'36'})
> changed: [192.168.170.141] => (item={u'key': u'network.ping-timeout',
> u'value': u'30'})
> changed: [192.168.170.141] => (item={u'key': u'performance.strict-o-direct',
> u'value': u'on'})
> changed: [192.168.170.141] => (item={u'key': u'network.remote-dio',
> u'value': u'off'})
> changed: [192.168.170.141] => (item={u'key': u'cluster.granular-entry-heal',
> u'value': u'enable'})
>         to retry, use: --limit @/tmp/tmpdTWQ8B/gluster-volume-set.retry
>
> PLAY RECAP ************************************************************
> *********
> 192.168.170.141            : ok=0    changed=0    unreachable=0    failed=1
>
> I've tried removing glusterfs, wiping the glusterfs configuration,
> and reinstalling the service.
>
> Any help would be appreciated.
>
> Thank you,
>
> Joel
>
>
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
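For anyone hitting the same error: /var/lib/glusterd/groups/virt is
expected to be a plain text list of option=value pairs, one per line, which
"gluster volume set <volname> group virt" then applies in one shot, so an
empty file created with touch would plausibly trigger the "file format not
valid" message. A sketch of such a file, built only from the options this
thread is already setting individually (the virt file shipped with your
glusterfs packages may contain more tunables):

storage.owner-uid=36
storage.owner-gid=36
network.ping-timeout=30
performance.strict-o-direct=on
network.remote-dio=off
cluster.granular-entry-heal=enable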
                    
                  
                  
                          
                            
                            2
                            
                          
                          
                            
                            3
                            
                          
                          
                            
    
                          
                        
                          
                            
                            2
                            
                          
                          
                            
                            1
                            
                          
                          
                            
    
                          
                        
                    
                    
                        Hello,
is ManageIQ able to manage oVirt version 4.1.1?
From which version of ManageIQ has this been included, in that case?
I see this link about problems with the 4.x API and such:
https://github.com/ManageIQ/manageiq/issues/7573
Thanks,
Gianluca
                    
                  
                  
                          
                            
                            3
                            
                          
                          
                            
                            7
                            
                          
                          
                            
    
                          
                        
                    
                        
                            
                                
                            
                             unhappiness with ovirt engine after upgrading file server handling storage domain
                        
                        
by Jason Keltz 11 May '17
                    
                        Hi.
I recently upgraded my oVirt infrastructure to the latest 
4.1.1.8-1.el7.centos, which went smoothly.  Thanks oVirt team! This 
morning, I upgraded my NFS file server which manages the storage 
domain.  I stopped ovirt engine, did a yum update to bring the server 
from its older CentOS 7.2 release to CentOS 7.3, rebooted it, then 
restarted engine.   At that point, engine was unhappy because our 4 
virtualization hosts had a total of 30 VMs all waiting to reconnect to 
storage.  The status of all the VMs went to unknown in engine.  It took 
almost 2 hours before everything was completely normal again.  It seems 
that the hosts were available long before engine updated status.  I'm 
assuming it's better to restart engine when I know that NFS has resumed 
on all the 30 virtualized hosts.  However, it's hard to know when that's 
happened, without trying to connect manually to all the hosts.  Is there 
a way to warn engine that you're about to mess with the storage domain, 
and you don't want it to do anything drastic? Sort of like a 
"maintenance mode" for storage?    I would hate for it to start trying 
to power off hosts via power management or migrate hosts when it just 
needs to wait a bit...
Thanks!
Jason.
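The closest thing to a storage "maintenance mode" is deactivating the data
domain for the data center before the outage and activating it again
afterwards, which can be scripted against the REST API. A sketch, assuming
an engine at engine.example.com, an admin@internal login, and the
data-center and storage-domain UUIDs from your own setup (note the engine
may refuse to deactivate a domain that running VMs still depend on):

curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
  -X POST -d '<action/>' \
  'https://engine.example.com/ovirt-engine/api/datacenters/DC_UUID/storagedomains/SD_UUID/deactivate'

and a POST to the matching .../activate URL once the file server is back.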
                    
                  
                  
                          
                            
                            3
                            
                          
                          
                            
                            2
                            
                          
                          
                            
    
                          
                        
                    
                    
Hi,
We're using oVirt 4.1.1.8-1. I just edited a powered-on VM to rename it,
and a popup was shown stating that a restart is required. Is that new,
or is it a bug? I remember not having to do this in prior versions.
I'm attaching a snapshot.
Thanks.
Regards.
[Attachment: Captura de pantalla de 2017-05-11 09-14-47.png]
                    
                  
                  
                          
                            
                            2
                            
                          
                          
                            
                            1
                            
                          
                          
                            
    
                          
                        
                    
                    
                        Team, is it just me, or are the documentation pages not being updated? Many
are outdated. How can we collaborate?
What's up with http://www.ovirt.org/documentation/admin-guide/ ?
regards,
JP
                    
                  
                  
                          
                            
                            8
                            
                          
                          
                            
                            14
                            
                          
                          
                            
    
                          
                        
                    
                    
                        I am doing some testing with our current oVirt setup and I am seeing some
lag when I attempt to launch or access files from a network share, or even
run Windows updates.
My current setup is a 4 x 1 Gb NIC bond with multiple VLANs attached. Server
usage is currently low. I have also not set up any additional network QoS, and
everything else is left at the defaults.
                    
                  
                  
                          
                            
                            2
                            
                          
                          
                            
                            4
                            
                          
                          
                            
    
                          
                        
                    
                    
                        Hi,
We have a test environment built on oVirt 3.5, and we have now installed a
new platform built on oVirt 4.1 into which we want to import the test VMs.
The question: do we have to upgrade the test environment to be able to
import the VMs?
If yes, what is the minimum version we have to upgrade to?
Regards
                    
                  
                  
                          
                            
                            1
                            
                          
                          
                            
                            0
                            
                          
                          
                            
    
                          
                        
                    
                    
                        We have three gluster shares (data, engine, export), each created from
bricks located on three of our VM hosts. See the output from "gluster
volume info" below:
  Volume Name: data
Type: Replicate
Volume ID: c07fdf43-b838-4e4b-bb26-61dbf406cb57
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: vmhost01-chi.empoweredbenefits.com:/gluster/brick2/data
Brick2: vmhost02-chi.empoweredbenefits.com:/gluster/brick2/data
Brick3: vmhost03-chi.empoweredbenefits.com:/gluster/brick2/data (arbiter)
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: engine
Type: Distributed-Replicate
Volume ID: 25455f13-75ba-4bc6-926a-d06ee7c5859a
Status: Started
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: vmhost01-chi.empoweredbenefits.com:/gluster/brick1/engine
Brick2: vmhost02-chi.empoweredbenefits.com:/gluster/brick1/engine
Brick3: vmhost03-chi.empoweredbenefits.com:/gluster/brick1/engine (arbiter)
Brick4: vmhost04-chi:/mnt/engine
Brick5: vmhost05-chi:/mnt/engine
Brick6: vmhost06-chi:/mnt/engine (arbiter)
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Volume Name: export
Type: Replicate
Volume ID: a4c3a49a-fa83-4a62-9523-989c8e016c35
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: vmhost01-chi.empoweredbenefits.com:/gluster/brick3/export
Brick2: vmhost02-chi.empoweredbenefits.com:/gluster/brick3/export
Brick3: vmhost03-chi.empoweredbenefits.com:/gluster/brick3/export (arbiter)
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: off
cluster.quorum-type: auto
cluster.server-quorum-type: server
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
features.shard-block-size: 512MB
performance.low-prio-threads: 32
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6
network.ping-timeout: 30
user.cifs: off
nfs.disable: on
performance.strict-o-direct: on
Our issue is that we ran out of space on our gluster-engine bricks, which 
caused our Hosted Engine VM to crash. We added additional bricks from 
new VM hosts (see vmhost05 to vmhost06 above), but we are still unable to 
restart our Hosted Engine because space on the first three bricks is depleted. 
My understanding is that I need to extend the bricks that are 100% full 
on our engine partition. Is it best practice to stop the glusterd 
service, or can I use "gluster volume stop engine" to stop only the 
volume I need to extend? Also, if I need to stop glusterd, will the VMs 
hosted on my oVirt cluster be affected by the engine, export, and data 
mount points being offline?
Thanks,
Ryan
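For what it's worth, growing a brick's backing filesystem does not require
stopping glusterd or the volume; bricks of a replicated volume can usually
be extended online, one host at a time. A minimal sketch, assuming the
bricks sit on LVM with XFS (the VG/LV names and the size are illustrative):
lvextend -L +100G /dev/gluster_vg/engine_lv
xfs_growfs /gluster/brick1/engine
gluster volume status engine detail   # confirm the new free space per brick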
                    
                  
                  
                          
                            
                            2
                            
                          
                          
                            
                            1
                            
                          
                          
                            
    
                          
                        
                    
                    
                        On Thu, May 4, 2017 at 1:41 AM Brahim Rifahi <brahim.rifahi9(a)gmail.com>
wrote:
> same problem :/
>
>
I can't help much without logs or outputs.
> 2017-05-03 11:10 GMT+01:00 Roy Golan <rgolan(a)redhat.com>:
>
>>
>> http://www.ovirt.org/documentation/quickstart/quickstart-guide/#prerequisit…
>>
>> On Wed, May 3, 2017 at 1:01 PM Brahim Rifahi <brahim.rifahi9(a)gmail.com>
>> wrote:
>>
>>> sudo yum install -y ovirt-engine
>>> It does not want to work.
>>>
>>
>
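If the package is simply not found, the usual cause is a missing oVirt
repository; the quick-start prerequisites linked above cover this. A minimal
sketch for 4.1 on CentOS 7 (the release-package URL is the one published on
ovirt.org):
sudo yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
sudo yum install -y ovirt-engine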
                    
                  
                  
                          
                            
                            1
                            
                          
                          
                            
                            0
                            
                          
                          
                            
    
                          
                        
                    
                    
Hello,
I can create quotas with the SDK (Python) and make a user a consumer of the
quota in the admin portal, but I don't know how to do it with the SDK or API.
Any help appreciated.
Thanks,
Paul S.
To view the terms under which this email is distributed, please go to:-
http://disclaimer.leedsbeckett.ac.uk/disclaimer/disclaimer.html
                    
                  
                  
                          
                            
                            1
                            
                          
                          
                            
                            0
                            
                          
                          
                            
    
                          
                        
                    
                    
                        Hello ovirt users,
First off all, thanks for your work. I've been using the software for a few
months and the experience has been great.
I'm having a hard time trying to set the group on a glusterfs volume
PLAY [master]
******************************************************************
TASK [Sets options for volume]
*************************************************
failed: [192.168.170.141] (item={u'key': u'group', u'value': u'virt'}) =>
{"failed": true, "item": {"key": "group", "value": "virt"}, "msg":
"'/var/lib/glusterd/groups/virt' file format not valid.\n"}
changed: [192.168.170.141] => (item={u'key': u'storage.owner-uid',
u'value': u'36'})
changed: [192.168.170.141] => (item={u'key': u'storage.owner-gid',
u'value': u'36'})
changed: [192.168.170.141] => (item={u'key': u'network.ping-timeout',
u'value': u'30'})
changed: [192.168.170.141] => (item={u'key':
u'performance.strict-o-direct', u'value': u'on'})
changed: [192.168.170.141] => (item={u'key': u'network.remote-dio',
u'value': u'off'})
changed: [192.168.170.141] => (item={u'key':
u'cluster.granular-entry-heal', u'value': u'enable'})
        to retry, use: --limit @/tmp/tmpdTWQ8B/gluster-volume-set.retry
PLAY RECAP
*********************************************************************
192.168.170.141            : ok=0    changed=0    unreachable=0    failed=1
I've tried removing glusterfs, wiping the glusterfs configuration,
and reinstalling the service.
Any help would be appreciated.
Thank you,
Joel
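For reference, glusterd expects that group file to contain plain key=value
pairs, one option per line; the "file format not valid" error usually means
the file is empty, missing, or damaged. A sketch of what
/var/lib/glusterd/groups/virt typically holds (the exact option set ships
with the glusterfs package and varies by version):
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.stat-prefetch=off
cluster.eager-lock=enable
network.remote-dio=enable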
                    
                  
                  
                          
                            
                            2
                            
                          
                          
                            
                            2
                            
                          
                          
                            
    
                          
                        
                    
                    
Hello, my name is Vincent.
I want to know the location of the disks in oVirt 3.5.
When cloning a VM, the disk is cloned with the same capacity, but after I
moved the disk to an NFS share it tells me that it only weighs 4k.
Regards
Vincent Romero
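A note that may explain the 4k figure: on NFS storage domains, oVirt keeps
disks as sparse files under <domain-uuid>/images/<disk-uuid>/, so tools that
report allocated rather than apparent size can show a tiny number for a
healthy disk. A quick comparison (the mount path below is a placeholder for
the actual NFS export):
ls -lh /rhev/data-center/mnt/<nfs-export>/<domain-uuid>/images/<disk-uuid>/   # apparent size
du -sh /rhev/data-center/mnt/<nfs-export>/<domain-uuid>/images/<disk-uuid>/   # allocated size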
                    
                  
                  
                          
                            
                            2
                            
                          
                          
                            
                            1
                            
                          
                          
                            
    
                          
                        
                    09 May '17
                    
So I successfully upgraded my engine from 4.06 to 4.1.1 with no major issues.
A nice thing I noticed was that my custom CA certificate for https on the
admin and user portals wasn't clobbered by setup.
I did have to restore my custom settings for ISO uploader, log collector,
and websocket proxy:
cp /etc/ovirt-engine/isouploader.conf.d/10-engine-setup.conf.<latest_timestamp> /etc/ovirt-engine/isouploader.conf.d/10-engine-setup.conf
cp /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf.<latest_timestamp> /etc/ovirt-engine/ovirt-websocket-proxy.conf.d/10-setup.conf
cp /etc/ovirt-engine/logcollector.conf.d/10-engine-setup.conf.<latest_timestamp> /etc/ovirt-engine/logcollector.conf.d/10-engine-setup.conf
Now I'm moving on to updating the oVirt node hosts, which are currently at
oVirt Node 4.0.6.1. (I'm assuming I should do that before attempting to
upgrade the cluster and data center compatibility level to 4.1.)
When I right-click on a host and go to Installation / Check for Upgrade, the
results are 'no updates found.' When I log into that host directly, I notice
it's still got the oVirt 4.0 repo, not 4.1. Is there an extra step I'm
missing? The documentation I've found
(http://www.ovirt.org/documentation/upgrade-guide/chap-Updates_between_Minor_Releases/)
doesn't mention this.
**
If I can offer some unsolicited feedback: I feel like this list is populated
with a lot of questions that could be averted with a little care and feeding
of the documentation. It's unfortunate because that makes for a rocky
introduction to oVirt, and it makes it look like a neglected project, which
I know is not the case.
On a related note, I know this has been discussed before but...
The centralized control in Github for the documentation does not really
encourage user contributions. What's wrong with a wiki? If we're really
concerned about bad or malicious edits being posted, keep the official in
git and add a separate wiki that is clearly marked as user-contributed.
**
Thanks,
Daniel
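On the 'no updates found' point: the engine's Check for Upgrade can only
offer what the host's own repositories contain, so a node still carrying the
4.0 release package will report nothing. A sketch of the manual fix, run on
each host (release-package URL as published on ovirt.org; the exact update
path for oVirt Node images may differ):
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
yum update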
                    
                  
                  
                          
                            
                            3
                            
                          
                          
                            
                            19
                            
                          
                          
                            
    
                          
                        
                    
                    
                        Pic of the error
https://i.imgur.com/rhp3thT.png
Engine Log:
https://0bin.net/paste/Pz6QV5hPaGmksgnA#azl9UXo2M+ilLLMg31Wh+IymHHSWNsLlA1R…
Vdsm log
https://0bin.net/paste/1kLWT5btLQVa9el1#aQu+lviLYw-RZVxVJD8dYMI1juALurJI3vj…
Thank you, I appreciate it.
                    
                  
                  
                          
                            
                            1
                            
                          
                          
                            
                            1
                            
                          
                          
                            
    
                          
                        
                    
                    
                        Hi
I was just wondering if anyone is running oVirt using a shared SAS array
with the ability to live-migrate between hosts?
If so, has anyone been able to get hosted engine working with it?
Thanks
*Gary Lloyd*
________________________________________________
I.T. Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063
________________________________________________
                    
                  
                  
                          
                            
                            3
                            
                          
                          
                            
                            2
                            
                          
                          
                            
    
                          
                        
                    
                    
Hello all,
I'm looking for a high-level, approved workflow to completely remove a
datacenter and all its objects.
The datacenter is made up of 1 cluster, 1 server, 1 local storage domain,
and no VMs.
The hosted engine is on a different data center.
Everything is nicely working.
The administration guide
(http://www.ovirt.org/documentation/admin-guide/chap-Data_Centers/) reads:
"An active host is required to remove a data center. Removing a data center
will not remove the associated resources."
Actually, I *want* to remove all associated resources and forget about them.
So... is it correct to:
  1.  put the storage in maintenance,
  2.  then put the datacenter in maintenance,
  3.  finally remove the datacenter?
Will this also clean up the related storage and server? Yes, no? Other
suggested actions?
Thank you
Andrea Ghelardi
Via Ceci, 52
56125 Pisa
Italy
T: +39 050 22037 1
D: +39 050 2203 890
andrea.ghelardi(a)iongroup.com
iongroup.com <https://iongroup.com/>
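A minimal sketch of the same removal through the Python SDK 4: the
data-center removal call exposes a force flag, which removes the data center
object even when its storage cannot be deactivated cleanly (the connection
details and the 'mydc' name below are placeholders; whether the backing
storage is wiped is a separate question):

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
dcs_service = connection.system_service().data_centers_service()
# Look up the data center by name; 'mydc' is a placeholder.
dc = dcs_service.list(search='name=mydc')[0]
# force=True corresponds to DELETE /datacenters/{id} with the force parameter.
dcs_service.data_center_service(dc.id).remove(force=True)
connection.close()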
                    
                  
                  
                          
                            
                            1
                            
                          
                          
                            
                            0
                            
                          
                          
                            
    
                          
                        
                    
                    
                    
                    
                        Hi,
I have recently set up an oVirt 4.1 environment to test VM SSO.
spec:
* host: centos 7.3.1611
* ovirt-engine: commit af393b7d3a494917dbda33a06813e8e8a8c6698a from
branch ovirt-engine-4.1 , self compiled.
* vdsm: vdsm-4.19.10.1-1.el7.centos.x86_64
* windows 2008 r2 with active directory set up (domain name is "ply.local",
test user is "ply(a)ply.local")
* windows 7 vm with guest tools set up,
using ovirt-guest-tools-iso-4.1-3.fc24.noarch
I can add AD to ovirt engine successfully
using ovirt-engine-extension-aaa-ldap-setup tool.[1]
After adding AD domain to windows7 vm, I can login manually using AD user
with no problem.
I can see the logs[2] when I login in to userportal with AD user, and spice
client pop up automatically.
But the spice client just stops at the windows7 login screen. asking for
password.
In the vm, vdagent and vdservice are all running fine. I can provide guest
agent logs if needed.
So, anyone can point me to the right direction?
cheers
[1]: see attachment:
ovirt-engine-extension-aaa-ldap-setup-20170507034924-w5fwc9.log
[2]: see attachment: vdsm-log,ovirt-engine-log
                    
                  
                  
                          
                            
                            2
                            
                          
                          
                            
                            5
                            
                          
                          
                            
    
                          
                        
                    
                    
                        Hello
I have a problem with oVirt Hosted Engine Setup version 4.0.5.5-1.el7.centos.
The setup uses an FCP SAN for data and the engine.
The cluster has worked fine for a while. It has two hosts with VMs running.
I recently extended storage with an additional LUN. This LUN now seems to
be gone from the data domain, and one VM, which I assume has data on that
device, is paused.
Got these errors in events:
Apr 24, 2017 10:26:05 AM
Failed to activate Storage Domain SD (Data Center DC) by admin@internal-authz
Apr 10, 2017 3:38:08 PM
Status of host cl01 was set to Up.
Apr 10, 2017 3:38:03 PM
Host cl01 does not enforce SELinux. Current status: DISABLED
Apr 10, 2017 3:37:58 PM
Host cl01 is initializing. Message: Recovering from crash or Initializing
Apr 10, 2017 3:37:58 PM
VDSM cl01 command failed: Recovering from crash or Initializing
Apr 10, 2017 3:37:46 PM
Failed to Reconstruct Master Domain for Data Center DC.
Apr 10, 2017 3:37:46 PM
Host cl01 is not responding. Host cannot be fenced automatically
because power management for the host is disabled.
Apr 10, 2017 3:37:46 PM
VDSM cl01 command failed: Broken pipe
Apr 10, 2017 3:37:46 PM
VDSM cl01 command failed: Broken pipe
Apr 10, 2017 3:32:45 PM
Invalid status on Data Center DC. Setting Data Center status to Non
Responsive (On host cl01, Error: General Exception).
Apr 10, 2017 3:32:45 PM
VDSM cl01 command failed: [Errno 19] Could not find dm device named `[unknown]`
Apr 7, 2017 1:28:04 PM
VM HostedEngine is down with error. Exit message: resource busy:
Failed to acquire lock: error -243.
Apr 7, 2017 1:28:02 PM
Storage Pool Manager runs on Host cl01 (Address: cl01).
Apr 7, 2017 1:27:59 PM
Invalid status on Data Center DC. Setting status to Non Responsive.
Apr 7, 2017 1:27:53 PM
Host cl02 does not enforce SELinux. Current status: DISABLED
Apr 7, 2017 1:27:52 PM
Host cl01 does not enforce SELinux. Current status: DISABLED
Apr 7, 2017 1:27:49 PM
Affinity Rules Enforcement Manager started.
Apr 7, 2017 1:27:34 PM
ETL Service Started
Apr 7, 2017 1:26:01 PM
ETL Service Stopped
Apr 3, 2017 1:22:54 PM
Shutdown of VM HostedEngine failed.
Apr 3, 2017 1:22:52 PM
Storage Pool Manager runs on Host cl01 (Address: cl01).
Apr 3, 2017 1:22:49 PM
Invalid status on Data Center DC. Setting status to Non Responsive.
Master data domain is inactive.
vdsm.log:
jsonrpc.Executor/5::INFO::2017-04-20
07:01:26,796::lvm::1226::Storage.LVM::(activateLVs) Refreshing lvs:
vg=bd616961-6da7-4eb0-939e-330b0a3fea6e lvs=['ids']
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:26,796::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
--cpu-list 0-39 /usr/bin/sudo -n /usr/sbin/lvm lvchange --config '
devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_d
evices=1 write_cache_state=0 disable_after_error_count=3 filter = [
'\''a|/dev/mapper/360050768018182b6c00000000000099e|[unknown]|'\'',
'\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1
wait_for_locks=1  use_lvmetad=
0 }  backup {  retain_min = 50  retain_days = 0 } ' --refresh
bd616961-6da7-4eb0-939e-330b0a3fea6e/ids (cwd None)
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:26,880::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = "
WARNING: Not using lvmetad because config setting use_lvmetad=0.\n
WARNING: To avoid corruption, rescan devices to make changes
 visible (pvscan --cache).\n  Couldn't find device with uuid
jDB9VW-bNqY-UIKc-XxXp-xnyK-ZTlt-7Cpa1U.\n"; <rc> = 0
jsonrpc.Executor/5::INFO::2017-04-20
07:01:26,881::lvm::1226::Storage.LVM::(activateLVs) Refreshing lvs:
vg=bd616961-6da7-4eb0-939e-330b0a3fea6e lvs=['leases']
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:26,881::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
--cpu-list 0-39 /usr/bin/sudo -n /usr/sbin/lvm lvchange --config '
devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_d
evices=1 write_cache_state=0 disable_after_error_count=3 filter = [
'\''a|/dev/mapper/360050768018182b6c00000000000099e|[unknown]|'\'',
'\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1
wait_for_locks=1  use_lvmetad=
0 }  backup {  retain_min = 50  retain_days = 0 } ' --refresh
bd616961-6da7-4eb0-939e-330b0a3fea6e/leases (cwd None)
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:26,973::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = "
WARNING: Not using lvmetad because config setting use_lvmetad=0.\n
WARNING: To avoid corruption, rescan devices to make changes
 visible (pvscan --cache).\n  Couldn't find device with uuid
jDB9VW-bNqY-UIKc-XxXp-xnyK-ZTlt-7Cpa1U.\n"; <rc> = 0
jsonrpc.Executor/5::INFO::2017-04-20
07:01:26,973::lvm::1226::Storage.LVM::(activateLVs) Refreshing lvs:
vg=bd616961-6da7-4eb0-939e-330b0a3fea6e lvs=['metadata', 'leases',
'ids', 'inbox', 'outbox', 'master']
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:26,974::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset
--cpu-list 0-39 /usr/bin/sudo -n /usr/sbin/lvm lvchange --config '
devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_d
evices=1 write_cache_state=0 disable_after_error_count=3 filter = [
'\''a|/dev/mapper/360050768018182b6c00000000000099e|[unknown]|'\'',
'\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1
wait_for_locks=1  use_lvmetad=
0 }  backup {  retain_min = 50  retain_days = 0 } ' --refresh
bd616961-6da7-4eb0-939e-330b0a3fea6e/metadata
bd616961-6da7-4eb0-939e-330b0a3fea6e/leases
bd616961-6da7-4eb0-939e-330b0a3fea6e/ids
bd616961-6da7-4eb0-939e-330b0a3fea6e/inbox b
d616961-6da7-4eb0-939e-330b0a3fea6e/outbox
bd616961-6da7-4eb0-939e-330b0a3fea6e/master (cwd None)
Reactor thread::INFO::2017-04-20
07:01:27,069::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from ::1:44692
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,070::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = "
WARNING: Not using lvmetad because config setting use_lvmetad=0.\n
WARNING: To avoid corruption, rescan devices to make changes
 visible (pvscan --cache).\n  Couldn't find device with uuid
jDB9VW-bNqY-UIKc-XxXp-xnyK-ZTlt-7Cpa1U.\n"; <rc> = 0
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,070::sp::662::Storage.StoragePool::(_stopWatchingDomainsState)
Stop watching domains state
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,070::resourceManager::628::Storage.ResourceManager::(releaseResource)
Trying to release resource
'Storage.58493e81-01dc-01d8-0390-000000000032'
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,071::resourceManager::647::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.58493e81-01dc-01d8-0390-000000000032' (0
active users)
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,071::resourceManager::653::Storage.ResourceManager::(releaseResource)
Resource 'Storage.58493e81-01dc-01d8-0390-000000000032' is free,
finding out if anyone is waiting for it.
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,071::resourceManager::661::Storage.ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.58493e81-01dc-01d8-0390-000000000032', Clearing records.
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,071::resourceManager::628::Storage.ResourceManager::(releaseResource)
Trying to release resource 'Storage.HsmDomainMonitorLock'
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,071::resourceManager::647::Storage.ResourceManager::(releaseResource)
Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,071::resourceManager::653::Storage.ResourceManager::(releaseResource)
Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone
is waiting for it.
jsonrpc.Executor/5::DEBUG::2017-04-20
07:01:27,071::resourceManager::661::Storage.ResourceManager::(releaseResource)
No one is waiting for resource 'Storage.HsmDomainMonitorLock',
Clearing records.
jsonrpc.Executor/5::ERROR::2017-04-20
07:01:27,072::task::868::Storage.TaskManager.Task::(_setError)
Task=`15122a21-4fb7-45bf-9a9a-4b97f27bc1e1`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 875, in _run
    return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 50, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 988, in connectStoragePool
    spUUID, hostID, msdUUID, masterVersion, domainsMap)
  File "/usr/share/vdsm/storage/hsm.py", line 1053, in _connectStoragePool
    res = pool.connect(hostID, msdUUID, masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 646, in connect
    self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1219, in __rebuild
    self.setMasterDomain(msdUUID, masterVersion)
  File "/usr/share/vdsm/storage/sp.py", line 1427, in setMasterDomain
    domain = sdCache.produce(msdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 101, in produce
    domain.getRealDomain()
  File "/usr/share/vdsm/storage/sdc.py", line 53, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 125, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 144, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/blockSD.py", line 1441, in findDomain
    return BlockStorageDomain(BlockStorageDomain.findDomainPath(sdUUID))
  File "/usr/share/vdsm/storage/blockSD.py", line 814, in __init__
    lvm.checkVGBlockSizes(sdUUID, (self.logBlkSize, self.phyBlkSize))
  File "/usr/share/vdsm/storage/lvm.py", line 1056, in checkVGBlockSizes
    _checkpvsblksize(pvs, vgBlkSize)
 File "/usr/share/vdsm/storage/lvm.py", line 1033, in _checkpvsblksize
    pvBlkSize = _getpvblksize(pv)
  File "/usr/share/vdsm/storage/lvm.py", line 1027, in _getpvblksize
    dev = devicemapper.getDmId(os.path.basename(pv))
  File "/usr/share/vdsm/storage/devicemapper.py", line 40, in getDmId
    deviceMultipathName)
OSError: [Errno 19] Could not find dm device named `[unknown]`
Any input how to diagnose or troubleshoot would be appreciated.
-- 
Best Regards
Jens Oechsler
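Given the repeated "Couldn't find device with uuid jDB9VW-..." warnings, a
reasonable first step is to compare what multipath and LVM each see on the
host; these are standard commands, with the VG name taken from the log above:
multipath -ll
pvs -o pv_name,pv_uuid,vg_name,pv_size
vgs bd616961-6da7-4eb0-939e-330b0a3fea6e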
                    
                  
                  
                          
                            
                            2
                            
                          
                          
                            
                            5
                            
                          
                          
                            
    
                          
                        
                    
                    
Hi there,
we have a big problem with our oVirt 4.1.1 environment.
After an FC storage failure, and an automatic reboot of the host that the
hosted engine runs on, we can't get the engine running again.
The problem seems to be an invalid lockspace. sanlock.log shows:
2017-05-09 12:07:22+0200 35 [4991]: sanlock daemon started 3.4.0 host 
360206bc-58f8-41ab-8aa9-53185222f029.kvm04.serv
2017-05-09 12:08:46+0200 119 [4996]: s1 lockspace 
hosted-engine:1:/var/run/vdsm/storage/920f345e-95d3-4b44-93c7-9d9931299f57/c3a02f51-6d03-4baf-908e-0c240b665714/48cfe112-b7e0-44f7-859c-d6f6c2539831:0
2017-05-09 12:09:08+0200 141 [4991]: s1 host 1 2 119 
360206bc-58f8-41ab-8aa9-53185222f029.kvm04.serv
Messages shows:
May  9 11:38:43 kvm04 kernel: device-mapper: core: qemu-kvm: sending 
ioctl 5326 to DM device without required privilege.
May  9 11:38:44 kvm04 sanlock[4956]: 2017-05-09 11:38:44+0200 375 
[4961]: r2 cmd_acquire 2,9,12952 invalid lockspace found -1 failed 0 
name 920f345e-95d3-4b44-93c7-9d9931299f57
May  9 11:38:44 kvm04 journal: Erlangen einer Sperre fehlgeschlagen: Auf 
dem Gerät ist kein Speicherplatz mehr verfügbar
[in English: "Failed to acquire lock: No space left on device"]
May  9 11:38:44 kvm04 journal: Ende der Datei beim Lesen von Daten: 
Eingabe-/Ausgabefehler
May  9 11:38:44 kvm04 journal: Ende der Datei beim Lesen von Daten: 
Eingabe-/Ausgabefehler
[in English: "End of file while reading data: Input/output error"]
It is possible to mount the HE image on the host manually. A filesystem
check reports that everything is clean, and it is also possible to create a
file in the mounted engine image.
If I remove the lockspace manually, it comes back within about 10 seconds.
Environment: oVirt 4.1.1 latest oVirt Node image + HE appliance.
I also tested hosted-engine --reinitialize-lockspace; it completes without
errors.
Does anybody have any ideas? It would be very, very much appreciated.
Marco
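Two read-only commands that show the current lockspace and HA view from the
host while this is happening (both part of the stock sanlock and
hosted-engine tooling):
sanlock client status
hosted-engine --vm-status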
                    
                  
                  
                          
                            
Here are some pics:
https://i.imgur.com/hfJM6PG.png
The machines show as down although they are actually up:
https://i.imgur.com/NkHisyr.png
Engine Log:
https://drive.google.com/file/d/0B4wHJ6nwLi9BcE5INXRJSVpPSHM/view?usp=shari…
VDSM Log:
https://drive.google.com/file/d/0B4wHJ6nwLi9BNFlvb1llNE5Yb2s/view?usp=shari…
Please help me get this back online. Thank you all.
                    
                  
                  
                          
                            
I'm playing with the Python SDK and getting:
[2017-05-04 17:01:17] 192.168.205.36 "ovirt.XXX" "GET /ovirt-engine/api/storagedomains HTTP/1.1" 292250 404 + 188 "-" "PythonSDK/4.1.3"
And in engine.log:
2017-05-04 17:01:17,727+02 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-21) [] Operation Failed: Entity not found: Storage server connection: id=6860d96f-557e-4d82-a209-401d72bd6e16
But in the documentation from https://ovirt.prod.exalead.com/ovirt-engine/apidoc/#requests, I indeed see:
GET /storagedomains
The oVirt version I use:
<product_info>
  <name>oVirt Engine</name>
  <vendor>ovirt.org</vendor>
  <version>
    <build>1</build>
    <full_version>4.1.1.8-1.el7.centos</full_version>
    <major>4</major>
    <minor>1</minor>
    <revision>0</revision>
  </version>
</product_info>
Is there something obvious I missed?
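For reference, this is the minimal sdk4 listing I would expect to work (a sketch; the URL and credentials are placeholders for your engine):

import ovirtsdk4 as sdk

# Placeholder connection details; point these at your engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,  # or pass ca_file='...' to verify the certificate
)

# Issues GET /ovirt-engine/api/storagedomains, the same call that
# returned the 404 above.
sds_service = connection.system_service().storage_domains_service()
for sd in sds_service.list():
    print('%s: %s' % (sd.name, sd.id))

connection.close()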
                    
                  
                  
                          
                            
Hello,
I am trying to test oVirt 4.1 functionality using vdsm-fake. I managed to
run vdsm-fake with the following docker command:
docker run -d -p 8080:8080 -p 54321:54321 docker.io/rmohr/ovirt-vdsmfake
It is running on a different host than oVirt 4.1, so in the /etc/hosts
file I added the vdsm-fake host's IP address instead of using 127.0.0.1.
I am adding test0, test1, ... It looks good at the beginning, but after a
while the host is set to non-operational with reason
EMULATED_MACHINES_INCOMPATIBLE_WITH_CLUSTER_LEVEL.
ovirt-engine also complains that the available memory of the host is
below the 1024 MB threshold; available memory is reported as 0.
Here are the relevant logs from ovirt-engine and vdsm-fake. Any ideas?
Thank you.
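As far as I can tell, the engine simply intersects the host's emulatedMachines with the list the cluster compatibility level allows; redoing that check with the two lists from the event in the log below (a sketch) shows why the host is marked non-operational:

# Values copied from the EMULATED_MACHINES_INCOMPATIBLE_WITH_CLUSTER_LEVEL
# event in the engine log below.
cluster_supported = {"pc-i440fx-rhel7.3.0", "pc-i440fx-2.6",
                     "pseries-rhel7.3.0"}
host_reported = {
    "pc-0.10", "pc-0.11", "pc-0.12", "pc-0.13", "pc-0.14", "pc-0.15",
    "pc-1.0", "pc-i440fx-2.1", "pseries-rhel7.2.0",
    "pc-i440fx-rhel7.2.0", "rhel6.4.0", "rhel6.5.0", "rhel6.6.0",
    "rhel6.7.0", "rhel6.8.0", "rhel6.9.0", "rhel7.0.0", "rhel7.2.0",
    "rhel7.5.0", "pc", "isapc",
}

# Empty intersection -> EMULATED_MACHINES_INCOMPATIBLE_WITH_CLUSTER_LEVEL.
print(cluster_supported & host_reported)   # set()

So it looks like vdsm-fake would have to advertise at least one of the 7.3.0 machine types, or the cluster compatibility level would have to be lowered, before the host can become operational.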
ENGINE LOG
2017-05-06 14:27:43,643+03 INFO
[org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-15)
[002e2037-0ed9-429e-acb9-6ea363c8f44d] Running command: AddVdsCommand
internal: false. Entities affected :  ID:
00000002-0002-0002-0002-00000000017a Type: ClusterAction group CREATE_HOST
with role type ADMIN
2017-05-06 14:27:43,666+03 INFO
[org.ovirt.engine.core.bll.AddVdsSpmIdCommand] (default task-15) [7790960e]
Before acquiring and wait lock
'EngineLock:{exclusiveLocks='[00000001-0001-0001-0001-000000000311=<REGISTER_VDS,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-05-06 14:27:43,667+03 INFO
[org.ovirt.engine.core.bll.AddVdsSpmIdCommand] (default task-15) [7790960e]
Lock-wait acquired to object
'EngineLock:{exclusiveLocks='[00000001-0001-0001-0001-000000000311=<REGISTER_VDS,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-05-06 14:27:43,672+03 INFO
[org.ovirt.engine.core.bll.AddVdsSpmIdCommand] (default task-15) [7790960e]
Running command: AddVdsSpmIdCommand internal: true. Entities affected :
ID: 88cbdeab-37f4-47f3-a761-1cee75a62bcb Type: VDS
2017-05-06 14:27:43,682+03 INFO
[org.ovirt.engine.core.bll.AddVdsSpmIdCommand] (default task-15) [7790960e]
Lock freed to object
'EngineLock:{exclusiveLocks='[00000001-0001-0001-0001-000000000311=<REGISTER_VDS,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-05-06 14:27:43,685+03 INFO
[org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand] (default task-15)
[7790960e] START, RemoveVdsVDSCommand(HostName = test3,
RemoveVdsVDSCommandParameters:{runAsync='true',
hostId='88cbdeab-37f4-47f3-a761-1cee75a62bcb'}), log id: 7fcf255f
2017-05-06 14:27:43,686+03 INFO
[org.ovirt.engine.core.vdsbroker.RemoveVdsVDSCommand] (default task-15)
[7790960e] FINISH, RemoveVdsVDSCommand, log id: 7fcf255f
2017-05-06 14:27:43,689+03 INFO
[org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] (default task-15)
[7790960e] START, AddVdsVDSCommand(HostName = test3,
AddVdsVDSCommandParameters:{runAsync='true',
hostId='88cbdeab-37f4-47f3-a761-1cee75a62bcb'}), log id: 48a00f2
2017-05-06 14:27:43,689+03 INFO
[org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] (default task-15)
[7790960e] AddVds - entered , starting logic to add VDS
'88cbdeab-37f4-47f3-a761-1cee75a62bcb'
2017-05-06 14:27:43,692+03 INFO
[org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] (default task-15)
[7790960e] AddVds - VDS '88cbdeab-37f4-47f3-a761-1cee75a62bcb' was added,
will try to add it to the resource manager
2017-05-06 14:27:43,692+03 INFO
[org.ovirt.engine.core.vdsbroker.VdsManager] (default task-15) [7790960e]
Entered VdsManager constructor
2017-05-06 14:27:43,696+03 INFO
[org.ovirt.engine.core.vdsbroker.VdsManager] (default task-15) [7790960e]
Initialize vdsBroker 'test3:54321'
2017-05-06 14:27:43,704+03 INFO
[org.ovirt.engine.core.vdsbroker.ResourceManager] (default task-15)
[7790960e] VDS '88cbdeab-37f4-47f3-a761-1cee75a62bcb' was added to the
Resource Manager
2017-05-06 14:27:43,704+03 INFO
[org.ovirt.engine.core.vdsbroker.AddVdsVDSCommand] (default task-15)
[7790960e] FINISH, AddVdsVDSCommand, log id: 48a00f2
2017-05-06 14:27:43,716+03 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-15) [7790960e] EVENT_ID:
VDS_ALERT_FENCE_IS_NOT_CONFIGURED(9,000), Correlation ID: null, Call Stack:
null, Custom Event ID: -1, Message: Failed to verify Power Management
configuration for Host test3.
2017-05-06 14:27:43,755+03 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-15) [7790960e] EVENT_ID: USER_ADD_VDS(42), Correlation ID:
002e2037-0ed9-429e-acb9-6ea363c8f44d, Job ID:
404e989c-1ac6-43de-bc91-1e9548de9d49, Call Stack: null, Custom Event ID:
-1, Message: Host test3 was added by admin@internal-authz.
2017-05-06 14:27:46,748+03 INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (Stomp Reactor)
[b859ba6] Connecting to test3/192.168.1.27
2017-05-06 14:27:56,818+03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler5) [507a0cb7] START,
GetHardwareInfoVDSCommand(HostName = test3,
VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
hostId='88cbdeab-37f4-47f3-a761-1cee75a62bcb',
vds='Host[test3,88cbdeab-37f4-47f3-a761-1cee75a62bcb]'}), log id: 3db67a
2017-05-06 14:27:56,830+03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler5) [507a0cb7] FINISH, GetHardwareInfoVDSCommand, log
id: 3db67a
2017-05-06 14:27:56,893+03 INFO
[org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand]
(DefaultQuartzScheduler5) [1179ed4f] Running command:
HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected
:  ID: 88cbdeab-37f4-47f3-a761-1cee75a62bcb Type: VDS
2017-05-06 14:27:56,929+03 INFO
[org.ovirt.engine.core.bll.SetNonOperationalVdsCommand]
(DefaultQuartzScheduler5) [5cae97a7] Running command:
SetNonOperationalVdsCommand internal: true. Entities affected :  ID:
88cbdeab-37f4-47f3-a761-1cee75a62bcb Type: VDS
2017-05-06 14:27:56,936+03 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(DefaultQuartzScheduler5) [5cae97a7] START, SetVdsStatusVDSCommand(HostName
= test3, SetVdsStatusVDSCommandParameters:{runAsync='true',
hostId='88cbdeab-37f4-47f3-a761-1cee75a62bcb', status='NonOperational',
nonOperationalReason='EMULATED_MACHINES_INCOMPATIBLE_WITH_CLUSTER_LEVEL',
stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 769664ce
2017-05-06 14:27:56,940+03 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(DefaultQuartzScheduler5) [5cae97a7] FINISH, SetVdsStatusVDSCommand, log
id: 769664ce
2017-05-06 14:27:56,960+03 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler5) [5cae97a7] EVENT_ID:
EMULATED_MACHINES_INCOMPATIBLE_WITH_CLUSTER_LEVEL(9,609), Correlation ID:
5cae97a7, Job ID: 892ef43d-db32-4092-81ae-06a1da6d5060, Call Stack: null,
Custom Event ID: -1, Message: Host test3 does not comply with the cluster
Default emulated machines. The current cluster compatibility level supports
[pc-i440fx-rhel7.3.0, pc-i440fx-2.6, pseries-rhel7.3.0] and the host
emulated machines are
pc-0.10,pc-0.11,pc-0.12,pc-0.13,pc-0.14,pc-0.15,pc-1.0,pc-1.0,pc-i440fx-2.1,pseries-rhel7.2.0,pc-i440fx-rhel7.2.0,rhel6.4.0,rhel6.5.0,rhel6.6.0,rhel6.7.0,rhel6.8.0,rhel6.9.0,rhel7.0.0,rhel7.2.0,rhel7.5.0,pc,isapc.
2017-05-06 14:27:57,001+03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler5) [5cae97a7] START,
GetHardwareInfoVDSCommand(HostName = test3,
VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
hostId='88cbdeab-37f4-47f3-a761-1cee75a62bcb',
vds='Host[test3,88cbdeab-37f4-47f3-a761-1cee75a62bcb]'}), log id: 546bc127
2017-05-06 14:27:57,007+03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler5) [5cae97a7] FINISH, GetHardwareInfoVDSCommand, log
id: 546bc127
2017-05-06 14:27:57,044+03 INFO
[org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand]
(DefaultQuartzScheduler5) [445f8699] Running command:
HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected
:  ID: 88cbdeab-37f4-47f3-a761-1cee75a62bcb Type: VDS
2017-05-06 14:27:57,064+03 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler5) [445f8699] EVENT_ID: VDS_DETECTED(13),
Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message:
Status of host test3 was set to NonOperational.
2017-05-06 14:27:57,086+03 INFO
[org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand]
(DefaultQuartzScheduler5) [30399168] Running command:
HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected
:  ID: 88cbdeab-37f4-47f3-a761-1cee75a62bcb Type: VDS
2017-05-06 14:27:57,099+03 INFO
[org.ovirt.engine.core.bll.HandleVdsVersionCommand]
(DefaultQuartzScheduler5) [74b1e5e7] Running command:
HandleVdsVersionCommand internal: true. Entities affected :  ID:
88cbdeab-37f4-47f3-a761-1cee75a62bcb Type: VDS
2017-05-06 14:27:57,102+03 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(DefaultQuartzScheduler5) [74b1e5e7] Host
'test3'(88cbdeab-37f4-47f3-a761-1cee75a62bcb) is already in NonOperational
status for reason 'EMULATED_MACHINES_INCOMPATIBLE_WITH_CLUSTER_LEVEL'.
SetNonOperationalVds command is skipped.
2017-05-06 14:27:58,714+03 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher]
(DefaultQuartzScheduler1) [] Fetched 0 VMs from VDS
'88cbdeab-37f4-47f3-a761-1cee75a62bcb'
2017-05-06 14:28:12,261+03 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler10) [] EVENT_ID: VDS_TIME_DRIFT_ALERT(604),
Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Host
test3 has time-drift of 3599 seconds while maximum configured value is 300
seconds.
2017-05-06 14:28:12,281+03 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler10) [] EVENT_ID: VDS_LOW_MEM(531), Correlation ID:
null, Call Stack: null, Custom Event ID: -1, Message: Available memory of
host test3 [0 MB] is under defined threshold [1024 MB].
2017-05-06 14:30:00,021+03 INFO
[org.ovirt.engine.core.bll.AutoRecoveryManager] (DefaultQuartzScheduler6)
[7e9c9b85] Autorecovering 1 hosts
2017-05-06 14:30:00,022+03 INFO
[org.ovirt.engine.core.bll.AutoRecoveryManager] (DefaultQuartzScheduler6)
[7e9c9b85] Autorecovering hosts id: 88cbdeab-37f4-47f3-a761-1cee75a62bcb ,
name : test3
2017-05-06 14:30:00,033+03 INFO
[org.ovirt.engine.core.bll.ActivateVdsCommand] (DefaultQuartzScheduler6)
[64ba5626] Lock Acquired to object
'EngineLock:{exclusiveLocks='[88cbdeab-37f4-47f3-a761-1cee75a62bcb=<VDS,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-05-06 14:30:00,042+03 INFO
[org.ovirt.engine.core.bll.ActivateVdsCommand] (DefaultQuartzScheduler6)
[64ba5626] Running command: ActivateVdsCommand internal: true. Entities
affected :  ID: 88cbdeab-37f4-47f3-a761-1cee75a62bcb Type: VDSAction group
MANIPULATE_HOST with role type ADMIN
2017-05-06 14:30:00,042+03 INFO
[org.ovirt.engine.core.bll.ActivateVdsCommand] (DefaultQuartzScheduler6)
[64ba5626] Before acquiring lock in order to prevent monitoring for host
'test3' from data-center 'Default'
2017-05-06 14:30:00,042+03 INFO
[org.ovirt.engine.core.bll.ActivateVdsCommand] (DefaultQuartzScheduler6)
[64ba5626] Lock acquired, from now a monitoring of host will be skipped for
host 'test3' from data-center 'Default'
2017-05-06 14:30:00,048+03 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(DefaultQuartzScheduler6) [64ba5626] START, SetVdsStatusVDSCommand(HostName
= test3, SetVdsStatusVDSCommandParameters:{runAsync='true',
hostId='88cbdeab-37f4-47f3-a761-1cee75a62bcb', status='Unassigned',
nonOperationalReason='NONE', stopSpmFailureLogged='false',
maintenanceReason='null'}), log id: 771398
2017-05-06 14:30:00,055+03 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(DefaultQuartzScheduler6) [64ba5626] FINISH, SetVdsStatusVDSCommand, log
id: 771398
2017-05-06 14:30:00,064+03 INFO
[org.ovirt.engine.core.bll.ActivateVdsCommand] (DefaultQuartzScheduler6)
[64ba5626] Activate host finished. Lock released. Monitoring can run now
for host 'test3' from data-center 'Default'
2017-05-06 14:30:00,073+03 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler6) [64ba5626] EVENT_ID: VDS_ACTIVATE_ASYNC(9,502),
Correlation ID: 64ba5626, Call Stack: null, Custom Event ID: -1, Message:
Host test3 was autorecovered.
2017-05-06 14:30:00,075+03 INFO
[org.ovirt.engine.core.bll.ActivateVdsCommand] (DefaultQuartzScheduler6)
[64ba5626] Lock freed to object
'EngineLock:{exclusiveLocks='[88cbdeab-37f4-47f3-a761-1cee75a62bcb=<VDS,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-05-06 14:30:01,769+03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler4) [4f87a9b3] START,
GetHardwareInfoVDSCommand(HostName = test3,
VdsIdAndVdsVDSCommandParametersBase:{runAsync='true',
hostId='88cbdeab-37f4-47f3-a761-1cee75a62bcb',
vds='Host[test3,88cbdeab-37f4-47f3-a761-1cee75a62bcb]'}), log id: f56da49
2017-05-06 14:30:01,779+03 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler4) [4f87a9b3] FINISH, GetHardwareInfoVDSCommand, log
id: f56da49
2017-05-06 14:30:01,844+03 INFO
[org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand]
(DefaultQuartzScheduler4) [541ddee4] Running command:
HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected
:  ID: 88cbdeab-37f4-47f3-a761-1cee75a62bcb Type: VDS
2017-05-06 14:30:01,903+03 INFO
[org.ovirt.engine.core.bll.SetNonOperationalVdsCommand]
(DefaultQuartzScheduler4) [57a8f4c5] Running command:
SetNonOperationalVdsCommand internal: true. Entities affected :  ID:
88cbdeab-37f4-47f3-a761-1cee75a62bcb Type: VDS
2017-05-06 14:30:01,909+03 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(DefaultQuartzScheduler4) [57a8f4c5] START, SetVdsStatusVDSCommand(HostName
= test3, SetVdsStatusVDSCommandParameters:{runAsync='true',
hostId='88cbdeab-37f4-47f3-a761-1cee75a62bcb', status='NonOperational',
nonOperationalReason='EMULATED_MACHINES_INCOMPATIBLE_WITH_CLUSTER_LEVEL',
stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 2f95d0c2
2017-05-06 14:30:01,921+03 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(DefaultQuartzScheduler4) [57a8f4c5] FINISH, SetVdsStatusVDSCommand, log
id: 2f95d0c2
2017-05-06 14:30:02,024+03 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler4) [57a8f4c5] EVENT_ID:
EMULATED_MACHINES_INCOMPATIBLE_WITH_CLUSTER_LEVEL(9,609), Correlation ID:
57a8f4c5, Job ID: c0f8e1c5-0876-46f0-b9f3-a82cd5cbf25c, Call Stack: null,
Custom Event ID: -1, Message: Host test3 does not comply with the cluster
Default emulated machines. The current cluster compatibility level supports
[pc-i440fx-rhel7.3.0, pc-i440fx-2.6, pseries-rhel7.3.0] and the host
emulated machines are
pc-0.10,pc-0.11,pc-0.12,pc-0.13,pc-0.14,pc-0.15,pc-1.0,pc-1.0,pc-i440fx-2.1,pseries-rhel7.2.0,pc-i440fx-rhel7.2.0,rhel6.4.0,rhel6.5.0,rhel6.6.0,rhel6.7.0,rhel6.8.0,rhel6.9.0,rhel7.0.0,rhel7.2.0,rhel7.5.0,pc,isapc.
2017-05-06 14:30:02,067+03 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler4) [57a8f4c5] EVENT_ID: VDS_DETECTED(13),
Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message:
Status of host test3 was set to NonOperational.
2017-05-06 14:30:02,104+03 INFO
[org.ovirt.engine.core.bll.HandleVdsVersionCommand]
(DefaultQuartzScheduler4) [1b34fa25] Running command:
HandleVdsVersionCommand internal: true. Entities affected :  ID:
88cbdeab-37f4-47f3-a761-1cee75a62bcb Type: VDS
2017-05-06 14:30:02,109+03 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(DefaultQuartzScheduler4) [1b34fa25] Host
'test3'(88cbdeab-37f4-47f3-a761-1cee75a62bcb) is already in NonOperational
status for reason 'EMULATED_MACHINES_INCOMPATIBLE_WITH_CLUSTER_LEVEL'.
SetNonOperationalVds command is skipped.
^C
VDSM-FAKE LOG
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,070 Message CONNECT
May  6 15:27:57 vdsm-fake journal: accept-version:1.2
May  6 15:27:57 vdsm-fake journal: heart-beat:0,24000
May  6 15:27:57 vdsm-fake journal: host:test3
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal: SUBSCRIBE
May  6 15:27:57 vdsm-fake journal: destination:jms.queue.events
May  6 15:27:57 vdsm-fake journal: ack:auto
May  6 15:27:57 vdsm-fake journal: id:4504105f-66e8-4163-b515-9960ca956814
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal: SUBSCRIBE
May  6 15:27:57 vdsm-fake journal: destination:jms.topic.vdsm_responses
May  6 15:27:57 vdsm-fake journal: ack:auto
May  6 15:27:57 vdsm-fake journal: id:d101be23-5fa8-4ce3-8fac-7949efdb5fbb
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,072 Message CONNECT
May  6 15:27:57 vdsm-fake journal: accept-version:1.2
May  6 15:27:57 vdsm-fake journal: heart-beat:0,24000
May  6 15:27:57 vdsm-fake journal: host:test3
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,072 Message CONNECTED
May  6 15:27:57 vdsm-fake journal: heart-beat:24000,0
May  6 15:27:57 vdsm-fake journal:
session:9ab26f9f-17a0-4d77-833f-f206a07ce7e5
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,072
StompCommonClient Message sent: CONNECTED
May  6 15:27:57 vdsm-fake journal: heart-beat:24000,0
May  6 15:27:57 vdsm-fake journal:
session:9ab26f9f-17a0-4d77-833f-f206a07ce7e5
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,072 Message SUBSCRIBE
May  6 15:27:57 vdsm-fake journal: destination:jms.queue.events
May  6 15:27:57 vdsm-fake journal: ack:auto
May  6 15:27:57 vdsm-fake journal: id:4504105f-66e8-4163-b515-9960ca956814
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,073 Message ACK
May  6 15:27:57 vdsm-fake journal: id:4504105f-66e8-4163-b515-9960ca956814
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,073
StompCommonClient Message sent: ACK
May  6 15:27:57 vdsm-fake journal: id:4504105f-66e8-4163-b515-9960ca956814
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,073 Message SUBSCRIBE
May  6 15:27:57 vdsm-fake journal: destination:jms.topic.vdsm_responses
May  6 15:27:57 vdsm-fake journal: ack:auto
May  6 15:27:57 vdsm-fake journal: id:d101be23-5fa8-4ce3-8fac-7949efdb5fbb
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,073 Message ACK
May  6 15:27:57 vdsm-fake journal: id:d101be23-5fa8-4ce3-8fac-7949efdb5fbb
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,073
StompCommonClient Message sent: ACK
May  6 15:27:57 vdsm-fake journal: id:d101be23-5fa8-4ce3-8fac-7949efdb5fbb
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,076 Message SEND
May  6 15:27:57 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:27:57 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:27:57 vdsm-fake journal: content-length:105
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getCapabilities","params":{},"id":"ef5c6197-ec90-4459-a284-49001b18c2db"}
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,077
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,089
JsonRpcServer$MessageHandler Request is Host.getCapabilities got response
{"jsonrpc":"2.0","result":{"HBAInventory":{"iSCSI":[{"InitiatorName":"iqn.1994-05.com.example:ef52ec17bb0"}],"FC":[]},"vlans":{},"lastClientIface":"ovirtmgmt","cpuSpeed":"1200.000","autoNumaBalancing":"1","cpuModel":"Intel(R)
Xeon(R) CPU E5606 @
2.13GHz","reservedMem":"321","numaNodes":{"0":{"totalMemory":3988,"cpus":[1,3,5,7,9,11,13,15]},"1":{"totalMemory":3988,"cpus":[0,2,4,6,8,10,12,14]}},"selinux":{"mode":"1"},"packages2":{"qemu-kvm":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"libvirt":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"spice-server":{"release":"5.fc17","buildtime":"1336983054","version":"0.10.1"},"qemu-img":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"kernel":{"release":"5.fc17.x86_64","buildtime":"1357699251.0","version":"3.6.11"},"mom":{"release":"1.fc17","buildtime":"1354824066","version":"0.3.0"},"vdsm":{"release":"0.141.gita11e8f2.fc17","buildtime":"1359653302","version":"4.10.3"}},"networks":{"ovirtmgmt":{"iface":"ovirtmgmt","cfg":{"BOOTPROTO":"dhcp","DEVICE":"ovirtmgmt","ONBOOT":"yes","DELAY":"0","TYPE":"Ethernet"},"netmask":"255.255.252.0","bridged":true,"addr":"146.11.117.90","ports":["em1"],"stp":"off","gateway":"10.34.63.254","mtu":"1500","switch":"legacy"}},"uuid":"840d548e-03d3-4425-bb69-fe6b5886929a_80:A9:FE:B3:73:B0:E9","operatingSystem":{"release":"1","name":"Fedora","version":"17"},"management_ip":"","nics":{"em1":{"hwaddr":"53:5F:BD:FB:5F:78","cfg":{"BOOTPROTO":"dhcp","HWADDR":"53:5F:BD:FB:5F:78","DEVICE":"em1","ONBOOT":"yes","BRIDGE":"ovirtmgmt","UUID":"eb19ec8d-1ab7-455e-934e-097a6b198ecf","NETBOOT":"yes","TYPE":"Ethernet","NM_CONTROLLED":"yes","NAME":"Boot
Disk"},"netmask":"","addr":"","speed":1000,"mtu":"1500"},"em2":{"hwaddr":"DB:F2:C8:76:DE:81","cfg":{"BOOTPROTO":"dhcp","HWADDR":"DB:F2:C8:76:DE:81","DEVICE":"em2","ONBOOT":"no","BRIDGE":"ovirtmgmt","UUID":"afd4d997-3e24-4e64-92cc-6306a8427d77","NETBOOT":"yes","TYPE":"Ethernet","NM_CONTROLLED":"yes"},"netmask":"","addr":"","speed":1000,"mtu":"1500"}},"kvmEnabled":"true","lastClient":"10.36.6.76","software_version":"4.10","cpuThreads":"4","hooks":{},"numaNodeDistance":{"0":[10,20],"1":[20,10]},"netConfigDirty":"False","guestOverhead":"65","ISCSIInitiatorName":"iqn.1994-05.com.example:ef52ec17bb0","rngSources":["RANDOM"],"bridges":{"ovirtmgmt":{"netmask":"255.255.252.0","cfg":{"BOOTPROTO":"dhcp","DEVICE":"ovirtmgmt","ONBOOT":"yes","DELAY":"0","TYPE":"Ethernet"},"addr":"146.11.117.90","ports":["em1"],"stp":"off","gateway":"96.127.51.142","mtu":"1500"}},"kdumpStatus":"1","cpuSockets":"1","supportedProtocols":["2.2","2.3"],"emulatedMachines":["pc-0.10","pc-0.11","pc-0.12","pc-0.13","pc-0.14","pc-0.15","pc-1.0","pc-1.0","pc-i440fx-2.1","pseries-rhel7.2.0","pc-i440fx-rhel7.2.0","rhel6.4.0","rhel6.5.0","rhel6.6.0","rhel6.7.0","rhel6.8.0","rhel6.9.0","rhel7.0.0","rhel7.2.0","rhel7.5.0","pc","isapc"],"onlineCpus":[1,3,5,7,9,11,13,15,0,2,4,6,8,10,12,14],"software_revision":"0.141","version_name":"Snow
Man","supportedENGINEs":["3.0","3.1","3.2","3.3","3.4","3.5","3.6","4.0","4.1"],"vmTypes":["kvm"],"cpuCores":"4","bondings":{"bond0":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond2":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond1":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond4":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond3":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"}},"cpuFlags":"fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ss,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,rep_good,nopl,eagerfpu,pni,pclmulqdq,vmx,ssse3,fma,cx16,pcid,sse4_1,sse4_2,x2apic,movbe,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,hypervisor,lahf_lm,abm,tpr_shadow,vnmi,flexpriority,ept,fsgsbase,bmi1,avx2,smep,bmi2,erms,invpcid,xsaveopt,model_Haswell-noTSX,model_Nehalem,model_Conroe,model_Penryn,model_Westmere,model_SandyBridge","memSize":"7976","clusterLevels":["3.0","3.1","3.2","3.3","3.4","3.5","3.6","4.0","4.1"]},"id":"ef5c6197-ec90-4459-a284-49001b18c2db"}
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,090 Message MESSAGE
May  6 15:27:57 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:27:57 vdsm-fake journal: content-length:4222
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"HBAInventory":{"iSCSI":[{"InitiatorName":"iqn.1994-05.com.example:ef52ec17bb0"}],"FC":[]},"vlans":{},"lastClientIface":"ovirtmgmt","cpuSpeed":"1200.000","autoNumaBalancing":"1","cpuModel":"Intel(R)
Xeon(R) CPU E5606 @
2.13GHz","reservedMem":"321","numaNodes":{"0":{"totalMemory":3988,"cpus":[1,3,5,7,9,11,13,15]},"1":{"totalMemory":3988,"cpus":[0,2,4,6,8,10,12,14]}},"selinux":{"mode":"1"},"packages2":{"qemu-kvm":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"libvirt":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"spice-server":{"release":"5.fc17","buildtime":"1336983054","version":"0.10.1"},"qemu-img":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"kernel":{"release":"5.fc17.x86_64","buildtime":"1357699251.0","version":"3.6.11"},"mom":{"release":"1.fc17","buildtime":"1354824066","version":"0.3.0"},"vdsm":{"release":"0.141.gita11e8f2.fc17","buildtime":"1359653302","version":"4.10.3"}},"networks":{"ovirtmgmt":{"iface":"ovirtmgmt","cfg":{"BOOTPROTO":"dhcp","DEVICE":"ovirtmgmt","ONBOOT":"yes","DELAY":"0","TYPE":"Ethernet"},"netmask":"255.255.252.0","bridged":true,"addr":"146.11.117.90","ports":["em1"],"stp":"off","gateway":"10.34.63.254","mtu":"1500","switch":"legacy"}},"uuid":"840d548e-03d3-4425-bb69-fe6b5886929a_80:A9:FE:B3:73:B0:E9","operatingSystem":{"release":"1","name":"Fedora","version":"17"},"management_ip":"","nics":{"em1":{"hwaddr":"53:5F:BD:FB:5F:78","cfg":{"BOOTPROTO":"dhcp","HWADDR":"53:5F:BD:FB:5F:78","DEVICE":"em1","ONBOOT":"yes","BRIDGE":"ovirtmgmt","UUID":"eb19ec8d-1ab7-455e-934e-097a6b198ecf","NETBOOT":"yes","TYPE":"Ethernet","NM_CONTROLLED":"yes","NAME":"Boot
Disk"},"netmask":"","addr":"","speed":1000,"mtu":"1500"},"em2":{"hwaddr":"DB:F2:C8:76:DE:81","cfg":{"BOOTPROTO":"dhcp","HWADDR":"DB:F2:C8:76:DE:81","DEVICE":"em2","ONBOOT":"no","BRIDGE":"ovirtmgmt","UUID":"afd4d997-3e24-4e64-92cc-6306a8427d77","NETBOOT":"yes","TYPE":"Ethernet","NM_CONTROLLED":"yes"},"netmask":"","addr":"","speed":1000,"mtu":"1500"}},"kvmEnabled":"true","lastClient":"10.36.6.76","software_version":"4.10","cpuThreads":"4","hooks":{},"numaNodeDistance":{"0":[10,20],"1":[20,10]},"netConfigDirty":"False","guestOverhead":"65","ISCSIInitiatorName":"iqn.1994-05.com.example:ef52ec17bb0","rngSources":["RANDOM"],"bridges":{"ovirtmgmt":{"netmask":"255.255.252.0","cfg":{"BOOTPROTO":"dhcp","DEVICE":"ovirtmgmt","ONBOOT":"yes","DELAY":"0","TYPE":"Ethernet"},"addr":"146.11.117.90","ports":["em1"],"stp":"off","gateway":"96.127.51.142","mtu":"1500"}},"kdumpStatus":"1","cpuSockets":"1","supportedProtocols":["2.2","2.3"],"emulatedMachines":["pc-0.10","pc-0.11","pc-0.12","pc-0.13","pc-0.14","pc-0.15","pc-1.0","pc-1.0","pc-i440fx-2.1","pseries-rhel7.2.0","pc-i440fx-rhel7.2.0","rhel6.4.0","rhel6.5.0","rhel6.6.0","rhel6.7.0","rhel6.8.0","rhel6.9.0","rhel7.0.0","rhel7.2.0","rhel7.5.0","pc","isapc"],"onlineCpus":[1,3,5,7,9,11,13,15,0,2,4,6,8,10,12,14],"software_revision":"0.141","version_name":"Snow
Man","supportedENGINEs":["3.0","3.1","3.2","3.3","3.4","3.5","3.6","4.0","4.1"],"vmTypes":["kvm"],"cpuCores":"4","bondings":{"bond0":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond2":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond1":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond4":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond3":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"}},"cpuFlags":"fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ss,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,rep_good,nopl,eagerfpu,pni,pclmulqdq,vmx,ssse3,fma,cx16,pcid,sse4_1,sse4_2,x2apic,movbe,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,hypervisor,lahf_lm,abm,tpr_shadow,vnmi,flexpriority,ept,fsgsbase,bmi1,avx2,smep,bmi2,erms,invpcid,xsaveopt,model_Haswell-noTSX,model_Nehalem,model_Conroe,model_Penryn,model_Westmere,model_SandyBridge","memSize":"7976","clusterLevels":["3.0","3.1","3.2","3.3","3.4","3.5","3.6","4.0","4.1"]},"id":"ef5c6197-ec90-4459-a284-49001b18c2db"}
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,091
StompCommonClient Message sent: MESSAGE
May  6 15:27:57 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:27:57 vdsm-fake journal: content-length:4222
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal: <JsonRpcResponse id:
"ef5c6197-ec90-4459-a284-49001b18c2db" result:
{HBAInventory={iSCSI=[{InitiatorName=iqn.1994-05.com.example:ef52ec17bb0}],
FC=[]}, vlans={}, lastClientIface=ovirtmgmt, cpuSpeed=1200.000,
autoNumaBalancing=1, cpuModel=Intel(R) Xeon(R) CPU E5606 @ 2.13GHz,
reservedMem=321, numaNodes={0={totalMemory=3988, cpus=[1, 3, 5, 7, 9, 11,
13, 15]}, 1={totalMemory=3988, cpus=[0, 2, 4, 6, 8, 10, 12, 14]}},
selinux={mode=1}, packages2={qemu-kvm={release=2.fc17,
buildtime=1349642820, version=1.0.1}, libvirt={release=2.fc17,
buildtime=1349642820, version=1.0.1}, spice-server={release=5.fc17,
buildtime=1336983054, version=0.10.1}, qemu-img={release=2.fc17,
buildtime=1349642820, version=1.0.1}, kernel={release=5.fc17.x86_64,
buildtime=1357699251.0, version=3.6.11}, mom={release=1.fc17,
buildtime=1354824066, version=0.3.0}, vdsm={release=0.141.gita11e8f2.fc17,
buildtime=1359653302, version=4.10.3}},
networks={ovirtmgmt={iface=ovirtmgmt, cfg={BOOTPROTO=dhcp,
DEVICE=ovirtmgmt, ONBOOT=yes, DELAY=0, TYPE=Ethernet},
netmask=255.255.252.0, bridged=true, addr=146.11.117.90, ports=[em1],
stp=off, gateway=10.34.63.254, mtu=1500, switch=legacy}},
uuid=840d548e-03d3-4425-bb69-fe6b5886929a_80:A9:FE:B3:73:B0:E9,
operatingSystem={release=1, name=Fedora, version=17}, management_ip=,
nics={em1={hwaddr=53:5F:BD:FB:5F:78, cfg={BOOTPROTO=dhcp,
HWADDR=53:5F:BD:FB:5F:78, DEVICE=em1, ONBOOT=yes, BRIDGE=ovirtmgmt,
UUID=eb19ec8d-1ab7-455e-934e-097a6b198ecf, NETBOOT=yes, TYPE=Ethernet,
NM_CONTROLLED=yes, NAME=Boot Disk}, netmask=, addr=, speed=1000, mtu=1500},
em2={hwaddr=DB:F2:C8:76:DE:81, cfg={BOOTPROTO=dhcp,
HWADDR=DB:F2:C8:76:DE:81, DEVICE=em2, ONBOOT=no, BRIDGE=ovirtmgmt,
UUID=afd4d997-3e24-4e64-92cc-6306a8427d77, NETBOOT=yes, TYPE=Ethernet,
NM_CONTROLLED=yes}, netmask=, addr=, speed=1000, mtu=1500}},
kvmEnabled=true, lastClient=10.36.6.76, software_version=4.10,
cpuThreads=4, hooks={}, numaNodeDistance={0=[10, 20], 1=[20, 10]},
netConfigDirty=False, guestOverhead=65,
ISCSIInitiatorName=iqn.1994-05.com.example:ef52ec17bb0,
rngSources=[RANDOM], bridges={ovirtmgmt={netmask=255.255.252.0,
cfg={BOOTPROTO=dhcp, DEVICE=ovirtmgmt, ONBOOT=yes, DELAY=0, TYPE=Ethernet},
addr=146.11.117.90, ports=[em1], stp=off, gateway=96.127.51.142,
mtu=1500}}, kdumpStatus=1, cpuSockets=1, supportedProtocols=[2.2, 2.3],
emulatedMachines=[pc-0.10, pc-0.11, pc-0.12, pc-0.13, pc-0.14, pc-0.15,
pc-1.0, pc-1.0, pc-i440fx-2.1, pseries-rhel7.2.0, pc-i440fx-rhel7.2.0,
rhel6.4.0, rhel6.5.0, rhel6.6.0, rhel6.7.0, rhel6.8.0, rhel6.9.0,
rhel7.0.0, rhel7.2.0, rhel7.5.0, pc, isapc], onlineCpus=[1, 3, 5, 7, 9, 11,
13, 15, 0, 2, 4, 6, 8, 10, 12, 14], software_revision=0.141,
version_name=Snow Man, supportedENGINEs=[3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6,
4.0, 4.1], vmTypes=[kvm], cpuCores=4, bondings={bond0={slaves=[],
hwaddr=00:00:00:00:00:00, cfg={}, netmask=, addr=, mtu=150},
bond2={slaves=[], hwaddr=00:00:00:00:00:00, cfg={}, netmask=, addr=,
mtu=150}, bond1={slaves=[], hwaddr=00:00:00:00:00:00, cfg={}, netmask=,
addr=, mtu=150}, bond4={slaves=[], hwaddr=00:00:00:00:00:00, cfg={},
netmask=, addr=, mtu=150}, bond3={slaves=[], hwaddr=00:00:00:00:00:00,
cfg={}, netmask=, addr=, mtu=150}},
cpuFlags=fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ss,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,rep_good,nopl,eagerfpu,pni,pclmulqdq,vmx,ssse3,fma,cx16,pcid,sse4_1,sse4_2,x2apic,movbe,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,hypervisor,lahf_lm,abm,tpr_shadow,vnmi,flexpriority,ept,fsgsbase,bmi1,avx2,smep,bmi2,erms,invpcid,xsaveopt,model_Haswell-noTSX,model_Nehalem,model_Conroe,model_Penryn,model_Westmere,model_SandyBridge,
memSize=7976, clusterLevels=[3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 4.0, 4.1]}>
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,138 Message SEND
May  6 15:27:57 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:27:57 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:27:57 vdsm-fake journal: content-length:105
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getHardwareInfo","params":{},"id":"809d902a-a461-409f-9f4c-f58ccfadfe97"}
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,139
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,140
JsonRpcServer$MessageHandler Request is Host.getHardwareInfo got response
{"jsonrpc":"2.0","result":{"systemFamily":"","systemSerialNumber":"CZJ2320M6N","systemProductName":"ProLiant
DL160
G6","systemManufacturer":"HP","systemUUID":"840d548e-03d3-4425-bb69-fe6b5886929a","systemVersion":""},"id":"809d902a-a461-409f-9f4c-f58ccfadfe97"}
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,140 Message MESSAGE
May  6 15:27:57 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:27:57 vdsm-fake journal: content-length:261
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"systemFamily":"","systemSerialNumber":"CZJ2320M6N","systemProductName":"ProLiant
DL160
G6","systemManufacturer":"HP","systemUUID":"840d548e-03d3-4425-bb69-fe6b5886929a","systemVersion":""},"id":"809d902a-a461-409f-9f4c-f58ccfadfe97"}
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,140
StompCommonClient Message sent: MESSAGE
May  6 15:27:57 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:27:57 vdsm-fake journal: content-length:261
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal: <JsonRpcResponse id:
"809d902a-a461-409f-9f4c-f58ccfadfe97" result: {systemFamily=,
systemSerialNumber=CZJ2320M6N, systemProductName=ProLiant DL160 G6,
systemManufacturer=HP, systemUUID=840d548e-03d3-4425-bb69-fe6b5886929a,
systemVersion=}>
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,281 Message SEND
May  6 15:27:57 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:27:57 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:27:57 vdsm-fake journal: content-length:105
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getCapabilities","params":{},"id":"c974c4bb-ad2f-4056-81a2-a4ee53bf81a3"}
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,282
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,283
JsonRpcServer$MessageHandler Request is Host.getCapabilities got response
{"jsonrpc":"2.0","result":{"HBAInventory":{"iSCSI":[{"InitiatorName":"iqn.1994-05.com.example:ef52ec17bb0"}],"FC":[]},"vlans":{},"lastClientIface":"ovirtmgmt","cpuSpeed":"1200.000","autoNumaBalancing":"1","cpuModel":"Intel(R)
Xeon(R) CPU E5606 @
2.13GHz","reservedMem":"321","numaNodes":{"0":{"totalMemory":3988,"cpus":[1,3,5,7,9,11,13,15]},"1":{"totalMemory":3988,"cpus":[0,2,4,6,8,10,12,14]}},"selinux":{"mode":"1"},"packages2":{"qemu-kvm":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"libvirt":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"spice-server":{"release":"5.fc17","buildtime":"1336983054","version":"0.10.1"},"qemu-img":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"kernel":{"release":"5.fc17.x86_64","buildtime":"1357699251.0","version":"3.6.11"},"mom":{"release":"1.fc17","buildtime":"1354824066","version":"0.3.0"},"vdsm":{"release":"0.141.gita11e8f2.fc17","buildtime":"1359653302","version":"4.10.3"}},"networks":{"ovirtmgmt":{"iface":"ovirtmgmt","cfg":{"BOOTPROTO":"dhcp","DEVICE":"ovirtmgmt","ONBOOT":"yes","DELAY":"0","TYPE":"Ethernet"},"netmask":"255.255.252.0","bridged":true,"addr":"146.11.117.90","ports":["em1"],"stp":"off","gateway":"10.34.63.254","mtu":"1500","switch":"legacy"}},"uuid":"840d548e-03d3-4425-bb69-fe6b5886929a_80:A9:FE:B3:73:B0:E9","operatingSystem":{"release":"1","name":"Fedora","version":"17"},"management_ip":"","nics":{"em1":{"hwaddr":"53:5F:BD:FB:5F:78","cfg":{"BOOTPROTO":"dhcp","HWADDR":"53:5F:BD:FB:5F:78","DEVICE":"em1","ONBOOT":"yes","BRIDGE":"ovirtmgmt","UUID":"eb19ec8d-1ab7-455e-934e-097a6b198ecf","NETBOOT":"yes","TYPE":"Ethernet","NM_CONTROLLED":"yes","NAME":"Boot
Disk"},"netmask":"","addr":"","speed":1000,"mtu":"1500"},"em2":{"hwaddr":"DB:F2:C8:76:DE:81","cfg":{"BOOTPROTO":"dhcp","HWADDR":"DB:F2:C8:76:DE:81","DEVICE":"em2","ONBOOT":"no","BRIDGE":"ovirtmgmt","UUID":"afd4d997-3e24-4e64-92cc-6306a8427d77","NETBOOT":"yes","TYPE":"Ethernet","NM_CONTROLLED":"yes"},"netmask":"","addr":"","speed":1000,"mtu":"1500"}},"kvmEnabled":"true","lastClient":"10.36.6.76","software_version":"4.10","cpuThreads":"4","hooks":{},"numaNodeDistance":{"0":[10,20],"1":[20,10]},"netConfigDirty":"False","guestOverhead":"65","ISCSIInitiatorName":"iqn.1994-05.com.example:ef52ec17bb0","rngSources":["RANDOM"],"bridges":{"ovirtmgmt":{"netmask":"255.255.252.0","cfg":{"BOOTPROTO":"dhcp","DEVICE":"ovirtmgmt","ONBOOT":"yes","DELAY":"0","TYPE":"Ethernet"},"addr":"146.11.117.90","ports":["em1"],"stp":"off","gateway":"96.127.51.142","mtu":"1500"}},"kdumpStatus":"1","cpuSockets":"1","supportedProtocols":["2.2","2.3"],"emulatedMachines":["pc-0.10","pc-0.11","pc-0.12","pc-0.13","pc-0.14","pc-0.15","pc-1.0","pc-1.0","pc-i440fx-2.1","pseries-rhel7.2.0","pc-i440fx-rhel7.2.0","rhel6.4.0","rhel6.5.0","rhel6.6.0","rhel6.7.0","rhel6.8.0","rhel6.9.0","rhel7.0.0","rhel7.2.0","rhel7.5.0","pc","isapc"],"onlineCpus":[1,3,5,7,9,11,13,15,0,2,4,6,8,10,12,14],"software_revision":"0.141","version_name":"Snow
Man","supportedENGINEs":["3.0","3.1","3.2","3.3","3.4","3.5","3.6","4.0","4.1"],"vmTypes":["kvm"],"cpuCores":"4","bondings":{"bond0":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond2":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond1":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond4":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond3":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"}},"cpuFlags":"fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ss,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,rep_good,nopl,eagerfpu,pni,pclmulqdq,vmx,ssse3,fma,cx16,pcid,sse4_1,sse4_2,x2apic,movbe,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,hypervisor,lahf_lm,abm,tpr_shadow,vnmi,flexpriority,ept,fsgsbase,bmi1,avx2,smep,bmi2,erms,invpcid,xsaveopt,model_Haswell-noTSX,model_Nehalem,model_Conroe,model_Penryn,model_Westmere,model_SandyBridge","memSize":"7976","clusterLevels":["3.0","3.1","3.2","3.3","3.4","3.5","3.6","4.0","4.1"]},"id":"c974c4bb-ad2f-4056-81a2-a4ee53bf81a3"}
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,283 Message MESSAGE
May  6 15:27:57 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:27:57 vdsm-fake journal: content-length:4222
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"HBAInventory":{"iSCSI":[{"InitiatorName":"iqn.1994-05.com.example:ef52ec17bb0"}],"FC":[]},"vlans":{},"lastClientIface":"ovirtmgmt","cpuSpeed":"1200.000","autoNumaBalancing":"1","cpuModel":"Intel(R)
Xeon(R) CPU E5606 @
2.13GHz","reservedMem":"321","numaNodes":{"0":{"totalMemory":3988,"cpus":[1,3,5,7,9,11,13,15]},"1":{"totalMemory":3988,"cpus":[0,2,4,6,8,10,12,14]}},"selinux":{"mode":"1"},"packages2":{"qemu-kvm":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"libvirt":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"spice-server":{"release":"5.fc17","buildtime":"1336983054","version":"0.10.1"},"qemu-img":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"kernel":{"release":"5.fc17.x86_64","buildtime":"1357699251.0","version":"3.6.11"},"mom":{"release":"1.fc17","buildtime":"1354824066","version":"0.3.0"},"vdsm":{"release":"0.141.gita11e8f2.fc17","buildtime":"1359653302","version":"4.10.3"}},"networks":{"ovirtmgmt":{"iface":"ovirtmgmt","cfg":{"BOOTPROTO":"dhcp","DEVICE":"ovirtmgmt","ONBOOT":"yes","DELAY":"0","TYPE":"Ethernet"},"netmask":"255.255.252.0","bridged":true,"addr":"146.11.117.90","ports":["em1"],"stp":"off","gateway":"10.34.63.254","mtu":"1500","switch":"legacy"}},"uuid":"840d548e-03d3-4425-bb69-fe6b5886929a_80:A9:FE:B3:73:B0:E9","operatingSystem":{"release":"1","name":"Fedora","version":"17"},"management_ip":"","nics":{"em1":{"hwaddr":"53:5F:BD:FB:5F:78","cfg":{"BOOTPROTO":"dhcp","HWADDR":"53:5F:BD:FB:5F:78","DEVICE":"em1","ONBOOT":"yes","BRIDGE":"ovirtmgmt","UUID":"eb19ec8d-1ab7-455e-934e-097a6b198ecf","NETBOOT":"yes","TYPE":"Ethernet","NM_CONTROLLED":"yes","NAME":"Boot
Disk"},"netmask":"","addr":"","speed":1000,"mtu":"1500"},"em2":{"hwaddr":"DB:F2:C8:76:DE:81","cfg":{"BOOTPROTO":"dhcp","HWADDR":"DB:F2:C8:76:DE:81","DEVICE":"em2","ONBOOT":"no","BRIDGE":"ovirtmgmt","UUID":"afd4d997-3e24-4e64-92cc-6306a8427d77","NETBOOT":"yes","TYPE":"Ethernet","NM_CONTROLLED":"yes"},"netmask":"","addr":"","speed":1000,"mtu":"1500"}},"kvmEnabled":"true","lastClient":"10.36.6.76","software_version":"4.10","cpuThreads":"4","hooks":{},"numaNodeDistance":{"0":[10,20],"1":[20,10]},"netConfigDirty":"False","guestOverhead":"65","ISCSIInitiatorName":"iqn.1994-05.com.example:ef52ec17bb0","rngSources":["RANDOM"],"bridges":{"ovirtmgmt":{"netmask":"255.255.252.0","cfg":{"BOOTPROTO":"dhcp","DEVICE":"ovirtmgmt","ONBOOT":"yes","DELAY":"0","TYPE":"Ethernet"},"addr":"146.11.117.90","ports":["em1"],"stp":"off","gateway":"96.127.51.142","mtu":"1500"}},"kdumpStatus":"1","cpuSockets":"1","supportedProtocols":["2.2","2.3"],"emulatedMachines":["pc-0.10","pc-0.11","pc-0.12","pc-0.13","pc-0.14","pc-0.15","pc-1.0","pc-1.0","pc-i440fx-2.1","pseries-rhel7.2.0","pc-i440fx-rhel7.2.0","rhel6.4.0","rhel6.5.0","rhel6.6.0","rhel6.7.0","rhel6.8.0","rhel6.9.0","rhel7.0.0","rhel7.2.0","rhel7.5.0","pc","isapc"],"onlineCpus":[1,3,5,7,9,11,13,15,0,2,4,6,8,10,12,14],"software_revision":"0.141","version_name":"Snow
Man","supportedENGINEs":["3.0","3.1","3.2","3.3","3.4","3.5","3.6","4.0","4.1"],"vmTypes":["kvm"],"cpuCores":"4","bondings":{"bond0":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond2":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond1":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond4":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond3":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"}},"cpuFlags":"fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ss,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,rep_good,nopl,eagerfpu,pni,pclmulqdq,vmx,ssse3,fma,cx16,pcid,sse4_1,sse4_2,x2apic,movbe,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,hypervisor,lahf_lm,abm,tpr_shadow,vnmi,flexpriority,ept,fsgsbase,bmi1,avx2,smep,bmi2,erms,invpcid,xsaveopt,model_Haswell-noTSX,model_Nehalem,model_Conroe,model_Penryn,model_Westmere,model_SandyBridge","memSize":"7976","clusterLevels":["3.0","3.1","3.2","3.3","3.4","3.5","3.6","4.0","4.1"]},"id":"c974c4bb-ad2f-4056-81a2-a4ee53bf81a3"}
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,287
StompCommonClient Message sent: MESSAGE
May  6 15:27:57 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:27:57 vdsm-fake journal: content-length:4222
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal: <JsonRpcResponse id:
"c974c4bb-ad2f-4056-81a2-a4ee53bf81a3" result:
{HBAInventory={iSCSI=[{InitiatorName=iqn.1994-05.com.example:ef52ec17bb0}],
FC=[]}, vlans={}, lastClientIface=ovirtmgmt, cpuSpeed=1200.000,
autoNumaBalancing=1, cpuModel=Intel(R) Xeon(R) CPU E5606 @ 2.13GHz,
reservedMem=321, numaNodes={0={totalMemory=3988, cpus=[1, 3, 5, 7, 9, 11,
13, 15]}, 1={totalMemory=3988, cpus=[0, 2, 4, 6, 8, 10, 12, 14]}},
selinux={mode=1}, packages2={qemu-kvm={release=2.fc17,
buildtime=1349642820, version=1.0.1}, libvirt={release=2.fc17,
buildtime=1349642820, version=1.0.1}, spice-server={release=5.fc17,
buildtime=1336983054, version=0.10.1}, qemu-img={release=2.fc17,
buildtime=1349642820, version=1.0.1}, kernel={release=5.fc17.x86_64,
buildtime=1357699251.0, version=3.6.11}, mom={release=1.fc17,
buildtime=1354824066, version=0.3.0}, vdsm={release=0.141.gita11e8f2.fc17,
buildtime=1359653302, version=4.10.3}},
networks={ovirtmgmt={iface=ovirtmgmt, cfg={BOOTPROTO=dhcp,
DEVICE=ovirtmgmt, ONBOOT=yes, DELAY=0, TYPE=Ethernet},
netmask=255.255.252.0, bridged=true, addr=146.11.117.90, ports=[em1],
stp=off, gateway=10.34.63.254, mtu=1500, switch=legacy}},
uuid=840d548e-03d3-4425-bb69-fe6b5886929a_80:A9:FE:B3:73:B0:E9,
operatingSystem={release=1, name=Fedora, version=17}, management_ip=,
nics={em1={hwaddr=53:5F:BD:FB:5F:78, cfg={BOOTPROTO=dhcp,
HWADDR=53:5F:BD:FB:5F:78, DEVICE=em1, ONBOOT=yes, BRIDGE=ovirtmgmt,
UUID=eb19ec8d-1ab7-455e-934e-097a6b198ecf, NETBOOT=yes, TYPE=Ethernet,
NM_CONTROLLED=yes, NAME=Boot Disk}, netmask=, addr=, speed=1000, mtu=1500},
em2={hwaddr=DB:F2:C8:76:DE:81, cfg={BOOTPROTO=dhcp,
HWADDR=DB:F2:C8:76:DE:81, DEVICE=em2, ONBOOT=no, BRIDGE=ovirtmgmt,
UUID=afd4d997-3e24-4e64-92cc-6306a8427d77, NETBOOT=yes, TYPE=Ethernet,
NM_CONTROLLED=yes}, netmask=, addr=, speed=1000, mtu=1500}},
kvmEnabled=true, lastClient=10.36.6.76, software_version=4.10,
cpuThreads=4, hooks={}, numaNodeDistance={0=[10, 20], 1=[20, 10]},
netConfigDirty=False, guestOverhead=65,
ISCSIInitiatorName=iqn.1994-05.com.example:ef52ec17bb0,
rngSources=[RANDOM], bridges={ovirtmgmt={netmask=255.255.252.0,
cfg={BOOTPROTO=dhcp, DEVICE=ovirtmgmt, ONBOOT=yes, DELAY=0, TYPE=Ethernet},
addr=146.11.117.90, ports=[em1], stp=off, gateway=96.127.51.142,
mtu=1500}}, kdumpStatus=1, cpuSockets=1, supportedProtocols=[2.2, 2.3],
emulatedMachines=[pc-0.10, pc-0.11, pc-0.12, pc-0.13, pc-0.14, pc-0.15,
pc-1.0, pc-1.0, pc-i440fx-2.1, pseries-rhel7.2.0, pc-i440fx-rhel7.2.0,
rhel6.4.0, rhel6.5.0, rhel6.6.0, rhel6.7.0, rhel6.8.0, rhel6.9.0,
rhel7.0.0, rhel7.2.0, rhel7.5.0, pc, isapc], onlineCpus=[1, 3, 5, 7, 9, 11,
13, 15, 0, 2, 4, 6, 8, 10, 12, 14], software_revision=0.141,
version_name=Snow Man, supportedENGINEs=[3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6,
4.0, 4.1], vmTypes=[kvm], cpuCores=4, bondings={bond0={slaves=[],
hwaddr=00:00:00:00:00:00, cfg={}, netmask=, addr=, mtu=150},
bond2={slaves=[], hwaddr=00:00:00:00:00:00, cfg={}, netmask=, addr=,
mtu=150}, bond1={slaves=[], hwaddr=00:00:00:00:00:00, cfg={}, netmask=,
addr=, mtu=150}, bond4={slaves=[], hwaddr=00:00:00:00:00:00, cfg={},
netmask=, addr=, mtu=150}, bond3={slaves=[], hwaddr=00:00:00:00:00:00,
cfg={}, netmask=, addr=, mtu=150}},
cpuFlags=fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ss,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,rep_good,nopl,eagerfpu,pni,pclmulqdq,vmx,ssse3,fma,cx16,pcid,sse4_1,sse4_2,x2apic,movbe,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,hypervisor,lahf_lm,abm,tpr_shadow,vnmi,flexpriority,ept,fsgsbase,bmi1,avx2,smep,bmi2,erms,invpcid,xsaveopt,model_Haswell-noTSX,model_Nehalem,model_Conroe,model_Penryn,model_Westmere,model_SandyBridge,
memSize=7976, clusterLevels=[3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 4.0, 4.1]}>
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,315 Message SEND
May  6 15:27:57 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:27:57 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:27:57 vdsm-fake journal: content-length:105
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getHardwareInfo","params":{},"id":"788f5039-0a75-4854-bdf0-ae7b8b59b253"}
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,316
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,316
JsonRpcServer$MessageHandler Request is Host.getHardwareInfo got response
{"jsonrpc":"2.0","result":{"systemFamily":"","systemSerialNumber":"CZJ2320M6N","systemProductName":"ProLiant
DL160
G6","systemManufacturer":"HP","systemUUID":"840d548e-03d3-4425-bb69-fe6b5886929a","systemVersion":""},"id":"788f5039-0a75-4854-bdf0-ae7b8b59b253"}
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,317 Message MESSAGE
May  6 15:27:57 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:27:57 vdsm-fake journal: content-length:261
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"systemFamily":"","systemSerialNumber":"CZJ2320M6N","systemProductName":"ProLiant
DL160
G6","systemManufacturer":"HP","systemUUID":"840d548e-03d3-4425-bb69-fe6b5886929a","systemVersion":""},"id":"788f5039-0a75-4854-bdf0-ae7b8b59b253"}
May  6 15:27:57 vdsm-fake journal: 2017-05-06 12:27:57,317
StompCommonClient Message sent: MESSAGE
May  6 15:27:57 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:27:57 vdsm-fake journal: content-length:261
May  6 15:27:57 vdsm-fake journal:
May  6 15:27:57 vdsm-fake journal: <JsonRpcResponse id:
"788f5039-0a75-4854-bdf0-ae7b8b59b253" result: {systemFamily=,
systemSerialNumber=CZJ2320M6N, systemProductName=ProLiant DL160 G6,
systemManufacturer=HP, systemUUID=840d548e-03d3-4425-bb69-fe6b5886929a,
systemVersion=}>
May  6 15:27:59 vdsm-fake journal: 2017-05-06 12:27:59,018 Message SEND
May  6 15:27:59 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:27:59 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:27:59 vdsm-fake journal: content-length:103
May  6 15:27:59 vdsm-fake journal:
May  6 15:27:59 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"352f24a8-7215-433d-b3cb-e68ab63cdf64"}
May  6 15:27:59 vdsm-fake journal: 2017-05-06 12:27:59,019
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:27:59 vdsm-fake journal: 2017-05-06 12:27:59,021
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"352f24a8-7215-433d-b3cb-e68ab63cdf64"}
May  6 15:27:59 vdsm-fake journal: 2017-05-06 12:27:59,021 Message MESSAGE
May  6 15:27:59 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:27:59 vdsm-fake journal: content-length:73
May  6 15:27:59 vdsm-fake journal:
May  6 15:27:59 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"352f24a8-7215-433d-b3cb-e68ab63cdf64"}
May  6 15:27:59 vdsm-fake journal: 2017-05-06 12:27:59,022
StompCommonClient Message sent: MESSAGE
May  6 15:27:59 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:27:59 vdsm-fake journal: content-length:73
May  6 15:27:59 vdsm-fake journal:
May  6 15:27:59 vdsm-fake journal: <JsonRpcResponse id:
"352f24a8-7215-433d-b3cb-e68ab63cdf64" result: []>
May  6 15:28:12 vdsm-fake journal: 2017-05-06 12:28:12,451 Message SEND
May  6 15:28:12 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:28:12 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:28:12 vdsm-fake journal: content-length:98
May  6 15:28:12 vdsm-fake journal:
May  6 15:28:12 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getStats","params":{},"id":"5a9900a6-c5a6-4b73-8624-6390b3cb2154"}
May  6 15:28:12 vdsm-fake journal: 2017-05-06 12:28:12,452
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:28:12 vdsm-fake journal: 2017-05-06 12:28:12,555
JsonRpcServer$MessageHandler Request is Host.getStats got response
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:28:12
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"11","name":"bond0","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"10","name":"bond1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"12","name":"bond4","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"12","name":"em1","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"12","name":"bond3","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"10","name":"em2","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"13","cpuUser":"10","memCommitted":0,"cpuSys":"18","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"90","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"8"},"1":{"memFree":0,"memPercent":"8"}},"txRate":"","statsAge":"0.43","memUsed":"8","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073692"},"id":"5a9900a6-c5a6-4b73-8624-6390b3cb2154"}
May  6 15:28:12 vdsm-fake journal: 2017-05-06 12:28:12,556 Message MESSAGE
May  6 15:28:12 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:28:12 vdsm-fake journal: content-length:2040
May  6 15:28:12 vdsm-fake journal:
May  6 15:28:12 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:28:12
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"11","name":"bond0","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"10","name":"bond1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"12","name":"bond4","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"12","name":"em1","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"12","name":"bond3","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"10","name":"em2","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"13","cpuUser":"10","memCommitted":0,"cpuSys":"18","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"90","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"8"},"1":{"memFree":0,"memPercent":"8"}},"txRate":"","statsAge":"0.43","memUsed":"8","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073692"},"id":"5a9900a6-c5a6-4b73-8624-6390b3cb2154"}
May  6 15:28:12 vdsm-fake journal: 2017-05-06 12:28:12,556
StompCommonClient Message sent: MESSAGE
May  6 15:28:12 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:28:12 vdsm-fake journal: content-length:2040
May  6 15:28:12 vdsm-fake journal:
May  6 15:28:12 vdsm-fake journal: <JsonRpcResponse id:
"5a9900a6-c5a6-4b73-8624-6390b3cb2154" result:
{dateTime=2017-05-06T12:28:12 GMT,
generationID=f554221c-bbe4-4ad1-8cdf-11eff9359359, thpState=always,
cpuSysVdsmd=0.25, anonHugePages=662, rxRate=0.00, txDropped=0,
network={bond0={txErrors=0, txRate=11, name=bond0, state=up, txDropped=0,
rxRate=11, rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9},
bond2={txErrors=0, txRate=10, name=bond2, state=up, txDropped=0, rxRate=13,
rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond1={txErrors=0,
txRate=10, name=bond1, state=up, txDropped=0, rxRate=12, rxErrors=0,
rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond4={txErrors=0, txRate=12,
name=bond4, state=up, txDropped=0, rxRate=13, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em1={txErrors=0, txRate=12, name=em1, state=up,
txDropped=0, rxRate=11, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, bond3={txErrors=0, txRate=12, name=bond3,
state=up, txDropped=0, rxRate=14, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em2={txErrors=0, txRate=10, name=em2, state=up,
txDropped=0, rxRate=11, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}}, memShared=0, cpuLoad=13, cpuUser=10,
memCommitted=0, cpuSys=18, vmCount=0, rxDropped=14965, ksmState=false,
ksmPages=100, swapFree=20031, memAvailable=6435, momStatus=active,
netConfigDirty=False, memFree=0, diskStats={/var/log/core={free=44231},
/var/log={free=44231}, /var/run/vdsm/={free=3978}, /tmp={free=44231}},
storageDomains={}, vmMigrating=0, cpuIdle=90, swapTotal=20031,
cpuUserVdsmd=0.50, numaNodeMemFree={0={memFree=0, memPercent=8},
1={memFree=0, memPercent=8}}, txRate=, statsAge=0.43, memUsed=8,
vmActive=0, ksmCpu=0, elapsedTime=1494073692}>
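
For anyone who wants to poke at vdsm-fake by hand: every exchange above is
just a STOMP SEND frame carrying a JSON-RPC 2.0 body, answered by a MESSAGE
frame. Below is a minimal sketch in Python, with heavy caveats: the host,
port, unauthenticated CONNECT and the subscription destination are my
assumptions, not something this log states.

# Hedged sketch: one Host.getStats round trip over STOMP.
# HOST/PORT and the CONNECT/SUBSCRIBE details are assumptions.
import json
import socket
import uuid

HOST, PORT = "localhost", 54321  # assumed vdsm-fake STOMP listener

def frame(command, headers, body=b""):
    # STOMP frame = command line, header lines, blank line, body, NUL byte
    head = "".join("%s:%s\n" % (k, v) for k, v in headers.items())
    return command.encode() + b"\n" + head.encode() + b"\n" + body + b"\x00"

def read_frame(sock):
    # Read up to the NUL terminator and return one decoded frame
    buf = b""
    while b"\x00" not in buf:
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf.split(b"\x00", 1)[0].decode()

request = json.dumps({
    "jsonrpc": "2.0",
    "method": "Host.getStats",
    "params": {},
    "id": str(uuid.uuid4()),
}).encode()

with socket.create_connection((HOST, PORT)) as sock:
    sock.sendall(frame("CONNECT", {"accept-version": "1.2", "host": "/"}))
    print(read_frame(sock))  # expect CONNECTED
    sock.sendall(frame("SUBSCRIBE", {"id": "0",
                                     "destination": "jms.topic.vdsm_responses",
                                     "ack": "auto"}))
    sock.sendall(frame("SEND", {"destination": "jms.topic.vdsm_requests",
                                "reply-to": "jms.topic.vdsm_responses",
                                "content-length": len(request)}, request))
    print(read_frame(sock))  # expect MESSAGE with the JSON-RPC result

Note that the fake logs its replies under destination:jms.queue.reponses
rather than under the reply-to topic, so the destination you actually need
to SUBSCRIBE to may differ; treat the values above as a starting point.
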
May  6 15:28:14 vdsm-fake journal: 2017-05-06 12:28:14,031 Message SEND
May  6 15:28:14 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:28:14 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:28:14 vdsm-fake journal: content-length:103
May  6 15:28:14 vdsm-fake journal:
May  6 15:28:14 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"7cb1262e-c06d-4126-a80a-0f42ade17f1d"}
May  6 15:28:14 vdsm-fake journal: 2017-05-06 12:28:14,032
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:28:14 vdsm-fake journal: 2017-05-06 12:28:14,034
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"7cb1262e-c06d-4126-a80a-0f42ade17f1d"}
May  6 15:28:14 vdsm-fake journal: 2017-05-06 12:28:14,035 Message MESSAGE
May  6 15:28:14 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:28:14 vdsm-fake journal: content-length:73
May  6 15:28:14 vdsm-fake journal:
May  6 15:28:14 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"7cb1262e-c06d-4126-a80a-0f42ade17f1d"}
May  6 15:28:14 vdsm-fake journal: 2017-05-06 12:28:14,036
StompCommonClient Message sent: MESSAGE
May  6 15:28:14 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:28:14 vdsm-fake journal: content-length:73
May  6 15:28:14 vdsm-fake journal:
May  6 15:28:14 vdsm-fake journal: <JsonRpcResponse id: "7cb1262e-c06d-4126-a80a-0f42ade17f1d" result: []>
May  6 15:28:27 vdsm-fake journal: 2017-05-06 12:28:27,641 Message SEND
May  6 15:28:27 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:28:27 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:28:27 vdsm-fake journal: content-length:98
May  6 15:28:27 vdsm-fake journal:
May  6 15:28:27 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getStats","params":{},"id":"7579860d-76d0-4686-82b5-a7d4b424365f"}
May  6 15:28:27 vdsm-fake journal: 2017-05-06 12:28:27,642
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:28:27 vdsm-fake journal: 2017-05-06 12:28:27,744
JsonRpcServer$MessageHandler Request is Host.getStats got response
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:28:27
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"12","name":"bond0","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"11","name":"bond1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"11","name":"bond4","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"11","name":"em1","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"11","name":"bond3","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"14","name":"em2","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"10","cpuUser":"19","memCommitted":0,"cpuSys":"13","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"81","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"9"},"1":{"memFree":0,"memPercent":"9"}},"txRate":"","statsAge":"0.43","memUsed":"9","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073707"},"id":"7579860d-76d0-4686-82b5-a7d4b424365f"}
May  6 15:28:27 vdsm-fake journal: 2017-05-06 12:28:27,745 Message MESSAGE
May  6 15:28:27 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:28:27 vdsm-fake journal: content-length:2040
May  6 15:28:27 vdsm-fake journal:
May  6 15:28:27 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:28:27
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"12","name":"bond0","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"11","name":"bond1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"11","name":"bond4","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"11","name":"em1","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"11","name":"bond3","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"14","name":"em2","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"10","cpuUser":"19","memCommitted":0,"cpuSys":"13","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"81","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"9"},"1":{"memFree":0,"memPercent":"9"}},"txRate":"","statsAge":"0.43","memUsed":"9","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073707"},"id":"7579860d-76d0-4686-82b5-a7d4b424365f"}
May  6 15:28:27 vdsm-fake journal: 2017-05-06 12:28:27,746
StompCommonClient Message sent: MESSAGE
May  6 15:28:27 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:28:27 vdsm-fake journal: content-length:2040
May  6 15:28:27 vdsm-fake journal:
May  6 15:28:27 vdsm-fake journal: <JsonRpcResponse id:
"7579860d-76d0-4686-82b5-a7d4b424365f" result:
{dateTime=2017-05-06T12:28:27 GMT,
generationID=f554221c-bbe4-4ad1-8cdf-11eff9359359, thpState=always,
cpuSysVdsmd=0.25, anonHugePages=662, rxRate=0.00, txDropped=0,
network={bond0={txErrors=0, txRate=12, name=bond0, state=up, txDropped=0,
rxRate=10, rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9},
bond2={txErrors=0, txRate=10, name=bond2, state=up, txDropped=0, rxRate=11,
rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond1={txErrors=0,
txRate=11, name=bond1, state=up, txDropped=0, rxRate=12, rxErrors=0,
rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond4={txErrors=0, txRate=11,
name=bond4, state=up, txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em1={txErrors=0, txRate=11, name=em1, state=up,
txDropped=0, rxRate=14, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, bond3={txErrors=0, txRate=11, name=bond3,
state=up, txDropped=0, rxRate=13, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em2={txErrors=0, txRate=14, name=em2, state=up,
txDropped=0, rxRate=10, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}}, memShared=0, cpuLoad=10, cpuUser=19,
memCommitted=0, cpuSys=13, vmCount=0, rxDropped=14965, ksmState=false,
ksmPages=100, swapFree=20031, memAvailable=6435, momStatus=active,
netConfigDirty=False, memFree=0, diskStats={/var/log/core={free=44231},
/var/log={free=44231}, /var/run/vdsm/={free=3978}, /tmp={free=44231}},
storageDomains={}, vmMigrating=0, cpuIdle=81, swapTotal=20031,
cpuUserVdsmd=0.50, numaNodeMemFree={0={memFree=0, memPercent=9},
1={memFree=0, memPercent=9}}, txRate=, statsAge=0.43, memUsed=9,
vmActive=0, ksmCpu=0, elapsedTime=1494073707}>
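
The Host.getStats payload repeated in every cycle is a flat map of mostly
stringly-typed numbers plus a per-NIC "network" map. If you only need a few
values out of one of those single-line JSON payloads, a small parser is
enough; "line" below stands for any one of the JSON response lines above.

import json

def summarize(line):
    # Pull a few fields out of one Host.getStats JSON-RPC response line.
    result = json.loads(line)["result"]
    return {
        "cpuIdle": float(result["cpuIdle"]),      # sent as a string
        "memAvailable": result["memAvailable"],   # units not stated in the log
        "rxRate": {name: float(nic["rxRate"])     # per-interface, also strings
                   for name, nic in result["network"].items()},
    }

Applied to the 12:28:27 payload above this yields cpuIdle=81.0,
memAvailable=6435 and rxRate values between 10 and 14 for each bond/em
interface.
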
May  6 15:28:29 vdsm-fake journal: 2017-05-06 12:28:29,047 Message SEND
May  6 15:28:29 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:28:29 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:28:29 vdsm-fake journal: content-length:103
May  6 15:28:29 vdsm-fake journal:
May  6 15:28:29 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"daa452fe-bc58-4a48-84a6-315d43bb8e28"}
May  6 15:28:29 vdsm-fake journal: 2017-05-06 12:28:29,048
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:28:29 vdsm-fake journal: 2017-05-06 12:28:29,050
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"daa452fe-bc58-4a48-84a6-315d43bb8e28"}
May  6 15:28:29 vdsm-fake journal: 2017-05-06 12:28:29,051 Message MESSAGE
May  6 15:28:29 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:28:29 vdsm-fake journal: content-length:73
May  6 15:28:29 vdsm-fake journal:
May  6 15:28:29 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"daa452fe-bc58-4a48-84a6-315d43bb8e28"}
May  6 15:28:29 vdsm-fake journal: 2017-05-06 12:28:29,052
StompCommonClient Message sent: MESSAGE
May  6 15:28:29 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:28:29 vdsm-fake journal: content-length:73
May  6 15:28:29 vdsm-fake journal:
May  6 15:28:29 vdsm-fake journal: <JsonRpcResponse id: "daa452fe-bc58-4a48-84a6-315d43bb8e28" result: []>
May  6 15:28:42 vdsm-fake journal: 2017-05-06 12:28:42,853 Message SEND
May  6 15:28:42 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:28:42 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:28:42 vdsm-fake journal: content-length:98
May  6 15:28:42 vdsm-fake journal:
May  6 15:28:42 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getStats","params":{},"id":"3a84b849-011a-4da2-b11f-6e96881a6dd2"}
May  6 15:28:42 vdsm-fake journal: 2017-05-06 12:28:42,855
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:28:42 vdsm-fake journal: 2017-05-06 12:28:42,958
JsonRpcServer$MessageHandler Request is Host.getStats got response
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:28:42
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"14","name":"bond0","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"12","name":"bond1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"11","name":"bond4","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"13","name":"em1","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"13","name":"bond3","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"13","name":"em2","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"14","cpuUser":"19","memCommitted":0,"cpuSys":"14","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"81","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"7"},"1":{"memFree":0,"memPercent":"7"}},"txRate":"","statsAge":"0.43","memUsed":"7","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073722"},"id":"3a84b849-011a-4da2-b11f-6e96881a6dd2"}
May  6 15:28:42 vdsm-fake journal: 2017-05-06 12:28:42,959 Message MESSAGE
May  6 15:28:42 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:28:42 vdsm-fake journal: content-length:2040
May  6 15:28:42 vdsm-fake journal:
May  6 15:28:42 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:28:42
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"14","name":"bond0","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"12","name":"bond1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"11","name":"bond4","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"13","name":"em1","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"13","name":"bond3","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"13","name":"em2","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"14","cpuUser":"19","memCommitted":0,"cpuSys":"14","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"81","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"7"},"1":{"memFree":0,"memPercent":"7"}},"txRate":"","statsAge":"0.43","memUsed":"7","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073722"},"id":"3a84b849-011a-4da2-b11f-6e96881a6dd2"}
May  6 15:28:42 vdsm-fake journal: 2017-05-06 12:28:42,960
StompCommonClient Message sent: MESSAGE
May  6 15:28:42 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:28:42 vdsm-fake journal: content-length:2040
May  6 15:28:42 vdsm-fake journal:
May  6 15:28:42 vdsm-fake journal: <JsonRpcResponse id:
"3a84b849-011a-4da2-b11f-6e96881a6dd2" result:
{dateTime=2017-05-06T12:28:42 GMT,
generationID=f554221c-bbe4-4ad1-8cdf-11eff9359359, thpState=always,
cpuSysVdsmd=0.25, anonHugePages=662, rxRate=0.00, txDropped=0,
network={bond0={txErrors=0, txRate=14, name=bond0, state=up, txDropped=0,
rxRate=10, rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9},
bond2={txErrors=0, txRate=10, name=bond2, state=up, txDropped=0, rxRate=11,
rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond1={txErrors=0,
txRate=12, name=bond1, state=up, txDropped=0, rxRate=12, rxErrors=0,
rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond4={txErrors=0, txRate=11,
name=bond4, state=up, txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em1={txErrors=0, txRate=13, name=em1, state=up,
txDropped=0, rxRate=13, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, bond3={txErrors=0, txRate=13, name=bond3,
state=up, txDropped=0, rxRate=11, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em2={txErrors=0, txRate=13, name=em2, state=up,
txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}}, memShared=0, cpuLoad=14, cpuUser=19,
memCommitted=0, cpuSys=14, vmCount=0, rxDropped=14965, ksmState=false,
ksmPages=100, swapFree=20031, memAvailable=6435, momStatus=active,
netConfigDirty=False, memFree=0, diskStats={/var/log/core={free=44231},
/var/log={free=44231}, /var/run/vdsm/={free=3978}, /tmp={free=44231}},
storageDomains={}, vmMigrating=0, cpuIdle=81, swapTotal=20031,
cpuUserVdsmd=0.50, numaNodeMemFree={0={memFree=0, memPercent=7},
1={memFree=0, memPercent=7}}, txRate=, statsAge=0.43, memUsed=7,
vmActive=0, ksmCpu=0, elapsedTime=1494073722}>
May  6 15:28:44 vdsm-fake journal: 2017-05-06 12:28:44,084 Message SEND
May  6 15:28:44 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:28:44 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:28:44 vdsm-fake journal: content-length:103
May  6 15:28:44 vdsm-fake journal:
May  6 15:28:44 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"7c3832a8-b663-4a38-8596-79cf70a87358"}
May  6 15:28:44 vdsm-fake journal: 2017-05-06 12:28:44,085
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:28:44 vdsm-fake journal: 2017-05-06 12:28:44,087
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"7c3832a8-b663-4a38-8596-79cf70a87358"}
May  6 15:28:44 vdsm-fake journal: 2017-05-06 12:28:44,089 Message MESSAGE
May  6 15:28:44 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:28:44 vdsm-fake journal: content-length:73
May  6 15:28:44 vdsm-fake journal:
May  6 15:28:44 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"7c3832a8-b663-4a38-8596-79cf70a87358"}
May  6 15:28:44 vdsm-fake journal: 2017-05-06 12:28:44,090
StompCommonClient Message sent: MESSAGE
May  6 15:28:44 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:28:44 vdsm-fake journal: content-length:73
May  6 15:28:44 vdsm-fake journal:
May  6 15:28:44 vdsm-fake journal: <JsonRpcResponse id: "7c3832a8-b663-4a38-8596-79cf70a87358" result: []>
May  6 15:28:58 vdsm-fake journal: 2017-05-06 12:28:58,052 Message SEND
May  6 15:28:58 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:28:58 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:28:58 vdsm-fake journal: content-length:98
May  6 15:28:58 vdsm-fake journal:
May  6 15:28:58 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getStats","params":{},"id":"c858ea49-eb42-47cd-8edc-6fc21f45f8e7"}
May  6 15:28:58 vdsm-fake journal: 2017-05-06 12:28:58,053
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:28:58 vdsm-fake journal: 2017-05-06 12:28:58,155
JsonRpcServer$MessageHandler Request is Host.getStats got response
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:28:58
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"13","name":"bond0","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"13","name":"bond2","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"10","name":"bond1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"11","name":"bond4","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"13","name":"em1","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"10","name":"bond3","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"13","name":"em2","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"16","cpuUser":"13","memCommitted":0,"cpuSys":"16","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"87","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"5"},"1":{"memFree":0,"memPercent":"5"}},"txRate":"","statsAge":"0.43","memUsed":"5","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073738"},"id":"c858ea49-eb42-47cd-8edc-6fc21f45f8e7"}
May  6 15:28:58 vdsm-fake journal: 2017-05-06 12:28:58,155 Message MESSAGE
May  6 15:28:58 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:28:58 vdsm-fake journal: content-length:2040
May  6 15:28:58 vdsm-fake journal:
May  6 15:28:58 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:28:58
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"13","name":"bond0","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"13","name":"bond2","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"10","name":"bond1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"11","name":"bond4","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"13","name":"em1","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"10","name":"bond3","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"13","name":"em2","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"16","cpuUser":"13","memCommitted":0,"cpuSys":"16","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"87","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"5"},"1":{"memFree":0,"memPercent":"5"}},"txRate":"","statsAge":"0.43","memUsed":"5","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073738"},"id":"c858ea49-eb42-47cd-8edc-6fc21f45f8e7"}
May  6 15:28:58 vdsm-fake journal: 2017-05-06 12:28:58,156
StompCommonClient Message sent: MESSAGE
May  6 15:28:58 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:28:58 vdsm-fake journal: content-length:2040
May  6 15:28:58 vdsm-fake journal:
May  6 15:28:58 vdsm-fake journal: <JsonRpcResponse id:
"c858ea49-eb42-47cd-8edc-6fc21f45f8e7" result:
{dateTime=2017-05-06T12:28:58 GMT,
generationID=f554221c-bbe4-4ad1-8cdf-11eff9359359, thpState=always,
cpuSysVdsmd=0.25, anonHugePages=662, rxRate=0.00, txDropped=0,
network={bond0={txErrors=0, txRate=13, name=bond0, state=up, txDropped=0,
rxRate=14, rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9},
bond2={txErrors=0, txRate=13, name=bond2, state=up, txDropped=0, rxRate=12,
rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond1={txErrors=0,
txRate=10, name=bond1, state=up, txDropped=0, rxRate=12, rxErrors=0,
rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond4={txErrors=0, txRate=11,
name=bond4, state=up, txDropped=0, rxRate=11, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em1={txErrors=0, txRate=13, name=em1, state=up,
txDropped=0, rxRate=10, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, bond3={txErrors=0, txRate=10, name=bond3,
state=up, txDropped=0, rxRate=10, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em2={txErrors=0, txRate=13, name=em2, state=up,
txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}}, memShared=0, cpuLoad=16, cpuUser=13,
memCommitted=0, cpuSys=16, vmCount=0, rxDropped=14965, ksmState=false,
ksmPages=100, swapFree=20031, memAvailable=6435, momStatus=active,
netConfigDirty=False, memFree=0, diskStats={/var/log/core={free=44231},
/var/log={free=44231}, /var/run/vdsm/={free=3978}, /tmp={free=44231}},
storageDomains={}, vmMigrating=0, cpuIdle=87, swapTotal=20031,
cpuUserVdsmd=0.50, numaNodeMemFree={0={memFree=0, memPercent=5},
1={memFree=0, memPercent=5}}, txRate=, statsAge=0.43, memUsed=5,
vmActive=0, ksmCpu=0, elapsedTime=1494073738}>
May  6 15:28:59 vdsm-fake journal: 2017-05-06 12:28:59,111 Message SEND
May  6 15:28:59 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:28:59 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:28:59 vdsm-fake journal: content-length:103
May  6 15:28:59 vdsm-fake journal:
May  6 15:28:59 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"e3c9144f-05b6-42bf-ba23-cf18a0bb0f0d"}
May  6 15:28:59 vdsm-fake journal: 2017-05-06 12:28:59,113
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:28:59 vdsm-fake journal: 2017-05-06 12:28:59,115
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"e3c9144f-05b6-42bf-ba23-cf18a0bb0f0d"}
May  6 15:28:59 vdsm-fake journal: 2017-05-06 12:28:59,116 Message MESSAGE
May  6 15:28:59 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:28:59 vdsm-fake journal: content-length:73
May  6 15:28:59 vdsm-fake journal:
May  6 15:28:59 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"e3c9144f-05b6-42bf-ba23-cf18a0bb0f0d"}
May  6 15:28:59 vdsm-fake journal: 2017-05-06 12:28:59,117
StompCommonClient Message sent: MESSAGE
May  6 15:28:59 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:28:59 vdsm-fake journal: content-length:73
May  6 15:28:59 vdsm-fake journal:
May  6 15:28:59 vdsm-fake journal: <JsonRpcResponse id: "e3c9144f-05b6-42bf-ba23-cf18a0bb0f0d" result: []>
May  6 15:29:13 vdsm-fake journal: 2017-05-06 12:29:13,223 Message SEND
May  6 15:29:13 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:29:13 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:29:13 vdsm-fake journal: content-length:98
May  6 15:29:13 vdsm-fake journal:
May  6 15:29:13 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getStats","params":{},"id":"844a7ed4-d2a9-4ad1-904a-4b8f8e6cf491"}
May  6 15:29:13 vdsm-fake journal: 2017-05-06 12:29:13,224
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:29:13 vdsm-fake journal: 2017-05-06 12:29:13,328
JsonRpcServer$MessageHandler Request is Host.getStats got response
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:29:13
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"11","name":"bond0","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"13","name":"bond1","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"14","name":"bond4","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"10","name":"em1","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"14","name":"bond3","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"14","name":"em2","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"19","cpuUser":"19","memCommitted":0,"cpuSys":"13","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"81","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"9"},"1":{"memFree":0,"memPercent":"9"}},"txRate":"","statsAge":"0.43","memUsed":"9","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073753"},"id":"844a7ed4-d2a9-4ad1-904a-4b8f8e6cf491"}
May  6 15:29:13 vdsm-fake journal: 2017-05-06 12:29:13,329 Message MESSAGE
May  6 15:29:13 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:29:13 vdsm-fake journal: content-length:2040
May  6 15:29:13 vdsm-fake journal:
May  6 15:29:13 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:29:13
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"11","name":"bond0","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"13","name":"bond1","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"14","name":"bond4","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"10","name":"em1","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"14","name":"bond3","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"14","name":"em2","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"19","cpuUser":"19","memCommitted":0,"cpuSys":"13","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"81","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"9"},"1":{"memFree":0,"memPercent":"9"}},"txRate":"","statsAge":"0.43","memUsed":"9","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073753"},"id":"844a7ed4-d2a9-4ad1-904a-4b8f8e6cf491"}
May  6 15:29:13 vdsm-fake journal: 2017-05-06 12:29:13,331
StompCommonClient Message sent: MESSAGE
May  6 15:29:13 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:29:13 vdsm-fake journal: content-length:2040
May  6 15:29:13 vdsm-fake journal:
May  6 15:29:13 vdsm-fake journal: <JsonRpcResponse id:
"844a7ed4-d2a9-4ad1-904a-4b8f8e6cf491" result:
{dateTime=2017-05-06T12:29:13 GMT,
generationID=f554221c-bbe4-4ad1-8cdf-11eff9359359, thpState=always,
cpuSysVdsmd=0.25, anonHugePages=662, rxRate=0.00, txDropped=0,
network={bond0={txErrors=0, txRate=11, name=bond0, state=up, txDropped=0,
rxRate=13, rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9},
bond2={txErrors=0, txRate=10, name=bond2, state=up, txDropped=0, rxRate=14,
rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond1={txErrors=0,
txRate=13, name=bond1, state=up, txDropped=0, rxRate=11, rxErrors=0,
rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond4={txErrors=0, txRate=14,
name=bond4, state=up, txDropped=0, rxRate=11, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em1={txErrors=0, txRate=10, name=em1, state=up,
txDropped=0, rxRate=13, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, bond3={txErrors=0, txRate=14, name=bond3,
state=up, txDropped=0, rxRate=14, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em2={txErrors=0, txRate=14, name=em2, state=up,
txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}}, memShared=0, cpuLoad=19, cpuUser=19,
memCommitted=0, cpuSys=13, vmCount=0, rxDropped=14965, ksmState=false,
ksmPages=100, swapFree=20031, memAvailable=6435, momStatus=active,
netConfigDirty=False, memFree=0, diskStats={/var/log/core={free=44231},
/var/log={free=44231}, /var/run/vdsm/={free=3978}, /tmp={free=44231}},
storageDomains={}, vmMigrating=0, cpuIdle=81, swapTotal=20031,
cpuUserVdsmd=0.50, numaNodeMemFree={0={memFree=0, memPercent=9},
1={memFree=0, memPercent=9}}, txRate=, statsAge=0.43, memUsed=9,
vmActive=0, ksmCpu=0, elapsedTime=1494073753}>
May  6 15:29:14 vdsm-fake journal: 2017-05-06 12:29:14,127 Message SEND
May  6 15:29:14 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:29:14 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:29:14 vdsm-fake journal: content-length:103
May  6 15:29:14 vdsm-fake journal:
May  6 15:29:14 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"d29e6200-930f-4982-b1dc-900944564c7c"}
May  6 15:29:14 vdsm-fake journal: 2017-05-06 12:29:14,129
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:29:14 vdsm-fake journal: 2017-05-06 12:29:14,130
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"d29e6200-930f-4982-b1dc-900944564c7c"}
May  6 15:29:14 vdsm-fake journal: 2017-05-06 12:29:14,131 Message MESSAGE
May  6 15:29:14 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:29:14 vdsm-fake journal: content-length:73
May  6 15:29:14 vdsm-fake journal:
May  6 15:29:14 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"d29e6200-930f-4982-b1dc-900944564c7c"}
May  6 15:29:14 vdsm-fake journal: 2017-05-06 12:29:14,132
StompCommonClient Message sent: MESSAGE
May  6 15:29:14 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:29:14 vdsm-fake journal: content-length:73
May  6 15:29:14 vdsm-fake journal:
May  6 15:29:14 vdsm-fake journal: <JsonRpcResponse id: "d29e6200-930f-4982-b1dc-900944564c7c" result: []>
May  6 15:29:28 vdsm-fake journal: 2017-05-06 12:29:28,461 Message SEND
May  6 15:29:28 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:29:28 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:29:28 vdsm-fake journal: content-length:98
May  6 15:29:28 vdsm-fake journal:
May  6 15:29:28 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getStats","params":{},"id":"ec7a1c2a-ce1d-4866-9143-6a1511d0bd28"}
May  6 15:29:28 vdsm-fake journal: 2017-05-06 12:29:28,462
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:29:28 vdsm-fake journal: 2017-05-06 12:29:28,565
JsonRpcServer$MessageHandler Request is Host.getStats got response
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:29:28
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"11","name":"bond0","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"11","name":"bond1","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"11","name":"bond4","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"13","name":"em1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"14","name":"bond3","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"14","name":"em2","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"14","cpuUser":"19","memCommitted":0,"cpuSys":"12","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"81","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"5"},"1":{"memFree":0,"memPercent":"5"}},"txRate":"","statsAge":"0.43","memUsed":"5","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073768"},"id":"ec7a1c2a-ce1d-4866-9143-6a1511d0bd28"}
May  6 15:29:28 vdsm-fake journal: 2017-05-06 12:29:28,566 Message MESSAGE
May  6 15:29:28 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:29:28 vdsm-fake journal: content-length:2040
May  6 15:29:28 vdsm-fake journal:
May  6 15:29:28 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:29:28
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"11","name":"bond0","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"11","name":"bond1","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"11","name":"bond4","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"13","name":"em1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"14","name":"bond3","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"14","name":"em2","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"14","cpuUser":"19","memCommitted":0,"cpuSys":"12","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"81","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"5"},"1":{"memFree":0,"memPercent":"5"}},"txRate":"","statsAge":"0.43","memUsed":"5","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073768"},"id":"ec7a1c2a-ce1d-4866-9143-6a1511d0bd28"}
May  6 15:29:28 vdsm-fake journal: 2017-05-06 12:29:28,567
StompCommonClient Message sent: MESSAGE
May  6 15:29:28 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:29:28 vdsm-fake journal: content-length:2040
May  6 15:29:28 vdsm-fake journal:
May  6 15:29:28 vdsm-fake journal: <JsonRpcResponse id:
"ec7a1c2a-ce1d-4866-9143-6a1511d0bd28" result:
{dateTime=2017-05-06T12:29:28 GMT,
generationID=f554221c-bbe4-4ad1-8cdf-11eff9359359, thpState=always,
cpuSysVdsmd=0.25, anonHugePages=662, rxRate=0.00, txDropped=0,
network={bond0={txErrors=0, txRate=11, name=bond0, state=up, txDropped=0,
rxRate=13, rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9},
bond2={txErrors=0, txRate=10, name=bond2, state=up, txDropped=0, rxRate=10,
rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond1={txErrors=0,
txRate=11, name=bond1, state=up, txDropped=0, rxRate=11, rxErrors=0,
rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond4={txErrors=0, txRate=11,
name=bond4, state=up, txDropped=0, rxRate=14, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em1={txErrors=0, txRate=13, name=em1, state=up,
txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, bond3={txErrors=0, txRate=14, name=bond3,
state=up, txDropped=0, rxRate=10, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em2={txErrors=0, txRate=14, name=em2, state=up,
txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}}, memShared=0, cpuLoad=14, cpuUser=19,
memCommitted=0, cpuSys=12, vmCount=0, rxDropped=14965, ksmState=false,
ksmPages=100, swapFree=20031, memAvailable=6435, momStatus=active,
netConfigDirty=False, memFree=0, diskStats={/var/log/core={free=44231},
/var/log={free=44231}, /var/run/vdsm/={free=3978}, /tmp={free=44231}},
storageDomains={}, vmMigrating=0, cpuIdle=81, swapTotal=20031,
cpuUserVdsmd=0.50, numaNodeMemFree={0={memFree=0, memPercent=5},
1={memFree=0, memPercent=5}}, txRate=, statsAge=0.43, memUsed=5,
vmActive=0, ksmCpu=0, elapsedTime=1494073768}>
May  6 15:29:29 vdsm-fake journal: 2017-05-06 12:29:29,147 Message SEND
May  6 15:29:29 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:29:29 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:29:29 vdsm-fake journal: content-length:103
May  6 15:29:29 vdsm-fake journal:
May  6 15:29:29 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"9945e318-d3ca-4459-b9c1-e00f308ec44e"}
May  6 15:29:29 vdsm-fake journal: 2017-05-06 12:29:29,150
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:29:29 vdsm-fake journal: 2017-05-06 12:29:29,152
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"9945e318-d3ca-4459-b9c1-e00f308ec44e"}
May  6 15:29:29 vdsm-fake journal: 2017-05-06 12:29:29,152 Message MESSAGE
May  6 15:29:29 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:29:29 vdsm-fake journal: content-length:73
May  6 15:29:29 vdsm-fake journal:
May  6 15:29:29 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"9945e318-d3ca-4459-b9c1-e00f308ec44e"}
May  6 15:29:29 vdsm-fake journal: 2017-05-06 12:29:29,153
StompCommonClient Message sent: MESSAGE
May  6 15:29:29 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:29:29 vdsm-fake journal: content-length:73
May  6 15:29:29 vdsm-fake journal:
May  6 15:29:29 vdsm-fake journal: <JsonRpcResponse id: "9945e318-d3ca-4459-b9c1-e00f308ec44e" result: []>
May  6 15:29:43 vdsm-fake journal: 2017-05-06 12:29:43,665 Message SEND
May  6 15:29:43 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:29:43 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:29:43 vdsm-fake journal: content-length:98
May  6 15:29:43 vdsm-fake journal:
May  6 15:29:43 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getStats","params":{},"id":"c5799cb6-5dba-40cb-8db6-da83c36aa73e"}
May  6 15:29:43 vdsm-fake journal: 2017-05-06 12:29:43,666
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:29:43 vdsm-fake journal: 2017-05-06 12:29:43,769
JsonRpcServer$MessageHandler Request is Host.getStats got response
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:29:43
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"13","name":"bond0","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"12","name":"bond1","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"14","name":"bond4","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"11","name":"em1","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"13","name":"bond3","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"14","name":"em2","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"13","cpuUser":"16","memCommitted":0,"cpuSys":"13","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"84","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"8"},"1":{"memFree":0,"memPercent":"8"}},"txRate":"","statsAge":"0.43","memUsed":"8","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073783"},"id":"c5799cb6-5dba-40cb-8db6-da83c36aa73e"}
May  6 15:29:43 vdsm-fake journal: 2017-05-06 12:29:43,770 Message MESSAGE
May  6 15:29:43 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:29:43 vdsm-fake journal: content-length:2040
May  6 15:29:43 vdsm-fake journal:
May  6 15:29:43 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:29:43
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"13","name":"bond0","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"12","name":"bond1","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"14","name":"bond4","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"11","name":"em1","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"13","name":"bond3","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"14","name":"em2","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"13","cpuUser":"16","memCommitted":0,"cpuSys":"13","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"84","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"8"},"1":{"memFree":0,"memPercent":"8"}},"txRate":"","statsAge":"0.43","memUsed":"8","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073783"},"id":"c5799cb6-5dba-40cb-8db6-da83c36aa73e"}
May  6 15:29:43 vdsm-fake journal: 2017-05-06 12:29:43,771
StompCommonClient Message sent: MESSAGE
May  6 15:29:43 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:29:43 vdsm-fake journal: content-length:2040
May  6 15:29:43 vdsm-fake journal:
May  6 15:29:43 vdsm-fake journal: <JsonRpcResponse id:
"c5799cb6-5dba-40cb-8db6-da83c36aa73e" result:
{dateTime=2017-05-06T12:29:43 GMT,
generationID=f554221c-bbe4-4ad1-8cdf-11eff9359359, thpState=always,
cpuSysVdsmd=0.25, anonHugePages=662, rxRate=0.00, txDropped=0,
network={bond0={txErrors=0, txRate=13, name=bond0, state=up, txDropped=0,
rxRate=12, rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9},
bond2={txErrors=0, txRate=10, name=bond2, state=up, txDropped=0, rxRate=13,
rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond1={txErrors=0,
txRate=12, name=bond1, state=up, txDropped=0, rxRate=10, rxErrors=0,
rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond4={txErrors=0, txRate=14,
name=bond4, state=up, txDropped=0, rxRate=11, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em1={txErrors=0, txRate=11, name=em1, state=up,
txDropped=0, rxRate=10, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, bond3={txErrors=0, txRate=13, name=bond3,
state=up, txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em2={txErrors=0, txRate=14, name=em2, state=up,
txDropped=0, rxRate=13, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}}, memShared=0, cpuLoad=13, cpuUser=16,
memCommitted=0, cpuSys=13, vmCount=0, rxDropped=14965, ksmState=false,
ksmPages=100, swapFree=20031, memAvailable=6435, momStatus=active,
netConfigDirty=False, memFree=0, diskStats={/var/log/core={free=44231},
/var/log={free=44231}, /var/run/vdsm/={free=3978}, /tmp={free=44231}},
storageDomains={}, vmMigrating=0, cpuIdle=84, swapTotal=20031,
cpuUserVdsmd=0.50, numaNodeMemFree={0={memFree=0, memPercent=8},
1={memFree=0, memPercent=8}}, txRate=, statsAge=0.43, memUsed=8,
vmActive=0, ksmCpu=0, elapsedTime=1494073783}>
May  6 15:29:44 vdsm-fake journal: 2017-05-06 12:29:44,165 Message SEND
May  6 15:29:44 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:29:44 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:29:44 vdsm-fake journal: content-length:103
May  6 15:29:44 vdsm-fake journal:
May  6 15:29:44 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"e621a217-45a8-4986-921c-09364dbcfa67"}
May  6 15:29:44 vdsm-fake journal: 2017-05-06 12:29:44,167
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:29:44 vdsm-fake journal: 2017-05-06 12:29:44,172
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"e621a217-45a8-4986-921c-09364dbcfa67"}
May  6 15:29:44 vdsm-fake journal: 2017-05-06 12:29:44,173 Message MESSAGE
May  6 15:29:44 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:29:44 vdsm-fake journal: content-length:73
May  6 15:29:44 vdsm-fake journal:
May  6 15:29:44 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"e621a217-45a8-4986-921c-09364dbcfa67"}
May  6 15:29:44 vdsm-fake journal: 2017-05-06 12:29:44,173
StompCommonClient Message sent: MESSAGE
May  6 15:29:44 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:29:44 vdsm-fake journal: content-length:73
May  6 15:29:44 vdsm-fake journal:
May  6 15:29:44 vdsm-fake journal: <JsonRpcResponse id: "e621a217-45a8-4986-921c-09364dbcfa67" result: []>
May  6 15:29:58 vdsm-fake journal: 2017-05-06 12:29:58,864 Message SEND
May  6 15:29:58 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:29:58 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:29:58 vdsm-fake journal: content-length:98
May  6 15:29:58 vdsm-fake journal:
May  6 15:29:58 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getStats","params":{},"id":"66594912-568d-4a70-bdb8-8ea9d97ea501"}
May  6 15:29:58 vdsm-fake journal: 2017-05-06 12:29:58,865
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:29:58 vdsm-fake journal: 2017-05-06 12:29:58,969
JsonRpcServer$MessageHandler Request is Host.getStats got response
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:29:58
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"13","name":"bond0","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"10","name":"bond1","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"14","name":"bond4","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"12","name":"em1","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"14","name":"bond3","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"14","name":"em2","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"19","cpuUser":"19","memCommitted":0,"cpuSys":"12","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"81","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"8"},"1":{"memFree":0,"memPercent":"8"}},"txRate":"","statsAge":"0.43","memUsed":"8","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073798"},"id":"66594912-568d-4a70-bdb8-8ea9d97ea501"}
May  6 15:29:58 vdsm-fake journal: 2017-05-06 12:29:58,969 Message MESSAGE
May  6 15:29:58 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:29:58 vdsm-fake journal: content-length:2040
May  6 15:29:58 vdsm-fake journal:
May  6 15:29:58 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:29:58
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"13","name":"bond0","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"10","name":"bond1","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"14","name":"bond4","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"12","name":"em1","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"14","name":"bond3","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"14","name":"em2","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"19","cpuUser":"19","memCommitted":0,"cpuSys":"12","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"81","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"8"},"1":{"memFree":0,"memPercent":"8"}},"txRate":"","statsAge":"0.43","memUsed":"8","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073798"},"id":"66594912-568d-4a70-bdb8-8ea9d97ea501"}
May  6 15:29:58 vdsm-fake journal: 2017-05-06 12:29:58,970
StompCommonClient Message sent: MESSAGE
May  6 15:29:58 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:29:58 vdsm-fake journal: content-length:2040
May  6 15:29:58 vdsm-fake journal:
May  6 15:29:58 vdsm-fake journal: <JsonRpcResponse id:
"66594912-568d-4a70-bdb8-8ea9d97ea501" result:
{dateTime=2017-05-06T12:29:58 GMT,
generationID=f554221c-bbe4-4ad1-8cdf-11eff9359359, thpState=always,
cpuSysVdsmd=0.25, anonHugePages=662, rxRate=0.00, txDropped=0,
network={bond0={txErrors=0, txRate=13, name=bond0, state=up, txDropped=0,
rxRate=13, rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9},
bond2={txErrors=0, txRate=10, name=bond2, state=up, txDropped=0, rxRate=12,
rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond1={txErrors=0,
txRate=10, name=bond1, state=up, txDropped=0, rxRate=11, rxErrors=0,
rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond4={txErrors=0, txRate=14,
name=bond4, state=up, txDropped=0, rxRate=14, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em1={txErrors=0, txRate=12, name=em1, state=up,
txDropped=0, rxRate=10, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, bond3={txErrors=0, txRate=14, name=bond3,
state=up, txDropped=0, rxRate=10, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em2={txErrors=0, txRate=14, name=em2, state=up,
txDropped=0, rxRate=13, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}}, memShared=0, cpuLoad=19, cpuUser=19,
memCommitted=0, cpuSys=12, vmCount=0, rxDropped=14965, ksmState=false,
ksmPages=100, swapFree=20031, memAvailable=6435, momStatus=active,
netConfigDirty=False, memFree=0, diskStats={/var/log/core={free=44231},
/var/log={free=44231}, /var/run/vdsm/={free=3978}, /tmp={free=44231}},
storageDomains={}, vmMigrating=0, cpuIdle=81, swapTotal=20031,
cpuUserVdsmd=0.50, numaNodeMemFree={0={memFree=0, memPercent=8},
1={memFree=0, memPercent=8}}, txRate=, statsAge=0.43, memUsed=8,
vmActive=0, ksmCpu=0, elapsedTime=1494073798}>
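The Host.getStats replies above carry the whole host statistics map in a
single JSON body; note that vdsm-fake serializes most counters as strings
while a few (vmCount, swapFree, memShared) stay numeric. A minimal parsing
sketch under that assumption (summarize_stats is a hypothetical helper;
the field names are exactly as logged):

import json

def summarize_stats(reply_text):
    # reply_text is one Host.getStats JSON body as quoted above.
    result = json.loads(reply_text)["result"]
    return {
        "cpuIdle": float(result["cpuIdle"]),   # logged as a string
        "cpuUser": float(result["cpuUser"]),   # logged as a string
        "memUsed": int(result["memUsed"]),     # logged as a string
        "vmCount": result["vmCount"],          # already numeric in the log
        "swapFree": result["swapFree"],        # already numeric in the log
    }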
May  6 15:29:59 vdsm-fake journal: 2017-05-06 12:29:59,182 Message SEND
May  6 15:29:59 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:29:59 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:29:59 vdsm-fake journal: content-length:103
May  6 15:29:59 vdsm-fake journal:
May  6 15:29:59 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"8b5c36be-2816-4d4e-b60f-f1e254174382"}
May  6 15:29:59 vdsm-fake journal: 2017-05-06 12:29:59,183
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:29:59 vdsm-fake journal: 2017-05-06 12:29:59,185
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"8b5c36be-2816-4d4e-b60f-f1e254174382"}
May  6 15:29:59 vdsm-fake journal: 2017-05-06 12:29:59,185 Message MESSAGE
May  6 15:29:59 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:29:59 vdsm-fake journal: content-length:73
May  6 15:29:59 vdsm-fake journal:
May  6 15:29:59 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"8b5c36be-2816-4d4e-b60f-f1e254174382"}
May  6 15:29:59 vdsm-fake journal: 2017-05-06 12:29:59,186
StompCommonClient Message sent: MESSAGE
May  6 15:29:59 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:29:59 vdsm-fake journal: content-length:73
May  6 15:29:59 vdsm-fake journal:
May  6 15:29:59 vdsm-fake journal: <JsonRpcResponse id:
"8b5c36be-2816-4d4e-b60f-f1e254174382" result: []>
May  6 15:30:02 vdsm-fake journal: 2017-05-06 12:30:02,027 Message SEND
May  6 15:30:02 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:30:02 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:30:02 vdsm-fake journal: content-length:105
May  6 15:30:02 vdsm-fake journal:
May  6 15:30:02 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getCapabilities","params":{},"id":"baee3a82-9a1f-4716-9997-4c83964f83ef"}
May  6 15:30:02 vdsm-fake journal: 2017-05-06 12:30:02,028
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:30:02 vdsm-fake journal: 2017-05-06 12:30:02,030
JsonRpcServer$MessageHandler Request is Host.getCapabilities got response
{"jsonrpc":"2.0","result":{"HBAInventory":{"iSCSI":[{"InitiatorName":"iqn.1994-05.com.example:ef52ec17bb0"}],"FC":[]},"vlans":{},"lastClientIface":"ovirtmgmt","cpuSpeed":"1200.000","autoNumaBalancing":"1","cpuModel":"Intel(R)
Xeon(R) CPU E5606 @
2.13GHz","reservedMem":"321","numaNodes":{"0":{"totalMemory":3988,"cpus":[1,3,5,7,9,11,13,15]},"1":{"totalMemory":3988,"cpus":[0,2,4,6,8,10,12,14]}},"selinux":{"mode":"1"},"packages2":{"qemu-kvm":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"libvirt":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"spice-server":{"release":"5.fc17","buildtime":"1336983054","version":"0.10.1"},"qemu-img":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"kernel":{"release":"5.fc17.x86_64","buildtime":"1357699251.0","version":"3.6.11"},"mom":{"release":"1.fc17","buildtime":"1354824066","version":"0.3.0"},"vdsm":{"release":"0.141.gita11e8f2.fc17","buildtime":"1359653302","version":"4.10.3"}},"networks":{"ovirtmgmt":{"iface":"ovirtmgmt","cfg":{"BOOTPROTO":"dhcp","DEVICE":"ovirtmgmt","ONBOOT":"yes","DELAY":"0","TYPE":"Ethernet"},"netmask":"255.255.252.0","bridged":true,"addr":"146.11.117.90","ports":["em1"],"stp":"off","gateway":"10.34.63.254","mtu":"1500","switch":"legacy"}},"uuid":"840d548e-03d3-4425-bb69-fe6b5886929a_80:A9:FE:B3:73:B0:E9","operatingSystem":{"release":"1","name":"Fedora","version":"17"},"management_ip":"","nics":{"em1":{"hwaddr":"53:5F:BD:FB:5F:78","cfg":{"BOOTPROTO":"dhcp","HWADDR":"53:5F:BD:FB:5F:78","DEVICE":"em1","ONBOOT":"yes","BRIDGE":"ovirtmgmt","UUID":"eb19ec8d-1ab7-455e-934e-097a6b198ecf","NETBOOT":"yes","TYPE":"Ethernet","NM_CONTROLLED":"yes","NAME":"Boot
Disk"},"netmask":"","addr":"","speed":1000,"mtu":"1500"},"em2":{"hwaddr":"DB:F2:C8:76:DE:81","cfg":{"BOOTPROTO":"dhcp","HWADDR":"DB:F2:C8:76:DE:81","DEVICE":"em2","ONBOOT":"no","BRIDGE":"ovirtmgmt","UUID":"afd4d997-3e24-4e64-92cc-6306a8427d77","NETBOOT":"yes","TYPE":"Ethernet","NM_CONTROLLED":"yes"},"netmask":"","addr":"","speed":1000,"mtu":"1500"}},"kvmEnabled":"true","lastClient":"10.36.6.76","software_version":"4.10","cpuThreads":"4","hooks":{},"numaNodeDistance":{"0":[10,20],"1":[20,10]},"netConfigDirty":"False","guestOverhead":"65","ISCSIInitiatorName":"iqn.1994-05.com.example:ef52ec17bb0","rngSources":["RANDOM"],"bridges":{"ovirtmgmt":{"netmask":"255.255.252.0","cfg":{"BOOTPROTO":"dhcp","DEVICE":"ovirtmgmt","ONBOOT":"yes","DELAY":"0","TYPE":"Ethernet"},"addr":"146.11.117.90","ports":["em1"],"stp":"off","gateway":"96.127.51.142","mtu":"1500"}},"kdumpStatus":"1","cpuSockets":"1","supportedProtocols":["2.2","2.3"],"emulatedMachines":["pc-0.10","pc-0.11","pc-0.12","pc-0.13","pc-0.14","pc-0.15","pc-1.0","pc-1.0","pc-i440fx-2.1","pseries-rhel7.2.0","pc-i440fx-rhel7.2.0","rhel6.4.0","rhel6.5.0","rhel6.6.0","rhel6.7.0","rhel6.8.0","rhel6.9.0","rhel7.0.0","rhel7.2.0","rhel7.5.0","pc","isapc"],"onlineCpus":[1,3,5,7,9,11,13,15,0,2,4,6,8,10,12,14],"software_revision":"0.141","version_name":"Snow
Man","supportedENGINEs":["3.0","3.1","3.2","3.3","3.4","3.5","3.6","4.0","4.1"],"vmTypes":["kvm"],"cpuCores":"4","bondings":{"bond0":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond2":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond1":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond4":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond3":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"}},"cpuFlags":"fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ss,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,rep_good,nopl,eagerfpu,pni,pclmulqdq,vmx,ssse3,fma,cx16,pcid,sse4_1,sse4_2,x2apic,movbe,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,hypervisor,lahf_lm,abm,tpr_shadow,vnmi,flexpriority,ept,fsgsbase,bmi1,avx2,smep,bmi2,erms,invpcid,xsaveopt,model_Haswell-noTSX,model_Nehalem,model_Conroe,model_Penryn,model_Westmere,model_SandyBridge","memSize":"7976","clusterLevels":["3.0","3.1","3.2","3.3","3.4","3.5","3.6","4.0","4.1"]},"id":"baee3a82-9a1f-4716-9997-4c83964f83ef"}
May  6 15:30:02 vdsm-fake journal: 2017-05-06 12:30:02,031 Message MESSAGE
May  6 15:30:02 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:02 vdsm-fake journal: content-length:4222
May  6 15:30:02 vdsm-fake journal:
May  6 15:30:02 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"HBAInventory":{"iSCSI":[{"InitiatorName":"iqn.1994-05.com.example:ef52ec17bb0"}],"FC":[]},"vlans":{},"lastClientIface":"ovirtmgmt","cpuSpeed":"1200.000","autoNumaBalancing":"1","cpuModel":"Intel(R)
Xeon(R) CPU E5606 @
2.13GHz","reservedMem":"321","numaNodes":{"0":{"totalMemory":3988,"cpus":[1,3,5,7,9,11,13,15]},"1":{"totalMemory":3988,"cpus":[0,2,4,6,8,10,12,14]}},"selinux":{"mode":"1"},"packages2":{"qemu-kvm":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"libvirt":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"spice-server":{"release":"5.fc17","buildtime":"1336983054","version":"0.10.1"},"qemu-img":{"release":"2.fc17","buildtime":"1349642820","version":"1.0.1"},"kernel":{"release":"5.fc17.x86_64","buildtime":"1357699251.0","version":"3.6.11"},"mom":{"release":"1.fc17","buildtime":"1354824066","version":"0.3.0"},"vdsm":{"release":"0.141.gita11e8f2.fc17","buildtime":"1359653302","version":"4.10.3"}},"networks":{"ovirtmgmt":{"iface":"ovirtmgmt","cfg":{"BOOTPROTO":"dhcp","DEVICE":"ovirtmgmt","ONBOOT":"yes","DELAY":"0","TYPE":"Ethernet"},"netmask":"255.255.252.0","bridged":true,"addr":"146.11.117.90","ports":["em1"],"stp":"off","gateway":"10.34.63.254","mtu":"1500","switch":"legacy"}},"uuid":"840d548e-03d3-4425-bb69-fe6b5886929a_80:A9:FE:B3:73:B0:E9","operatingSystem":{"release":"1","name":"Fedora","version":"17"},"management_ip":"","nics":{"em1":{"hwaddr":"53:5F:BD:FB:5F:78","cfg":{"BOOTPROTO":"dhcp","HWADDR":"53:5F:BD:FB:5F:78","DEVICE":"em1","ONBOOT":"yes","BRIDGE":"ovirtmgmt","UUID":"eb19ec8d-1ab7-455e-934e-097a6b198ecf","NETBOOT":"yes","TYPE":"Ethernet","NM_CONTROLLED":"yes","NAME":"Boot
Disk"},"netmask":"","addr":"","speed":1000,"mtu":"1500"},"em2":{"hwaddr":"DB:F2:C8:76:DE:81","cfg":{"BOOTPROTO":"dhcp","HWADDR":"DB:F2:C8:76:DE:81","DEVICE":"em2","ONBOOT":"no","BRIDGE":"ovirtmgmt","UUID":"afd4d997-3e24-4e64-92cc-6306a8427d77","NETBOOT":"yes","TYPE":"Ethernet","NM_CONTROLLED":"yes"},"netmask":"","addr":"","speed":1000,"mtu":"1500"}},"kvmEnabled":"true","lastClient":"10.36.6.76","software_version":"4.10","cpuThreads":"4","hooks":{},"numaNodeDistance":{"0":[10,20],"1":[20,10]},"netConfigDirty":"False","guestOverhead":"65","ISCSIInitiatorName":"iqn.1994-05.com.example:ef52ec17bb0","rngSources":["RANDOM"],"bridges":{"ovirtmgmt":{"netmask":"255.255.252.0","cfg":{"BOOTPROTO":"dhcp","DEVICE":"ovirtmgmt","ONBOOT":"yes","DELAY":"0","TYPE":"Ethernet"},"addr":"146.11.117.90","ports":["em1"],"stp":"off","gateway":"96.127.51.142","mtu":"1500"}},"kdumpStatus":"1","cpuSockets":"1","supportedProtocols":["2.2","2.3"],"emulatedMachines":["pc-0.10","pc-0.11","pc-0.12","pc-0.13","pc-0.14","pc-0.15","pc-1.0","pc-1.0","pc-i440fx-2.1","pseries-rhel7.2.0","pc-i440fx-rhel7.2.0","rhel6.4.0","rhel6.5.0","rhel6.6.0","rhel6.7.0","rhel6.8.0","rhel6.9.0","rhel7.0.0","rhel7.2.0","rhel7.5.0","pc","isapc"],"onlineCpus":[1,3,5,7,9,11,13,15,0,2,4,6,8,10,12,14],"software_revision":"0.141","version_name":"Snow
Man","supportedENGINEs":["3.0","3.1","3.2","3.3","3.4","3.5","3.6","4.0","4.1"],"vmTypes":["kvm"],"cpuCores":"4","bondings":{"bond0":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond2":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond1":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond4":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"},"bond3":{"slaves":[],"hwaddr":"00:00:00:00:00:00","cfg":{},"netmask":"","addr":"","mtu":"150"}},"cpuFlags":"fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ss,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,rep_good,nopl,eagerfpu,pni,pclmulqdq,vmx,ssse3,fma,cx16,pcid,sse4_1,sse4_2,x2apic,movbe,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,hypervisor,lahf_lm,abm,tpr_shadow,vnmi,flexpriority,ept,fsgsbase,bmi1,avx2,smep,bmi2,erms,invpcid,xsaveopt,model_Haswell-noTSX,model_Nehalem,model_Conroe,model_Penryn,model_Westmere,model_SandyBridge","memSize":"7976","clusterLevels":["3.0","3.1","3.2","3.3","3.4","3.5","3.6","4.0","4.1"]},"id":"baee3a82-9a1f-4716-9997-4c83964f83ef"}
May  6 15:30:02 vdsm-fake journal: 2017-05-06 12:30:02,033
StompCommonClient Message sent: MESSAGE
May  6 15:30:02 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:02 vdsm-fake journal: content-length:4222
May  6 15:30:02 vdsm-fake journal:
May  6 15:30:02 vdsm-fake journal: <JsonRpcResponse id:
"baee3a82-9a1f-4716-9997-4c83964f83ef" result:
{HBAInventory={iSCSI=[{InitiatorName=iqn.1994-05.com.example:ef52ec17bb0}],
FC=[]}, vlans={}, lastClientIface=ovirtmgmt, cpuSpeed=1200.000,
autoNumaBalancing=1, cpuModel=Intel(R) Xeon(R) CPU E5606 @ 2.13GHz,
reservedMem=321, numaNodes={0={totalMemory=3988, cpus=[1, 3, 5, 7, 9, 11,
13, 15]}, 1={totalMemory=3988, cpus=[0, 2, 4, 6, 8, 10, 12, 14]}},
selinux={mode=1}, packages2={qemu-kvm={release=2.fc17,
buildtime=1349642820, version=1.0.1}, libvirt={release=2.fc17,
buildtime=1349642820, version=1.0.1}, spice-server={release=5.fc17,
buildtime=1336983054, version=0.10.1}, qemu-img={release=2.fc17,
buildtime=1349642820, version=1.0.1}, kernel={release=5.fc17.x86_64,
buildtime=1357699251.0, version=3.6.11}, mom={release=1.fc17,
buildtime=1354824066, version=0.3.0}, vdsm={release=0.141.gita11e8f2.fc17,
buildtime=1359653302, version=4.10.3}},
networks={ovirtmgmt={iface=ovirtmgmt, cfg={BOOTPROTO=dhcp,
DEVICE=ovirtmgmt, ONBOOT=yes, DELAY=0, TYPE=Ethernet},
netmask=255.255.252.0, bridged=true, addr=146.11.117.90, ports=[em1],
stp=off, gateway=10.34.63.254, mtu=1500, switch=legacy}},
uuid=840d548e-03d3-4425-bb69-fe6b5886929a_80:A9:FE:B3:73:B0:E9,
operatingSystem={release=1, name=Fedora, version=17}, management_ip=,
nics={em1={hwaddr=53:5F:BD:FB:5F:78, cfg={BOOTPROTO=dhcp,
HWADDR=53:5F:BD:FB:5F:78, DEVICE=em1, ONBOOT=yes, BRIDGE=ovirtmgmt,
UUID=eb19ec8d-1ab7-455e-934e-097a6b198ecf, NETBOOT=yes, TYPE=Ethernet,
NM_CONTROLLED=yes, NAME=Boot Disk}, netmask=, addr=, speed=1000, mtu=1500},
em2={hwaddr=DB:F2:C8:76:DE:81, cfg={BOOTPROTO=dhcp,
HWADDR=DB:F2:C8:76:DE:81, DEVICE=em2, ONBOOT=no, BRIDGE=ovirtmgmt,
UUID=afd4d997-3e24-4e64-92cc-6306a8427d77, NETBOOT=yes, TYPE=Ethernet,
NM_CONTROLLED=yes}, netmask=, addr=, speed=1000, mtu=1500}},
kvmEnabled=true, lastClient=10.36.6.76, software_version=4.10,
cpuThreads=4, hooks={}, numaNodeDistance={0=[10, 20], 1=[20, 10]},
netConfigDirty=False, guestOverhead=65,
ISCSIInitiatorName=iqn.1994-05.com.example:ef52ec17bb0,
rngSources=[RANDOM], bridges={ovirtmgmt={netmask=255.255.252.0,
cfg={BOOTPROTO=dhcp, DEVICE=ovirtmgmt, ONBOOT=yes, DELAY=0, TYPE=Ethernet},
addr=146.11.117.90, ports=[em1], stp=off, gateway=96.127.51.142,
mtu=1500}}, kdumpStatus=1, cpuSockets=1, supportedProtocols=[2.2, 2.3],
emulatedMachines=[pc-0.10, pc-0.11, pc-0.12, pc-0.13, pc-0.14, pc-0.15,
pc-1.0, pc-1.0, pc-i440fx-2.1, pseries-rhel7.2.0, pc-i440fx-rhel7.2.0,
rhel6.4.0, rhel6.5.0, rhel6.6.0, rhel6.7.0, rhel6.8.0, rhel6.9.0,
rhel7.0.0, rhel7.2.0, rhel7.5.0, pc, isapc], onlineCpus=[1, 3, 5, 7, 9, 11,
13, 15, 0, 2, 4, 6, 8, 10, 12, 14], software_revision=0.141,
version_name=Snow Man, supportedENGINEs=[3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6,
4.0, 4.1], vmTypes=[kvm], cpuCores=4, bondings={bond0={slaves=[],
hwaddr=00:00:00:00:00:00, cfg={}, netmask=, addr=, mtu=150},
bond2={slaves=[], hwaddr=00:00:00:00:00:00, cfg={}, netmask=, addr=,
mtu=150}, bond1={slaves=[], hwaddr=00:00:00:00:00:00, cfg={}, netmask=,
addr=, mtu=150}, bond4={slaves=[], hwaddr=00:00:00:00:00:00, cfg={},
netmask=, addr=, mtu=150}, bond3={slaves=[], hwaddr=00:00:00:00:00:00,
cfg={}, netmask=, addr=, mtu=150}},
cpuFlags=fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ss,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,rep_good,nopl,eagerfpu,pni,pclmulqdq,vmx,ssse3,fma,cx16,pcid,sse4_1,sse4_2,x2apic,movbe,popcnt,tsc_deadline_timer,aes,xsave,avx,f16c,rdrand,hypervisor,lahf_lm,abm,tpr_shadow,vnmi,flexpriority,ept,fsgsbase,bmi1,avx2,smep,bmi2,erms,invpcid,xsaveopt,model_Haswell-noTSX,model_Nehalem,model_Conroe,model_Penryn,model_Westmere,model_SandyBridge,
memSize=7976, clusterLevels=[3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 4.0, 4.1]}>
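The Host.getCapabilities reply above is the data the engine consults when
checking cluster compatibility. A small sketch pulling those fields out of
such a reply (compatibility_info is a hypothetical name; the keys and the
string-typed "kvmEnabled" value are as logged):

import json

def compatibility_info(reply_text):
    caps = json.loads(reply_text)["result"]
    return {
        "clusterLevels": caps["clusterLevels"],      # ["3.0", ..., "4.1"] above
        "supportedENGINEs": caps["supportedENGINEs"],
        "emulatedMachines": caps["emulatedMachines"],
        "kvmEnabled": caps["kvmEnabled"] == "true",  # logged as the string "true"
    }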
May  6 15:30:02 vdsm-fake journal: 2017-05-06 12:30:02,083 Message SEND
May  6 15:30:02 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:30:02 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:30:02 vdsm-fake journal: content-length:105
May  6 15:30:02 vdsm-fake journal:
May  6 15:30:02 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getHardwareInfo","params":{},"id":"91d84ea0-4e61-4eaa-bc88-6ac00b01c5dd"}
May  6 15:30:02 vdsm-fake journal: 2017-05-06 12:30:02,084
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:30:02 vdsm-fake journal: 2017-05-06 12:30:02,086
JsonRpcServer$MessageHandler Request is Host.getHardwareInfo got response
{"jsonrpc":"2.0","result":{"systemFamily":"","systemSerialNumber":"CZJ2320M6N","systemProductName":"ProLiant
DL160
G6","systemManufacturer":"HP","systemUUID":"840d548e-03d3-4425-bb69-fe6b5886929a","systemVersion":""},"id":"91d84ea0-4e61-4eaa-bc88-6ac00b01c5dd"}
May  6 15:30:02 vdsm-fake journal: 2017-05-06 12:30:02,086 Message MESSAGE
May  6 15:30:02 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:02 vdsm-fake journal: content-length:261
May  6 15:30:02 vdsm-fake journal:
May  6 15:30:02 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"systemFamily":"","systemSerialNumber":"CZJ2320M6N","systemProductName":"ProLiant
DL160
G6","systemManufacturer":"HP","systemUUID":"840d548e-03d3-4425-bb69-fe6b5886929a","systemVersion":""},"id":"91d84ea0-4e61-4eaa-bc88-6ac00b01c5dd"}
May  6 15:30:02 vdsm-fake journal: 2017-05-06 12:30:02,087
StompCommonClient Message sent: MESSAGE
May  6 15:30:02 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:02 vdsm-fake journal: content-length:261
May  6 15:30:02 vdsm-fake journal:
May  6 15:30:02 vdsm-fake journal: <JsonRpcResponse id:
"91d84ea0-4e61-4eaa-bc88-6ac00b01c5dd" result: {systemFamily=,
systemSerialNumber=CZJ2320M6N, systemProductName=ProLiant DL160 G6,
systemManufacturer=HP, systemUUID=840d548e-03d3-4425-bb69-fe6b5886929a,
systemVersion=}>
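At the transport level, every request above travels as a STOMP SEND frame
whose destination, reply-to and content-length headers are the lines
logged before each body. A minimal illustration of that framing (a sketch
only, not the actual Java StompCommonClient used by vdsm-fake;
content-length counts the body bytes and the frame is NUL-terminated):

def stomp_send_frame(body,
                     destination="jms.topic.vdsm_requests",
                     reply_to="jms.topic.vdsm_responses"):
    payload = body.encode("utf-8")
    headers = (
        "SEND\n"
        f"destination:{destination}\n"
        f"reply-to:{reply_to}\n"
        f"content-length:{len(payload)}\n"
        "\n"
    )
    return headers.encode("utf-8") + payload + b"\x00"  # NUL ends the frame

frame = stomp_send_frame(
    '{"jsonrpc":"2.0","method":"Host.getHardwareInfo","params":{},"id":"1"}')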
May  6 15:30:14 vdsm-fake journal: 2017-05-06 12:30:14,194 Message SEND
May  6 15:30:14 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:30:14 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:30:14 vdsm-fake journal: content-length:103
May  6 15:30:14 vdsm-fake journal:
May  6 15:30:14 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"69d2bf07-cc42-4977-bf61-00ccf3ab3701"}
May  6 15:30:14 vdsm-fake journal: 2017-05-06 12:30:14,195
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:30:14 vdsm-fake journal: 2017-05-06 12:30:14,204
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"69d2bf07-cc42-4977-bf61-00ccf3ab3701"}
May  6 15:30:14 vdsm-fake journal: 2017-05-06 12:30:14,204 Message MESSAGE
May  6 15:30:14 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:14 vdsm-fake journal: content-length:73
May  6 15:30:14 vdsm-fake journal:
May  6 15:30:14 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"69d2bf07-cc42-4977-bf61-00ccf3ab3701"}
May  6 15:30:14 vdsm-fake journal: 2017-05-06 12:30:14,205
StompCommonClient Message sent: MESSAGE
May  6 15:30:14 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:14 vdsm-fake journal: content-length:73
May  6 15:30:14 vdsm-fake journal:
May  6 15:30:14 vdsm-fake journal: <JsonRpcResponse id:
"69d2bf07-cc42-4977-bf61-00ccf3ab3701" result: []>
May  6 15:30:14 vdsm-fake journal: 2017-05-06 12:30:14,458 Message SEND
May  6 15:30:14 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:30:14 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:30:14 vdsm-fake journal: content-length:98
May  6 15:30:14 vdsm-fake journal:
May  6 15:30:14 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getStats","params":{},"id":"95f967ce-0bbe-4808-9d9c-269c8e284e97"}
May  6 15:30:14 vdsm-fake journal: 2017-05-06 12:30:14,459
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:30:14 vdsm-fake journal: 2017-05-06 12:30:14,561
JsonRpcServer$MessageHandler Request is Host.getStats got response
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:30:14
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"12","name":"bond0","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"14","name":"bond1","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"11","name":"bond4","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"11","name":"em1","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"13","name":"bond3","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"14","name":"em2","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"15","cpuUser":"11","memCommitted":0,"cpuSys":"19","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"89","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"7"},"1":{"memFree":0,"memPercent":"7"}},"txRate":"","statsAge":"0.43","memUsed":"7","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073814"},"id":"95f967ce-0bbe-4808-9d9c-269c8e284e97"}
May  6 15:30:14 vdsm-fake journal: 2017-05-06 12:30:14,562 Message MESSAGE
May  6 15:30:14 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:14 vdsm-fake journal: content-length:2040
May  6 15:30:14 vdsm-fake journal:
May  6 15:30:14 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:30:14
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"12","name":"bond0","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"14","name":"bond1","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"11","name":"bond4","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"11","name":"em1","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"13","name":"bond3","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"14","name":"em2","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"15","cpuUser":"11","memCommitted":0,"cpuSys":"19","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"89","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"7"},"1":{"memFree":0,"memPercent":"7"}},"txRate":"","statsAge":"0.43","memUsed":"7","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073814"},"id":"95f967ce-0bbe-4808-9d9c-269c8e284e97"}
May  6 15:30:14 vdsm-fake journal: 2017-05-06 12:30:14,563
StompCommonClient Message sent: MESSAGE
May  6 15:30:14 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:14 vdsm-fake journal: content-length:2040
May  6 15:30:14 vdsm-fake journal:
May  6 15:30:14 vdsm-fake journal: <JsonRpcResponse id:
"95f967ce-0bbe-4808-9d9c-269c8e284e97" result:
{dateTime=2017-05-06T12:30:14 GMT,
generationID=f554221c-bbe4-4ad1-8cdf-11eff9359359, thpState=always,
cpuSysVdsmd=0.25, anonHugePages=662, rxRate=0.00, txDropped=0,
network={bond0={txErrors=0, txRate=12, name=bond0, state=up, txDropped=0,
rxRate=10, rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9},
bond2={txErrors=0, txRate=10, name=bond2, state=up, txDropped=0, rxRate=12,
rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond1={txErrors=0,
txRate=14, name=bond1, state=up, txDropped=0, rxRate=14, rxErrors=0,
rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond4={txErrors=0, txRate=11,
name=bond4, state=up, txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em1={txErrors=0, txRate=11, name=em1, state=up,
txDropped=0, rxRate=10, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, bond3={txErrors=0, txRate=13, name=bond3,
state=up, txDropped=0, rxRate=11, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em2={txErrors=0, txRate=14, name=em2, state=up,
txDropped=0, rxRate=11, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}}, memShared=0, cpuLoad=15, cpuUser=11,
memCommitted=0, cpuSys=19, vmCount=0, rxDropped=14965, ksmState=false,
ksmPages=100, swapFree=20031, memAvailable=6435, momStatus=active,
netConfigDirty=False, memFree=0, diskStats={/var/log/core={free=44231},
/var/log={free=44231}, /var/run/vdsm/={free=3978}, /tmp={free=44231}},
storageDomains={}, vmMigrating=0, cpuIdle=89, swapTotal=20031,
cpuUserVdsmd=0.50, numaNodeMemFree={0={memFree=0, memPercent=7},
1={memFree=0, memPercent=7}}, txRate=, statsAge=0.43, memUsed=7,
vmActive=0, ksmCpu=0, elapsedTime=1494073814}>
May  6 15:30:29 vdsm-fake journal: 2017-05-06 12:30:29,217 Message SEND
May  6 15:30:29 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:30:29 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:30:29 vdsm-fake journal: content-length:103
May  6 15:30:29 vdsm-fake journal:
May  6 15:30:29 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"5b5b88a3-3231-4a2e-8401-64801f44b7dc"}
May  6 15:30:29 vdsm-fake journal: 2017-05-06 12:30:29,219
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:30:29 vdsm-fake journal: 2017-05-06 12:30:29,221
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"5b5b88a3-3231-4a2e-8401-64801f44b7dc"}
May  6 15:30:29 vdsm-fake journal: 2017-05-06 12:30:29,222 Message MESSAGE
May  6 15:30:29 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:29 vdsm-fake journal: content-length:73
May  6 15:30:29 vdsm-fake journal:
May  6 15:30:29 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"5b5b88a3-3231-4a2e-8401-64801f44b7dc"}
May  6 15:30:29 vdsm-fake journal: 2017-05-06 12:30:29,222
StompCommonClient Message sent: MESSAGE
May  6 15:30:29 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:29 vdsm-fake journal: content-length:73
May  6 15:30:29 vdsm-fake journal:
May  6 15:30:29 vdsm-fake journal: <JsonRpcResponse id:
"5b5b88a3-3231-4a2e-8401-64801f44b7dc" result: []>
May  6 15:30:29 vdsm-fake journal: 2017-05-06 12:30:29,665 Message SEND
May  6 15:30:29 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:30:29 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:30:29 vdsm-fake journal: content-length:98
May  6 15:30:29 vdsm-fake journal:
May  6 15:30:29 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getStats","params":{},"id":"d91a75b3-4c86-403e-9eb4-7a714f6a8722"}
May  6 15:30:29 vdsm-fake journal: 2017-05-06 12:30:29,667
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:30:29 vdsm-fake journal: 2017-05-06 12:30:29,771
JsonRpcServer$MessageHandler Request is Host.getStats got response
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:30:29
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"14","name":"bond0","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"14","name":"bond2","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"11","name":"bond1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"13","name":"bond4","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"12","name":"em1","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"14","name":"bond3","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"10","name":"em2","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"13","cpuUser":"13","memCommitted":0,"cpuSys":"12","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"87","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"8"},"1":{"memFree":0,"memPercent":"8"}},"txRate":"","statsAge":"0.43","memUsed":"8","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073829"},"id":"d91a75b3-4c86-403e-9eb4-7a714f6a8722"}
May  6 15:30:29 vdsm-fake journal: 2017-05-06 12:30:29,771 Message MESSAGE
May  6 15:30:29 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:29 vdsm-fake journal: content-length:2040
May  6 15:30:29 vdsm-fake journal:
May  6 15:30:29 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:30:29
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"14","name":"bond0","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"14","name":"bond2","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"11","name":"bond1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"13","name":"bond4","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"12","name":"em1","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"14","name":"bond3","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"10","name":"em2","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"13","cpuUser":"13","memCommitted":0,"cpuSys":"12","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"87","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"8"},"1":{"memFree":0,"memPercent":"8"}},"txRate":"","statsAge":"0.43","memUsed":"8","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073829"},"id":"d91a75b3-4c86-403e-9eb4-7a714f6a8722"}
May  6 15:30:29 vdsm-fake journal: 2017-05-06 12:30:29,772
StompCommonClient Message sent: MESSAGE
May  6 15:30:29 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:29 vdsm-fake journal: content-length:2040
May  6 15:30:29 vdsm-fake journal:
May  6 15:30:29 vdsm-fake journal: <JsonRpcResponse id:
"d91a75b3-4c86-403e-9eb4-7a714f6a8722" result:
{dateTime=2017-05-06T12:30:29 GMT,
generationID=f554221c-bbe4-4ad1-8cdf-11eff9359359, thpState=always,
cpuSysVdsmd=0.25, anonHugePages=662, rxRate=0.00, txDropped=0,
network={bond0={txErrors=0, txRate=14, name=bond0, state=up, txDropped=0,
rxRate=12, rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9},
bond2={txErrors=0, txRate=14, name=bond2, state=up, txDropped=0, rxRate=11,
rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond1={txErrors=0,
txRate=11, name=bond1, state=up, txDropped=0, rxRate=12, rxErrors=0,
rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond4={txErrors=0, txRate=13,
name=bond4, state=up, txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em1={txErrors=0, txRate=12, name=em1, state=up,
txDropped=0, rxRate=13, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, bond3={txErrors=0, txRate=14, name=bond3,
state=up, txDropped=0, rxRate=13, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em2={txErrors=0, txRate=10, name=em2, state=up,
txDropped=0, rxRate=14, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}}, memShared=0, cpuLoad=13, cpuUser=13,
memCommitted=0, cpuSys=12, vmCount=0, rxDropped=14965, ksmState=false,
ksmPages=100, swapFree=20031, memAvailable=6435, momStatus=active,
netConfigDirty=False, memFree=0, diskStats={/var/log/core={free=44231},
/var/log={free=44231}, /var/run/vdsm/={free=3978}, /tmp={free=44231}},
storageDomains={}, vmMigrating=0, cpuIdle=87, swapTotal=20031,
cpuUserVdsmd=0.50, numaNodeMemFree={0={memFree=0, memPercent=8},
1={memFree=0, memPercent=8}}, txRate=, statsAge=0.43, memUsed=8,
vmActive=0, ksmCpu=0, elapsedTime=1494073829}>
May  6 15:30:44 vdsm-fake journal: 2017-05-06 12:30:44,230 Message SEND
May  6 15:30:44 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:30:44 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:30:44 vdsm-fake journal: content-length:103
May  6 15:30:44 vdsm-fake journal:
May  6 15:30:44 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"63928536-c2ed-4b95-915d-ff4430e5c516"}
May  6 15:30:44 vdsm-fake journal: 2017-05-06 12:30:44,234
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:30:44 vdsm-fake journal: 2017-05-06 12:30:44,235
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"63928536-c2ed-4b95-915d-ff4430e5c516"}
May  6 15:30:44 vdsm-fake journal: 2017-05-06 12:30:44,236 Message MESSAGE
May  6 15:30:44 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:44 vdsm-fake journal: content-length:73
May  6 15:30:44 vdsm-fake journal:
May  6 15:30:44 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"63928536-c2ed-4b95-915d-ff4430e5c516"}
May  6 15:30:44 vdsm-fake journal: 2017-05-06 12:30:44,236
StompCommonClient Message sent: MESSAGE
May  6 15:30:44 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:44 vdsm-fake journal: content-length:73
May  6 15:30:44 vdsm-fake journal:
May  6 15:30:44 vdsm-fake journal: <JsonRpcResponse id:
"63928536-c2ed-4b95-915d-ff4430e5c516" result: []>
May  6 15:30:44 vdsm-fake journal: 2017-05-06 12:30:44,874 Message SEND
May  6 15:30:44 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:30:44 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:30:44 vdsm-fake journal: content-length:98
May  6 15:30:44 vdsm-fake journal:
May  6 15:30:44 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getStats","params":{},"id":"4ca50715-9cbe-4c37-af3f-703f59eec6b9"}
May  6 15:30:44 vdsm-fake journal: 2017-05-06 12:30:44,875
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:30:44 vdsm-fake journal: 2017-05-06 12:30:44,979
JsonRpcServer$MessageHandler Request is Host.getStats got response
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:30:44
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"12","name":"bond0","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"12","name":"bond2","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"11","name":"bond1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"10","name":"bond4","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"13","name":"em1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"11","name":"bond3","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"10","name":"em2","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"11","cpuUser":"12","memCommitted":0,"cpuSys":"10","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"88","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"8"},"1":{"memFree":0,"memPercent":"8"}},"txRate":"","statsAge":"0.43","memUsed":"8","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073844"},"id":"4ca50715-9cbe-4c37-af3f-703f59eec6b9"}
May  6 15:30:44 vdsm-fake journal: 2017-05-06 12:30:44,980 Message MESSAGE
May  6 15:30:44 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:44 vdsm-fake journal: content-length:2040
May  6 15:30:44 vdsm-fake journal:
May  6 15:30:44 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:30:44
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"12","name":"bond0","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"12","name":"bond2","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"11","name":"bond1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"10","name":"bond4","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"13","name":"em1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"11","name":"bond3","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"10","name":"em2","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"11","cpuUser":"12","memCommitted":0,"cpuSys":"10","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"88","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"8"},"1":{"memFree":0,"memPercent":"8"}},"txRate":"","statsAge":"0.43","memUsed":"8","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073844"},"id":"4ca50715-9cbe-4c37-af3f-703f59eec6b9"}
May  6 15:30:44 vdsm-fake journal: 2017-05-06 12:30:44,981
StompCommonClient Message sent: MESSAGE
May  6 15:30:45 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:45 vdsm-fake journal: content-length:2040
May  6 15:30:45 vdsm-fake journal:
May  6 15:30:45 vdsm-fake journal: <JsonRpcResponse id:
"4ca50715-9cbe-4c37-af3f-703f59eec6b9" result:
{dateTime=2017-05-06T12:30:44 GMT,
generationID=f554221c-bbe4-4ad1-8cdf-11eff9359359, thpState=always,
cpuSysVdsmd=0.25, anonHugePages=662, rxRate=0.00, txDropped=0,
network={bond0={txErrors=0, txRate=12, name=bond0, state=up, txDropped=0,
rxRate=14, rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9},
bond2={txErrors=0, txRate=12, name=bond2, state=up, txDropped=0, rxRate=11,
rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond1={txErrors=0,
txRate=11, name=bond1, state=up, txDropped=0, rxRate=12, rxErrors=0,
rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond4={txErrors=0, txRate=10,
name=bond4, state=up, txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em1={txErrors=0, txRate=13, name=em1, state=up,
txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, bond3={txErrors=0, txRate=11, name=bond3,
state=up, txDropped=0, rxRate=14, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em2={txErrors=0, txRate=10, name=em2, state=up,
txDropped=0, rxRate=14, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}}, memShared=0, cpuLoad=11, cpuUser=12,
memCommitted=0, cpuSys=10, vmCount=0, rxDropped=14965, ksmState=false,
ksmPages=100, swapFree=20031, memAvailable=6435, momStatus=active,
netConfigDirty=False, memFree=0, diskStats={/var/log/core={free=44231},
/var/log={free=44231}, /var/run/vdsm/={free=3978}, /tmp={free=44231}},
storageDomains={}, vmMigrating=0, cpuIdle=88, swapTotal=20031,
cpuUserVdsmd=0.50, numaNodeMemFree={0={memFree=0, memPercent=8},
1={memFree=0, memPercent=8}}, txRate=, statsAge=0.43, memUsed=8,
vmActive=0, ksmCpu=0, elapsedTime=1494073844}>
May  6 15:30:59 vdsm-fake journal: 2017-05-06 12:30:59,245 Message SEND
May  6 15:30:59 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:30:59 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:30:59 vdsm-fake journal: content-length:103
May  6 15:30:59 vdsm-fake journal:
May  6 15:30:59 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"61a9a062-617e-43dc-8479-800409dde43c"}
May  6 15:30:59 vdsm-fake journal: 2017-05-06 12:30:59,246
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:30:59 vdsm-fake journal: 2017-05-06 12:30:59,250
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"61a9a062-617e-43dc-8479-800409dde43c"}
May  6 15:30:59 vdsm-fake journal: 2017-05-06 12:30:59,250 Message MESSAGE
May  6 15:30:59 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:59 vdsm-fake journal: content-length:73
May  6 15:30:59 vdsm-fake journal:
May  6 15:30:59 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"61a9a062-617e-43dc-8479-800409dde43c"}
May  6 15:30:59 vdsm-fake journal: 2017-05-06 12:30:59,251
StompCommonClient Message sent: MESSAGE
May  6 15:30:59 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:30:59 vdsm-fake journal: content-length:73
May  6 15:30:59 vdsm-fake journal:
May  6 15:30:59 vdsm-fake journal: <JsonRpcResponse id:
"61a9a062-617e-43dc-8479-800409dde43c" result: []>
May  6 15:31:00 vdsm-fake journal: 2017-05-06 12:31:00,065 Message SEND
May  6 15:31:00 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:31:00 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:31:00 vdsm-fake journal: content-length:98
May  6 15:31:00 vdsm-fake journal:
May  6 15:31:00 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getStats","params":{},"id":"68177541-af46-41c8-954d-4e926a1998e6"}
May  6 15:31:00 vdsm-fake journal: 2017-05-06 12:31:00,066
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:31:00 vdsm-fake journal: 2017-05-06 12:31:00,170
JsonRpcServer$MessageHandler Request is Host.getStats got response
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:31:00
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"10","name":"bond0","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"14","name":"bond2","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"10","name":"bond1","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"14","name":"bond4","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"11","name":"em1","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"12","name":"bond3","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"14","name":"em2","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"16","cpuUser":"13","memCommitted":0,"cpuSys":"15","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"87","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"5"},"1":{"memFree":0,"memPercent":"5"}},"txRate":"","statsAge":"0.43","memUsed":"5","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073860"},"id":"68177541-af46-41c8-954d-4e926a1998e6"}
May  6 15:31:00 vdsm-fake journal: 2017-05-06 12:31:00,171 Message MESSAGE
May  6 15:31:00 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:31:00 vdsm-fake journal: content-length:2040
May  6 15:31:00 vdsm-fake journal:
May  6 15:31:00 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:31:00
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"10","name":"bond0","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"14","name":"bond2","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"10","name":"bond1","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"14","name":"bond4","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"11","name":"em1","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"12","name":"bond3","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"14","name":"em2","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"16","cpuUser":"13","memCommitted":0,"cpuSys":"15","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"87","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"5"},"1":{"memFree":0,"memPercent":"5"}},"txRate":"","statsAge":"0.43","memUsed":"5","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073860"},"id":"68177541-af46-41c8-954d-4e926a1998e6"}
May  6 15:31:00 vdsm-fake journal: 2017-05-06 12:31:00,172
StompCommonClient Message sent: MESSAGE
May  6 15:31:00 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:31:00 vdsm-fake journal: content-length:2040
May  6 15:31:00 vdsm-fake journal:
May  6 15:31:00 vdsm-fake journal: <JsonRpcResponse id:
"68177541-af46-41c8-954d-4e926a1998e6" result:
{dateTime=2017-05-06T12:31:00 GMT,
generationID=f554221c-bbe4-4ad1-8cdf-11eff9359359, thpState=always,
cpuSysVdsmd=0.25, anonHugePages=662, rxRate=0.00, txDropped=0,
network={bond0={txErrors=0, txRate=10, name=bond0, state=up, txDropped=0,
rxRate=14, rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9},
bond2={txErrors=0, txRate=14, name=bond2, state=up, txDropped=0, rxRate=13,
rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond1={txErrors=0,
txRate=10, name=bond1, state=up, txDropped=0, rxRate=10, rxErrors=0,
rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond4={txErrors=0, txRate=14,
name=bond4, state=up, txDropped=0, rxRate=10, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em1={txErrors=0, txRate=11, name=em1, state=up,
txDropped=0, rxRate=14, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, bond3={txErrors=0, txRate=12, name=bond3,
state=up, txDropped=0, rxRate=13, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em2={txErrors=0, txRate=14, name=em2, state=up,
txDropped=0, rxRate=14, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}}, memShared=0, cpuLoad=16, cpuUser=13,
memCommitted=0, cpuSys=15, vmCount=0, rxDropped=14965, ksmState=false,
ksmPages=100, swapFree=20031, memAvailable=6435, momStatus=active,
netConfigDirty=False, memFree=0, diskStats={/var/log/core={free=44231},
/var/log={free=44231}, /var/run/vdsm/={free=3978}, /tmp={free=44231}},
storageDomains={}, vmMigrating=0, cpuIdle=87, swapTotal=20031,
cpuUserVdsmd=0.50, numaNodeMemFree={0={memFree=0, memPercent=5},
1={memFree=0, memPercent=5}}, txRate=, statsAge=0.43, memUsed=5,
vmActive=0, ksmCpu=0, elapsedTime=1494073860}>
May  6 15:31:14 vdsm-fake journal: 2017-05-06 12:31:14,265 Message SEND
May  6 15:31:14 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:31:14 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:31:14 vdsm-fake journal: content-length:103
May  6 15:31:14 vdsm-fake journal:
May  6 15:31:14 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"cce50622-23cf-4e78-80f5-ca9c64dd6a6c"}
May  6 15:31:14 vdsm-fake journal: 2017-05-06 12:31:14,269
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:31:14 vdsm-fake journal: 2017-05-06 12:31:14,271
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"cce50622-23cf-4e78-80f5-ca9c64dd6a6c"}
May  6 15:31:14 vdsm-fake journal: 2017-05-06 12:31:14,271 Message MESSAGE
May  6 15:31:14 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:31:14 vdsm-fake journal: content-length:73
May  6 15:31:14 vdsm-fake journal:
May  6 15:31:14 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"cce50622-23cf-4e78-80f5-ca9c64dd6a6c"}
May  6 15:31:14 vdsm-fake journal: 2017-05-06 12:31:14,272
StompCommonClient Message sent: MESSAGE
May  6 15:31:14 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:31:14 vdsm-fake journal: content-length:73
May  6 15:31:14 vdsm-fake journal:
May  6 15:31:14 vdsm-fake journal: <JsonRpcResponse id:
"cce50622-23cf-4e78-80f5-ca9c64dd6a6c" result: []>
May  6 15:31:15 vdsm-fake journal: 2017-05-06 12:31:15,259 Message SEND
May  6 15:31:15 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:31:15 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:31:15 vdsm-fake journal: content-length:98
May  6 15:31:15 vdsm-fake journal:
May  6 15:31:15 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getStats","params":{},"id":"8c87dafa-b173-4a7b-ae66-47bba3b4ec75"}
May  6 15:31:15 vdsm-fake journal: 2017-05-06 12:31:15,260
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:31:15 vdsm-fake journal: 2017-05-06 12:31:15,362
JsonRpcServer$MessageHandler Request is Host.getStats got response
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:31:15
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"14","name":"bond0","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"13","name":"bond2","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"12","name":"bond1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"11","name":"bond4","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"14","name":"em1","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"12","name":"bond3","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"12","name":"em2","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"18","cpuUser":"12","memCommitted":0,"cpuSys":"13","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"88","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"5"},"1":{"memFree":0,"memPercent":"5"}},"txRate":"","statsAge":"0.43","memUsed":"5","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073875"},"id":"8c87dafa-b173-4a7b-ae66-47bba3b4ec75"}
May  6 15:31:15 vdsm-fake journal: 2017-05-06 12:31:15,363 Message MESSAGE
May  6 15:31:15 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:31:15 vdsm-fake journal: content-length:2040
May  6 15:31:15 vdsm-fake journal:
May  6 15:31:15 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:31:15
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"14","name":"bond0","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"13","name":"bond2","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"12","name":"bond1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"11","name":"bond4","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"14","name":"em1","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"12","name":"bond3","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"12","name":"em2","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"18","cpuUser":"12","memCommitted":0,"cpuSys":"13","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"88","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"5"},"1":{"memFree":0,"memPercent":"5"}},"txRate":"","statsAge":"0.43","memUsed":"5","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073875"},"id":"8c87dafa-b173-4a7b-ae66-47bba3b4ec75"}
May  6 15:31:15 vdsm-fake journal: 2017-05-06 12:31:15,363
StompCommonClient Message sent: MESSAGE
May  6 15:31:15 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:31:15 vdsm-fake journal: content-length:2040
May  6 15:31:15 vdsm-fake journal:
May  6 15:31:15 vdsm-fake journal: <JsonRpcResponse id:
"8c87dafa-b173-4a7b-ae66-47bba3b4ec75" result:
{dateTime=2017-05-06T12:31:15 GMT,
generationID=f554221c-bbe4-4ad1-8cdf-11eff9359359, thpState=always,
cpuSysVdsmd=0.25, anonHugePages=662, rxRate=0.00, txDropped=0,
network={bond0={txErrors=0, txRate=14, name=bond0, state=up, txDropped=0,
rxRate=14, rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9},
bond2={txErrors=0, txRate=13, name=bond2, state=up, txDropped=0, rxRate=13,
rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond1={txErrors=0,
txRate=12, name=bond1, state=up, txDropped=0, rxRate=12, rxErrors=0,
rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond4={txErrors=0, txRate=11,
name=bond4, state=up, txDropped=0, rxRate=14, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em1={txErrors=0, txRate=14, name=em1, state=up,
txDropped=0, rxRate=13, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, bond3={txErrors=0, txRate=12, name=bond3,
state=up, txDropped=0, rxRate=13, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em2={txErrors=0, txRate=12, name=em2, state=up,
txDropped=0, rxRate=14, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}}, memShared=0, cpuLoad=18, cpuUser=12,
memCommitted=0, cpuSys=13, vmCount=0, rxDropped=14965, ksmState=false,
ksmPages=100, swapFree=20031, memAvailable=6435, momStatus=active,
netConfigDirty=False, memFree=0, diskStats={/var/log/core={free=44231},
/var/log={free=44231}, /var/run/vdsm/={free=3978}, /tmp={free=44231}},
storageDomains={}, vmMigrating=0, cpuIdle=88, swapTotal=20031,
cpuUserVdsmd=0.50, numaNodeMemFree={0={memFree=0, memPercent=5},
1={memFree=0, memPercent=5}}, txRate=, statsAge=0.43, memUsed=5,
vmActive=0, ksmCpu=0, elapsedTime=1494073875}>
May  6 15:31:29 vdsm-fake journal: 2017-05-06 12:31:29,283 Message SEND
May  6 15:31:29 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:31:29 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:31:29 vdsm-fake journal: content-length:103
May  6 15:31:29 vdsm-fake journal:
May  6 15:31:29 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"091cbf86-937d-4e53-8595-8433daf5119d"}
May  6 15:31:29 vdsm-fake journal: 2017-05-06 12:31:29,285
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:31:29 vdsm-fake journal: 2017-05-06 12:31:29,286
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"091cbf86-937d-4e53-8595-8433daf5119d"}
May  6 15:31:29 vdsm-fake journal: 2017-05-06 12:31:29,287 Message MESSAGE
May  6 15:31:29 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:31:29 vdsm-fake journal: content-length:73
May  6 15:31:29 vdsm-fake journal:
May  6 15:31:29 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"091cbf86-937d-4e53-8595-8433daf5119d"}
May  6 15:31:29 vdsm-fake journal: 2017-05-06 12:31:29,287
StompCommonClient Message sent: MESSAGE
May  6 15:31:29 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:31:29 vdsm-fake journal: content-length:73
May  6 15:31:29 vdsm-fake journal:
May  6 15:31:29 vdsm-fake journal: <JsonRpcResponse id:
"091cbf86-937d-4e53-8595-8433daf5119d" result: []>
May  6 15:31:30 vdsm-fake journal: 2017-05-06 12:31:30,423 Message SEND
May  6 15:31:30 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:31:30 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:31:30 vdsm-fake journal: content-length:98
May  6 15:31:30 vdsm-fake journal:
May  6 15:31:30 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getStats","params":{},"id":"4b1581a3-dfdc-4107-9b39-08c925eb1461"}
May  6 15:31:30 vdsm-fake journal: 2017-05-06 12:31:30,424
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:31:30 vdsm-fake journal: 2017-05-06 12:31:30,528
JsonRpcServer$MessageHandler Request is Host.getStats got response
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:31:30
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"11","name":"bond0","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"13","name":"bond1","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"14","name":"bond4","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"11","name":"em1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"13","name":"bond3","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"14","name":"em2","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"10","cpuUser":"19","memCommitted":0,"cpuSys":"11","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"81","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"6"},"1":{"memFree":0,"memPercent":"6"}},"txRate":"","statsAge":"0.43","memUsed":"6","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073890"},"id":"4b1581a3-dfdc-4107-9b39-08c925eb1461"}
May  6 15:31:30 vdsm-fake journal: 2017-05-06 12:31:30,530 Message MESSAGE
May  6 15:31:30 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:31:30 vdsm-fake journal: content-length:2040
May  6 15:31:30 vdsm-fake journal:
May  6 15:31:30 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:31:30
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"11","name":"bond0","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"13","name":"bond1","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"14","name":"bond4","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"11","name":"em1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"13","name":"bond3","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"14","name":"em2","state":"up","txDropped":"0","rxRate":"14","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"10","cpuUser":"19","memCommitted":0,"cpuSys":"11","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"81","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"6"},"1":{"memFree":0,"memPercent":"6"}},"txRate":"","statsAge":"0.43","memUsed":"6","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073890"},"id":"4b1581a3-dfdc-4107-9b39-08c925eb1461"}
May  6 15:31:30 vdsm-fake journal: 2017-05-06 12:31:30,530
StompCommonClient Message sent: MESSAGE
May  6 15:31:30 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:31:30 vdsm-fake journal: content-length:2040
May  6 15:31:30 vdsm-fake journal:
May  6 15:31:30 vdsm-fake journal: <JsonRpcResponse id:
"4b1581a3-dfdc-4107-9b39-08c925eb1461" result:
{dateTime=2017-05-06T12:31:30 GMT,
generationID=f554221c-bbe4-4ad1-8cdf-11eff9359359, thpState=always,
cpuSysVdsmd=0.25, anonHugePages=662, rxRate=0.00, txDropped=0,
network={bond0={txErrors=0, txRate=11, name=bond0, state=up, txDropped=0,
rxRate=11, rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9},
bond2={txErrors=0, txRate=10, name=bond2, state=up, txDropped=0, rxRate=11,
rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond1={txErrors=0,
txRate=13, name=bond1, state=up, txDropped=0, rxRate=13, rxErrors=0,
rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond4={txErrors=0, txRate=14,
name=bond4, state=up, txDropped=0, rxRate=14, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em1={txErrors=0, txRate=11, name=em1, state=up,
txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, bond3={txErrors=0, txRate=13, name=bond3,
state=up, txDropped=0, rxRate=10, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em2={txErrors=0, txRate=14, name=em2, state=up,
txDropped=0, rxRate=14, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}}, memShared=0, cpuLoad=10, cpuUser=19,
memCommitted=0, cpuSys=11, vmCount=0, rxDropped=14965, ksmState=false,
ksmPages=100, swapFree=20031, memAvailable=6435, momStatus=active,
netConfigDirty=False, memFree=0, diskStats={/var/log/core={free=44231},
/var/log={free=44231}, /var/run/vdsm/={free=3978}, /tmp={free=44231}},
storageDomains={}, vmMigrating=0, cpuIdle=81, swapTotal=20031,
cpuUserVdsmd=0.50, numaNodeMemFree={0={memFree=0, memPercent=6},
1={memFree=0, memPercent=6}}, txRate=, statsAge=0.43, memUsed=6,
vmActive=0, ksmCpu=0, elapsedTime=1494073890}>
May  6 15:31:44 vdsm-fake journal: 2017-05-06 12:31:44,299 Message SEND
May  6 15:31:44 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:31:44 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:31:44 vdsm-fake journal: content-length:103
May  6 15:31:44 vdsm-fake journal:
May  6 15:31:44 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"233e4c3e-90ea-48ae-b3ec-cb5896a3503a"}
May  6 15:31:44 vdsm-fake journal: 2017-05-06 12:31:44,301
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:31:44 vdsm-fake journal: 2017-05-06 12:31:44,303
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"233e4c3e-90ea-48ae-b3ec-cb5896a3503a"}
May  6 15:31:44 vdsm-fake journal: 2017-05-06 12:31:44,304 Message MESSAGE
May  6 15:31:44 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:31:44 vdsm-fake journal: content-length:73
May  6 15:31:44 vdsm-fake journal:
May  6 15:31:44 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"233e4c3e-90ea-48ae-b3ec-cb5896a3503a"}
May  6 15:31:44 vdsm-fake journal: 2017-05-06 12:31:44,304
StompCommonClient Message sent: MESSAGE
May  6 15:31:44 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:31:44 vdsm-fake journal: content-length:73
May  6 15:31:44 vdsm-fake journal:
May  6 15:31:44 vdsm-fake journal: <JsonRpcResponse id:
"233e4c3e-90ea-48ae-b3ec-cb5896a3503a" result: []>
May  6 15:31:45 vdsm-fake journal: 2017-05-06 12:31:45,619 Message SEND
May  6 15:31:45 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:31:45 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:31:45 vdsm-fake journal: content-length:98
May  6 15:31:45 vdsm-fake journal:
May  6 15:31:45 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getStats","params":{},"id":"672b7eb0-3cdc-4a3a-840d-262ec92c4041"}
May  6 15:31:45 vdsm-fake journal: 2017-05-06 12:31:45,620
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:31:45 vdsm-fake journal: 2017-05-06 12:31:45,723
JsonRpcServer$MessageHandler Request is Host.getStats got response
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:31:45
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"13","name":"bond0","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"12","name":"bond1","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"12","name":"bond4","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"12","name":"em1","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"13","name":"bond3","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"12","name":"em2","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"18","cpuUser":"10","memCommitted":0,"cpuSys":"14","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"90","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"9"},"1":{"memFree":0,"memPercent":"9"}},"txRate":"","statsAge":"0.43","memUsed":"9","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073905"},"id":"672b7eb0-3cdc-4a3a-840d-262ec92c4041"}
May  6 15:31:45 vdsm-fake journal: 2017-05-06 12:31:45,723 Message MESSAGE
May  6 15:31:45 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:31:45 vdsm-fake journal: content-length:2040
May  6 15:31:45 vdsm-fake journal:
May  6 15:31:45 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:31:45
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"13","name":"bond0","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"12","name":"bond1","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"12","name":"bond4","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"12","name":"em1","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"13","name":"bond3","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"12","name":"em2","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"18","cpuUser":"10","memCommitted":0,"cpuSys":"14","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"90","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"9"},"1":{"memFree":0,"memPercent":"9"}},"txRate":"","statsAge":"0.43","memUsed":"9","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073905"},"id":"672b7eb0-3cdc-4a3a-840d-262ec92c4041"}
May  6 15:31:45 vdsm-fake journal: 2017-05-06 12:31:45,724
StompCommonClient Message sent: MESSAGE
May  6 15:31:45 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:31:45 vdsm-fake journal: content-length:2040
May  6 15:31:45 vdsm-fake journal:
May  6 15:31:45 vdsm-fake journal: <JsonRpcResponse id:
"672b7eb0-3cdc-4a3a-840d-262ec92c4041" result:
{dateTime=2017-05-06T12:31:45 GMT,
generationID=f554221c-bbe4-4ad1-8cdf-11eff9359359, thpState=always,
cpuSysVdsmd=0.25, anonHugePages=662, rxRate=0.00, txDropped=0,
network={bond0={txErrors=0, txRate=13, name=bond0, state=up, txDropped=0,
rxRate=11, rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9},
bond2={txErrors=0, txRate=10, name=bond2, state=up, txDropped=0, rxRate=10,
rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond1={txErrors=0,
txRate=12, name=bond1, state=up, txDropped=0, rxRate=11, rxErrors=0,
rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond4={txErrors=0, txRate=12,
name=bond4, state=up, txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em1={txErrors=0, txRate=12, name=em1, state=up,
txDropped=0, rxRate=13, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, bond3={txErrors=0, txRate=13, name=bond3,
state=up, txDropped=0, rxRate=13, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em2={txErrors=0, txRate=12, name=em2, state=up,
txDropped=0, rxRate=11, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}}, memShared=0, cpuLoad=18, cpuUser=10,
memCommitted=0, cpuSys=14, vmCount=0, rxDropped=14965, ksmState=false,
ksmPages=100, swapFree=20031, memAvailable=6435, momStatus=active,
netConfigDirty=False, memFree=0, diskStats={/var/log/core={free=44231},
/var/log={free=44231}, /var/run/vdsm/={free=3978}, /tmp={free=44231}},
storageDomains={}, vmMigrating=0, cpuIdle=90, swapTotal=20031,
cpuUserVdsmd=0.50, numaNodeMemFree={0={memFree=0, memPercent=9},
1={memFree=0, memPercent=9}}, txRate=, statsAge=0.43, memUsed=9,
vmActive=0, ksmCpu=0, elapsedTime=1494073905}>
May  6 15:31:59 vdsm-fake journal: 2017-05-06 12:31:59,313 Message SEND
May  6 15:31:59 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:31:59 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:31:59 vdsm-fake journal: content-length:103
May  6 15:31:59 vdsm-fake journal:
May  6 15:31:59 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getAllVmStats","params":{},"id":"0bf7be43-f620-407f-ae93-6e958489e6b8"}
May  6 15:31:59 vdsm-fake journal: 2017-05-06 12:31:59,322
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:31:59 vdsm-fake journal: 2017-05-06 12:31:59,325
JsonRpcServer$MessageHandler Request is Host.getAllVmStats got response
{"jsonrpc":"2.0","result":[],"id":"0bf7be43-f620-407f-ae93-6e958489e6b8"}
May  6 15:31:59 vdsm-fake journal: 2017-05-06 12:31:59,326 Message MESSAGE
May  6 15:31:59 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:31:59 vdsm-fake journal: content-length:73
May  6 15:31:59 vdsm-fake journal:
May  6 15:31:59 vdsm-fake journal:
{"jsonrpc":"2.0","result":[],"id":"0bf7be43-f620-407f-ae93-6e958489e6b8"}
May  6 15:31:59 vdsm-fake journal: 2017-05-06 12:31:59,326
StompCommonClient Message sent: MESSAGE
May  6 15:31:59 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:31:59 vdsm-fake journal: content-length:73
May  6 15:31:59 vdsm-fake journal:
May  6 15:31:59 vdsm-fake journal: <JsonRpcResponse id:
"0bf7be43-f620-407f-ae93-6e958489e6b8" result: []>
May  6 15:32:00 vdsm-fake journal: 2017-05-06 12:32:00,802 Message SEND
May  6 15:32:00 vdsm-fake journal: destination:jms.topic.vdsm_requests
May  6 15:32:00 vdsm-fake journal: reply-to:jms.topic.vdsm_responses
May  6 15:32:00 vdsm-fake journal: content-length:98
May  6 15:32:00 vdsm-fake journal:
May  6 15:32:00 vdsm-fake journal:
{"jsonrpc":"2.0","method":"Host.getStats","params":{},"id":"033f17fd-1519-4623-9041-226203d128ba"}
May  6 15:32:00 vdsm-fake journal: 2017-05-06 12:32:00,803
JsonRpcServer$MessageHandler client policy identifier test3
May  6 15:32:00 vdsm-fake journal: 2017-05-06 12:32:00,906
JsonRpcServer$MessageHandler Request is Host.getStats got response
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:32:00
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"12","name":"bond0","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"10","name":"bond1","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"13","name":"bond4","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"11","name":"em1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"14","name":"bond3","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"12","name":"em2","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"19","cpuUser":"14","memCommitted":0,"cpuSys":"14","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"86","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"5"},"1":{"memFree":0,"memPercent":"5"}},"txRate":"","statsAge":"0.43","memUsed":"5","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073920"},"id":"033f17fd-1519-4623-9041-226203d128ba"}
May  6 15:32:00 vdsm-fake journal: 2017-05-06 12:32:00,906 Message MESSAGE
May  6 15:32:00 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:32:00 vdsm-fake journal: content-length:2040
May  6 15:32:00 vdsm-fake journal:
May  6 15:32:00 vdsm-fake journal:
{"jsonrpc":"2.0","result":{"dateTime":"2017-05-06T12:32:00
GMT","generationID":"f554221c-bbe4-4ad1-8cdf-11eff9359359","thpState":"always","cpuSysVdsmd":"0.25","anonHugePages":"662","rxRate":"0.00","txDropped":"0","network":{"bond0":{"txErrors":"0","txRate":"12","name":"bond0","state":"up","txDropped":"0","rxRate":"11","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond2":{"txErrors":"0","txRate":"10","name":"bond2","state":"up","txDropped":"0","rxRate":"13","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond1":{"txErrors":"0","txRate":"10","name":"bond1","state":"up","txDropped":"0","rxRate":"10","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond4":{"txErrors":"0","txRate":"13","name":"bond4","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em1":{"txErrors":"0","txRate":"11","name":"em1","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"bond3":{"txErrors":"0","txRate":"14","name":"bond3","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"},"em2":{"txErrors":"0","txRate":"12","name":"em2","state":"up","txDropped":"0","rxRate":"12","rxErrors":"0","rxDropped":"14965","macAddr":"A9:FE:B3:73:B0:E9"}},"memShared":0,"cpuLoad":"19","cpuUser":"14","memCommitted":0,"cpuSys":"14","vmCount":0,"rxDropped":"14965","ksmState":false,"ksmPages":100,"swapFree":20031,"memAvailable":6435,"momStatus":"active","netConfigDirty":"False","memFree":"0","diskStats":{"/var/log/core":{"free":"44231"},"/var/log":{"free":"44231"},"/var/run/vdsm/":{"free":"3978"},"/tmp":{"free":"44231"}},"storageDomains":{},"vmMigrating":0,"cpuIdle":"86","swapTotal":20031,"cpuUserVdsmd":"0.50","numaNodeMemFree":{"0":{"memFree":0,"memPercent":"5"},"1":{"memFree":0,"memPercent":"5"}},"txRate":"","statsAge":"0.43","memUsed":"5","vmActive":0,"ksmCpu":0,"elapsedTime":"1494073920"},"id":"033f17fd-1519-4623-9041-226203d128ba"}
May  6 15:32:00 vdsm-fake journal: 2017-05-06 12:32:00,907
StompCommonClient Message sent: MESSAGE
May  6 15:32:00 vdsm-fake journal: destination:jms.queue.reponses
May  6 15:32:00 vdsm-fake journal: content-length:2040
May  6 15:32:00 vdsm-fake journal:
May  6 15:32:00 vdsm-fake journal: <JsonRpcResponse id:
"033f17fd-1519-4623-9041-226203d128ba" result:
{dateTime=2017-05-06T12:32:00 GMT,
generationID=f554221c-bbe4-4ad1-8cdf-11eff9359359, thpState=always,
cpuSysVdsmd=0.25, anonHugePages=662, rxRate=0.00, txDropped=0,
network={bond0={txErrors=0, txRate=12, name=bond0, state=up, txDropped=0,
rxRate=11, rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9},
bond2={txErrors=0, txRate=10, name=bond2, state=up, txDropped=0, rxRate=13,
rxErrors=0, rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond1={txErrors=0,
txRate=10, name=bond1, state=up, txDropped=0, rxRate=10, rxErrors=0,
rxDropped=14965, macAddr=A9:FE:B3:73:B0:E9}, bond4={txErrors=0, txRate=13,
name=bond4, state=up, txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em1={txErrors=0, txRate=11, name=em1, state=up,
txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, bond3={txErrors=0, txRate=14, name=bond3,
state=up, txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}, em2={txErrors=0, txRate=12, name=em2, state=up,
txDropped=0, rxRate=12, rxErrors=0, rxDropped=14965,
macAddr=A9:FE:B3:73:B0:E9}}, memShared=0, cpuLoad=19, cpuUser=14,
memCommitted=0, cpuSys=14, vmCount=0, rxDropped=14965, ksmState=false,
ksmPages=100, swapFree=20031, memAvailable=6435, momStatus=active,
netConfigDirty=False, memFree=0, diskStats={/var/log/core={free=44231},
/var/log={free=44231}, /var/run/vdsm/={free=3978}, /tmp={free=44231}},
storageDomains={}, vmMigrating=0, cpuIdle=86, swapTotal=20031,
cpuUserVdsmd=0.50, numaNodeMemFree={0={memFree=0, memPercent=5},
1={memFree=0, memPercent=5}}, txRate=, statsAge=0.43, memUsed=5,
vmActive=0, ksmCpu=0, elapsedTime=1494073920}>
^C
-- 
Niyazi Elvan
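
For anyone reading the capture above: each exchange is a plain JSON-RPC 2.0 call
(Host.getStats, Host.getAllVmStats) carried inside a STOMP frame; the request is
SENT to jms.topic.vdsm_requests and the response comes back as a MESSAGE on the
reply queue. A minimal sketch of how such a frame is put together (illustrative
only, not vdsm-fake's actual code; the destination names are taken from the log):

import json
import uuid

def stomp_send_frame(destination, reply_to, payload):
    # A STOMP frame is a command line, header lines, a blank line,
    # the body, and a terminating NUL byte.
    body = json.dumps(payload)
    headers = [
        "SEND",
        "destination:" + destination,
        "reply-to:" + reply_to,
        "content-length:%d" % len(body),
    ]
    return ("\n".join(headers) + "\n\n" + body + "\x00").encode("utf-8")

request = {
    "jsonrpc": "2.0",
    "method": "Host.getStats",
    "params": {},
    "id": str(uuid.uuid4()),  # the same id is echoed back in the response
}
frame = stomp_send_frame("jms.topic.vdsm_requests",
                         "jms.topic.vdsm_responses", request)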
I'm using the latest node ISO
(ovirt-node-ng-installer-ovirt-4.1-2017050204.iso). After installing it on a
system, I use the Cockpit web UI to install the 4.1 hosted engine
(successfully), resulting in engine version 4.1.1.8-1.el7.centos.
The node is running 4.1.1.1.
After installing the engine and accessing the Engine Web Admin, I'm
encountering the following issues:
   - I don't see any storage domains in the console (though the engine is
   installed on NFS).
   - Attempts to import the NFS storage domain that the engine resides in
   (so other VMs can also reside in it) fail because the host ID cannot be
   found.
      - A second attempt to add (import) the storage domain also fails,
      though this time it states that the storage connection already exists
      (yet it isn't listed among the storage domains).
   - The hosted engine doesn't show up in the console, though Bug 1269768
   states this was supposed to be corrected in 3.6.1.
   - When I install subsequent hosts (from the same ISO) and use the engine
   to bring them into the cluster, the additional nodes aren't able to run
   the hosted engine (judging by the missing crown icon). This prevents the
   original node from entering maintenance mode, since the engine (which
   can't be seen) can't migrate to a different node.
I'm not sure where to begin reviewing/troubleshooting these items. Any
guidance would be greatly appreciated.
                        Hi,
I'm running ovirt-4.0.6 on EL7.3.
I've got a stuck VM (Windows 10) that I'm trying to restart.
Unfortunately it's "up" enough that oVirt attempts a soft shutdown, but not
up enough to complete the task.  If I log in to the oVirt admin portal as
admin and select shutdown, I see the following engine log messages before I
get the event "Shutdown of VM <vm-name> failed.":
2017-05-04 11:23:27,774 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-14) [3f629ae6] Correlation ID: 3f629ae6, Job ID: 1f5275cd-d2fe-44d5-a1cf-d1920e3f3378, Call Stack: null, Custom Event ID: -1, Message: VM shutdown initiated by admin@internal-authz on VM win10-64 (Host: ovirt-0).
2017-05-04 11:28:38,237 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler10) [4cf2f258] VM 'c9830039-9bf9-4c6a-8eae-da9c24aad899'(win10-64) moved from 'PoweringDown' --> 'Up'
2017-05-04 11:28:38,287 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler10) [4cf2f258] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Shutdown of VM win10-64 failed.
So, how can I get out of this situation?  How can I force a power-off of
this VM so I can reboot it?
Thanks,
-derek
-- 
       Derek Atkins                 617-623-3745
       derek(a)ihtfp.com             www.ihtfp.com
       Computer and Internet Security Consultant
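
For readers of the archive hitting the same state: besides the admin portal's
"Power Off" context-menu action (a hard stop, unlike "Shutdown"), the same
force-stop is exposed through the REST API and the Python SDK v4. A minimal
sketch, with placeholder engine URL and credentials:

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',
    password='...',   # placeholder
    insecure=True,    # in a real setup, pass ca_file=... instead
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=win10-64')[0]
# stop() maps to POST /vms/{id}/stop: a hard power-off, not an ACPI shutdown.
vms_service.vm_service(vm.id).stop()
connection.close()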
collectd-disk and collectd-write_http require an older version of collectd
than the repos offer (7.2.0-2 instead of the 7.2.1-2 the repo provides).
Will there be updated versions of collectd-disk and collectd-write_http?
To reproduce, try: # yum -y install ovirt-engine
on a fully updated CentOS 7.3 1704
https://buildlogs.centos.org/rolling/7/isos/x86_64/CentOS-7-x86_64-Minimal-…
___________________________________________________________
Oliver Dietzel
Re: [ovirt-users] New oVirt Node 4.0.6.1 network problem with bond and many vlans - Network.service timeout failed
by Sandro Bonazzola 08 May '17
On 04 May 2017 at 20:02, "Rogério Ceni Coelho" <rogeriocenicoelho(a)gmail.com>
wrote:
Hi oVirt admins,
Yesterday I installed a new node, 4.0.6.1,
I would suggest going with 4.1.1 nodes, since 4.0 is not supported anymore.
and after configuring a network bond (1) + VLANs (about 50), network.service
failed with a timeout, as shown below. All my old oVirt nodes, which work
fine, run 4.0.5 and have the same 5-minute timeout in systemd.
When I try to run VMs on this new server, some networks are OK and some are
not. When I installed oVirt Node 4.0.3 on the same server, this problem did
not occur.
How can I work around or change the timeout for network.service?
Also, is there any way to update oVirt Node to 4.0.5 from the 4.0.3 install
ISO?
[root@prd-rbs-ovirt-kvm21-poa ~]# systemctl status network.service
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network; bad; vendor preset: disabled)
   Active: failed (Result: timeout) since Thu 2017-05-04 11:29:43 BRT; 3h
12min ago
     Docs: man:systemd-sysv-generator(8)
  Process: 7574 ExecStart=/etc/rc.d/init.d/network start (code=killed,
signal=TERM)
May 04 11:29:12 prd-rbs-ovirt-kvm21-poa.rbs.com.br network[7574]: Bringing
up interface o_p_s_a_lb_758:  [  OK  ]
May 04 11:29:18 prd-rbs-ovirt-kvm21-poa.rbs.com.br network[7574]: Bringing
up interface o_p_sites_a_755:  [  OK  ]
May 04 11:29:23 prd-rbs-ovirt-kvm21-poa.rbs.com.br network[7574]: Bringing
up interface o_prod_app_757:  [  OK  ]
May 04 11:29:28 prd-rbs-ovirt-kvm21-poa.rbs.com.br network[7574]: Bringing
up interface o_prod_db_756:  [  OK  ]
May 04 11:29:38 prd-rbs-ovirt-kvm21-poa.rbs.com.br network[7574]: Bringing
up interface ovirtmgmt:  [  OK  ]
*May 04 11:29:43 prd-rbs-ovirt-kvm21-poa.rbs.com.br systemd[1]: network.service
start operation timed out. Terminating.*
May 04 11:29:43 prd-rbs-ovirt-kvm21-poa.rbs.com.br systemd[1]: Failed to
start LSB: Bring up/down networking.
May 04 11:29:43 prd-rbs-ovirt-kvm21-poa.rbs.com.br systemd[1]: Unit
network.service entered failed state.
May 04 11:29:43 prd-rbs-ovirt-kvm21-poa.rbs.com.br systemd[1]:
network.service failed.
May 04 11:29:43 prd-rbs-ovirt-kvm21-poa.rbs.com.br network[7574]: Bringing
up interface paas_1305:
[root@prd-rbs-ovirt-kvm21-poa ~]#
[root@prd-rbs-ovirt-kvm21-poa ~]# systemctl restart network.service
Job for network.service failed because a timeout was exceeded. See
"systemctl status network.service" and "journalctl -xe" for details.
[root@prd-rbs-ovirt-kvm21-poa ~]# systemctl status network.service
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network; bad; vendor preset: disabled)
   Active: failed (Result: timeout) since Thu 2017-05-04 14:55:34 BRT; 4min
46s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 40810 ExecStart=/etc/rc.d/init.d/network start (code=killed,
signal=TERM)
May 04 14:55:21 prd-rbs-ovirt-kvm21-poa.rbs.com.br network[40810]: [  OK  ]
May 04 14:55:23 prd-rbs-ovirt-kvm21-poa.rbs.com.br network[40810]: Bringing
up interface o_hlg_db_766:  RTNETLINK answers: File exists
May 04 14:55:26 prd-rbs-ovirt-kvm21-poa.rbs.com.br network[40810]: [  OK  ]
May 04 14:55:28 prd-rbs-ovirt-kvm21-poa.rbs.com.br network[40810]: Bringing
up interface o_p_s_a_lb_758:  RTNETLINK answers: File exists
May 04 14:55:31 prd-rbs-ovirt-kvm21-poa.rbs.com.br network[40810]: [  OK  ]
May 04 14:55:33 prd-rbs-ovirt-kvm21-poa.rbs.com.br network[40810]: Bringing
up interface o_p_sites_a_755:  RTNETLINK answers: File exists
May 04 14:55:34 prd-rbs-ovirt-kvm21-poa.rbs.com.br systemd[1]:
network.service start operation timed out. Terminating.
May 04 14:55:34 prd-rbs-ovirt-kvm21-poa.rbs.com.br systemd[1]: Failed to
start LSB: Bring up/down networking.
May 04 14:55:34 prd-rbs-ovirt-kvm21-poa.rbs.com.br systemd[1]: Unit
network.service entered failed state.
May 04 14:55:34 prd-rbs-ovirt-kvm21-poa.rbs.com.br systemd[1]:
network.service failed.
[root@prd-rbs-ovirt-kvm21-poa ~]#
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
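
On raising the timeout itself: network.service here is generated from the SysV
initscript (note the man:systemd-sysv-generator reference in the status output),
and systemd start timeouts can usually be extended per unit with a drop-in. A
sketch, assuming the stock initscripts unit; the file path and value below are
illustrative, not a tested recommendation:

# /etc/systemd/system/network.service.d/timeout.conf
[Service]
TimeoutStartSec=15min

followed by "systemctl daemon-reload" before restarting the service. Whether
bringing up ~50 VLANs serially should take this long in the first place is a
separate question worth investigating.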
Hi, while trying to update a VM disk, a failure was returned (forcing me to
add a new disk instead).
Any advice on how to resolve this error?
Thanks
Installation info:
ovirt-release35-006-1.noarch
libgovirt-0.3.3-1.el7_2.1.x86_64
vdsm-4.16.30-0.el7.centos.x86_64
vdsm-xmlrpc-4.16.30-0.el7.centos.noarch
vdsm-yajsonrpc-4.16.30-0.el7.centos.noarch
vdsm-jsonrpc-4.16.30-0.el7.centos.noarch
vdsm-python-zombiereaper-4.16.30-0.el7.centos.noarch
vdsm-python-4.16.30-0.el7.centos.noarch
vdsm-cli-4.16.30-0.el7.centos.noarch
qemu-kvm-ev-2.3.0-29.1.el7.x86_64
qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64
qemu-kvm-tools-ev-2.3.0-29.1.el7.x86_64
libvirt-client-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-driver-storage-1.2.17-13.el7_2.3.x86_64
libvirt-python-1.2.17-2.el7.x86_64
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.3.x86_64
libvirt-lock-sanlock-1.2.17-13.el7_2.3.x86_64
libvirt-glib-0.1.9-1.el7.x86_64
libvirt-daemon-driver-network-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-driver-lxc-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-driver-interface-1.2.17-13.el7_2.3.x86_64
libvirt-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-config-network-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-driver-secret-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-kvm-1.2.17-13.el7_2.3.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.3.x86_64
-------- engine.log
2017-05-02 09:48:26,505 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (ajp--127.0.0.1-8702-6) [c3d7125] Lock Acquired to object EngineLock [exclusiveLocks= key: 25c0bcc0-0d3d-4ddc-b103-24ed2ac5aa05 value: VM_DISK_BOOT
key: c5fb9190-d059-4d9b-af23-07618ff660ce value: DISK
, sharedLocks= key: 25c0bcc0-0d3d-4ddc-b103-24ed2ac5aa05 value: VM
]
2017-05-02 09:48:26,515 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (ajp--127.0.0.1-8702-6) [c3d7125] Running command: UpdateVmDiskCommand internal: false. Entities affected :  ID: c5fb9190-d059-4d9b-af23-07618ff660ce Type: DiskAction group EDIT_DISK_PROPERTIES with role type USER
2017-05-02 09:48:26,562 INFO  [org.ovirt.engine.core.bll.ExtendImageSizeCommand] (ajp--127.0.0.1-8702-6) [ae718d8] Running command: ExtendImageSizeCommand internal: true. Entities affected :  ID: c5fb9190-d059-4d9b-af23-07618ff660ce Type: DiskAction group EDIT_DISK_PROPERTIES with role type USER
2017-05-02 09:48:26,565 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.ExtendImageSizeVDSCommand] (ajp--127.0.0.1-8702-6) [ae718d8] START, ExtendImageSizeVDSCommand( storagePoolId = 715d1ba2-eabe-48db-9aea-c28c30359808, ignoreFailoverLimit = false), log id: 52aac743
2017-05-02 09:48:26,604 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.ExtendImageSizeVDSCommand] (ajp--127.0.0.1-8702-6) [ae718d8] FINISH, ExtendImageSizeVDSCommand, log id: 52aac743
2017-05-02 09:48:26,650 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (ajp--127.0.0.1-8702-6) [ae718d8] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command cb7958d9-6eae-44a9-891a-7fe088a79df8
2017-05-02 09:48:26,651 INFO  [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp--127.0.0.1-8702-6) [ae718d8] CommandMultiAsyncTasks::AttachTask: Attaching task 769a4b18-182b-4048-bb34-a276a55ccbff to command cb7958d9-6eae-44a9-891a-7fe088a79df8.
2017-05-02 09:48:26,661 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (ajp--127.0.0.1-8702-6) [ae718d8] Adding task 769a4b18-182b-4048-bb34-a276a55ccbff (Parent Command UpdateVmDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet..
2017-05-02 09:48:26,673 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-6) [ae718d8] Correlation ID: c3d7125, Call Stack: null, Custom Event ID: -1, Message: VM sysinfo-73 sysinfo-73_Disk3 disk was updated by admin@internal.
2017-05-02 09:48:26,674 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (ajp--127.0.0.1-8702-6) [ae718d8] BaseAsyncTask::startPollingTask: Starting to poll task 769a4b18-182b-4048-bb34-a276a55ccbff.
2017-05-02 09:48:28,430 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (DefaultQuartzScheduler_Worker-48) [36cd2f7] Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
2017-05-02 09:48:28,435 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-48) [36cd2f7] Failed in HSMGetAllTasksStatusesVDS method
2017-05-02 09:48:28,436 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler_Worker-48) [36cd2f7] SPMAsyncTask::PollTask: Polling task 769a4b18-182b-4048-bb34-a276a55ccbff (Parent Command UpdateVmDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status finished, result 'cleanSuccess'.
2017-05-02 09:48:28,446 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler_Worker-48) [36cd2f7] BaseAsyncTask::logEndTaskFailure: Task 769a4b18-182b-4048-bb34-a276a55ccbff (Parent Command UpdateVmDisk, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with failure:
-- Result: cleanSuccess
-- Message: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Could not acquire resource. Probably resource factory threw an exception.: (), code = 100,
-- Exception: VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = Could not acquire resource. Probably resource factory threw an exception.: (), code = 100
2017-05-02 09:48:28,448 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler_Worker-48) [36cd2f7] CommandAsyncTask::endActionIfNecessary: All tasks of command cb7958d9-6eae-44a9-891a-7fe088a79df8 has ended -> executing endAction
2017-05-02 09:48:28,449 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler_Worker-48) [36cd2f7] CommandAsyncTask::endAction: Ending action for 1 tasks (command ID: cb7958d9-6eae-44a9-891a-7fe088a79df8): calling endAction.
2017-05-02 09:48:28,450 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-6) [36cd2f7] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction UpdateVmDisk, executionIndex: 0
2017-05-02 09:48:28,501 ERROR [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (org.ovirt.thread.pool-8-thread-6) [c3d7125] Ending command with failure: org.ovirt.engine.core.bll.UpdateVmDiskCommand
2017-05-02 09:48:28,502 ERROR [org.ovirt.engine.core.bll.ExtendImageSizeCommand] (org.ovirt.thread.pool-8-thread-6) Ending command with failure: org.ovirt.engine.core.bll.ExtendImageSizeCommand
2017-05-02 09:48:28,504 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (org.ovirt.thread.pool-8-thread-6) START, GetImageInfoVDSCommand( storagePoolId = 715d1ba2-eabe-48db-9aea-c28c30359808, ignoreFailoverLimit = false, storageDomainId = 6a386652-629d-4045-835b-21d2f5c104aa, imageGroupId = c5fb9190-d059-4d9b-af23-07618ff660ce, imageId = 562f5b50-1d1a-457c-8b5f-cd8020611550), log id: 674a2581
2017-05-02 09:48:28,511 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (org.ovirt.thread.pool-8-thread-6) IrsBroker::getImageInfo::Failed getting image info imageId = 562f5b50-1d1a-457c-8b5f-cd8020611550 does not exist on domainName = VM01 , domainId = 6a386652-629d-4045-835b-21d2f5c104aa, error code: ImagePathError, message: Image path does not exist or cannot be accessed/created: (u'/rhev/data-center/mnt/blockSD/6a386652-629d-4045-835b-21d2f5c104aa/images/c5fb9190-d059-4d9b-af23-07618ff660ce',)
2017-05-02 09:48:28,512 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (org.ovirt.thread.pool-8-thread-6) Command org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand return value OneImageInfoReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=254, mMessage=Image path does not exist or cannot be accessed/created: (u'/rhev/data-center/mnt/blockSD/6a386652-629d-4045-835b-21d2f5c104aa/images/c5fb9190-d059-4d9b-af23-07618ff660ce',)]]
2017-05-02 09:48:28,512 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (org.ovirt.thread.pool-8-thread-6) FINISH, GetImageInfoVDSCommand, log id: 674a2581
2017-05-02 09:48:28,533 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-6) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Failed to extend size of the disk 'sysinfo-73_Disk3' to 20 GB, User: admin@internal.
2017-05-02 09:48:28,534 INFO  [org.ovirt.engine.core.bll.UpdateVmDiskCommand] (org.ovirt.thread.pool-8-thread-6) Lock freed to object EngineLock [exclusiveLocks= key: 25c0bcc0-0d3d-4ddc-b103-24ed2ac5aa05 value: VM_DISK_BOOT
key: c5fb9190-d059-4d9b-af23-07618ff660ce value: DISK
, sharedLocks= key: 25c0bcc0-0d3d-4ddc-b103-24ed2ac5aa05 value: VM
]
2017-05-02 09:48:28,541 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-6) Correlation ID: c3d7125, Call Stack: null, Custom Event ID: -1, Message: VM sysinfo-73 sysinfo-73_Disk3 disk was updated by admin@internal.
2017-05-02 09:48:28,541 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-6) CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type UpdateVmDisk completed, handling the result.
2017-05-02 09:48:28,541 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-6) CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type UpdateVmDisk succeeded, clearing tasks.
Hi,
I am trying to set up oVirt as shown below.
DataCenter1 ---------Cluster1 -----host1 ---- NFS1-Data Domain
                      |
                      |-----Cluster2 ------host2-----NFS2-Data Domain
As you can see, in the above case I am trying to attach both NFS partitions
to DataCenter1, and I get varying behaviour:
1> Once one data domain is attached to the data center, the other fails to
attach.
(e.g., say NFS1-Data Domain is attached to DataCenter1; after that,
NFS2-Data Domain fails to attach to DataCenter1.)
2> And sometimes both data domains attach successfully, but after that
host2 does not come up, with the message "host1 cannot access NFS2-Data
Domain and hence moving host1 to Non-Operational".
host1 firewall rules are:
--------------------------------------------------------------------------------------------------
# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere             state
RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:54321
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:54322
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:sunrpc
ACCEPT     udp  --  anywhere             anywhere             udp dpt:sunrpc
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
ACCEPT     udp  --  anywhere             anywhere             udp dpt:snmp
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:websm
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:16514
ACCEPT     tcp  --  anywhere             anywhere             multiport
dports rockwell-csp2
ACCEPT     tcp  --  anywhere             anywhere             multiport
dports rfb:6923
ACCEPT     tcp  --  anywhere             anywhere             multiport
dports 49152:49216
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:sunrpc
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:nfs
ACCEPT     udp  --  anywhere             anywhere             udp dpt:nfs
ACCEPT     udp  --  anywhere             anywhere             udp dpt:sunrpc
ACCEPT     udp  --  anywhere             anywhere             udp dpt:892
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:892
ACCEPT     tcp  --  anywhere             anywhere             tcp
dpt:ospf-lite
REJECT     all  --  anywhere             anywhere             reject-with
icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
REJECT     all  --  anywhere             anywhere             PHYSDEV match
! --physdev-is-bridged reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
--------------------------------------------------------------------------------------------------
Please help me understand this issue.
1> Does oVirt support attaching multiple storage data domains, served
from hosts in different clusters, to one data center? (See the sketch
after these questions.)
2> Are those firewall rules correct or wrong?
3> Can host1 access a storage domain created on another cluster, say on
the host2 machine? (NFS)
Thanks,
~Rohit
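
For reference, attaching an existing storage domain to a data center with
the Python SDK4 looks roughly like the sketch below; the engine URL,
credentials, and the search names are placeholders, not values from this
setup.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL and credentials -- replace with real values.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)
system_service = connection.system_service()

# Look up the data center and the storage domain by (assumed) names.
dc = system_service.data_centers_service().list(search='name=DataCenter1')[0]
sd = system_service.storage_domains_service().list(search='name=NFS2')[0]

# Attach the domain to the *data center*, not to a cluster: a data
# domain is shared by the whole data center, so the add() is expected
# to fail unless every host in DataCenter1 can mount the NFS export.
attached_sds_service = system_service.data_centers_service() \
    .data_center_service(dc.id) \
    .storage_domains_service()
attached_sds_service.add(types.StorageDomain(id=sd.id))

connection.close()

The comment above is also the short answer to questions 1 and 3: storage
domains are a data-center-level resource, so both host1 and host2 must be
able to reach both NFS exports, regardless of which cluster they sit in.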
                    
                  
                  
                          
                            
                    
                    
Hi,
I am having an issue creating a template from a VM.
I am getting the following errors:
engine.log 
2017-05-02 09:40:10,059-05 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(default task-24) [2ef2fce] EVENT_ID: USER_ADD_VM_TEMPLATE(48), Correlation 
ID: 176fdfa7-0467-48f5-9ecc-901e86768c28, Job ID: 
fed9a848-1971-495a-a391-be1fa4a67908, Call Stack: null, Custom Event ID: -1, 
Message: Creation of Template test from VM Windows-10-Template was initiated 
by admin@internal-authz.
2017-05-02 09:48:43,059-05 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(org.ovirt.thread.pool-7-thread-20) [] EVENT_ID: 
USER_ADD_VM_TEMPLATE_FINISHED_FAILURE(52), Correlation ID: 
176fdfa7-0467-48f5-9ecc-901e86768c28, Job ID: 
fed9a848-1971-495a-a391-be1fa4a67908, Call Stack: null, Custom Event ID: -1, 
Message: Failed to complete creation of Template test from VM 
Windows-10-Template.
Events Log
ID 10803 - VDSM command DeleteImageGroupVDS failed: Image does not exist in 
domain: u'image=6aa525ad-e7f1-432b-959a-2223f7e77083, 
domain=e371d380-7194-4950-b901-5f2aed5dfb35'
ID 10802 - VDSM vm-host-colo-2 command HSMGetAllTasksStatusesVDS failed: low 
level Image copy failed
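
For comparison, the operation failing here is the plain "create template
from VM" call; with the Python SDK4 it looks roughly like this (a minimal
sketch: the engine URL and credentials are placeholders, while the VM and
template names are taken from the log above).

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL and credentials -- replace with real values.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Request creation of template 'test' from the named VM; the engine then
# runs the same image copy that fails in the events log above.
templates_service = connection.system_service().templates_service()
templates_service.add(
    template=types.Template(
        name='test',
        vm=types.Vm(name='Windows-10-Template'),
    ),
)

connection.close()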
                    
                  
                  
                          
                            
                    
                    
Hi,
We have two hypervisors, each with a dual-port 10GbE NIC; the cards
support iSCSI offload.
Does oVirt 4.1 support this?
If yes, how can someone use it in a hosted-engine deployment? The engine
VM will live on the SAN targeted by these cards.
Regards.
                    
                  
                  
                          
                            
New oVirt Node install on oVirt Cluster 4.0.5 - How can I install oVirt Node with same 4.0.5 version ???
by Rogério Ceni Coelho 07 May '17
                    
Hi oVirt Troopers,
I have two segregated oVirt clusters running 4.0.5 (DEV and PROD
environments).
Now I need to install a new oVirt Node server (Dell PowerEdge M620), but
I see that no 4.0.5 ISO exists at
http://resources.ovirt.org/pub/ovirt-4.0/iso/ovirt-node-ng-installer/ ,
only 4.0.3 and 4.0.6.
How can I install this new server at the same version as the other 20?
Is there a way to install 4.0.3 and update it only to 4.0.5?
Thanks in advance.
                    
                  
                  
                          
                            
                    
I am having issues installing oVirt on a CentOS 7 server. After logging in
and switching to root with sudo, I run the following series of commands:
# yum -y update
# yum install
http://plain.resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
# yum -y install ovirt-engine
I then attempted to use the following command and received a “command not
found” error.
                    
                  
                  
                          
                            
                    
From: Kai Wagner <kwagner(a)suse.com>
To: users(a)ovirt.org
Subject: Change DNS Server
Hi,
where can I change the DNS server for my hosts? I know the DNS entry is
part of the ifcfg-* file, but I tried to change it there and after a
reboot the old entry was restored from somewhere.
Kai
-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
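
In case a concrete example helps: on RHEL/CentOS the resolver entries in a
network-scripts ifcfg file normally look like the snippet below (the
interface name and addresses are made up, not taken from this setup). Note
that VDSM persists and restores the host network configuration at boot,
which would explain a manual edit being reverted, so the change may need
to be made through the engine rather than directly in the file.

# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt  (assumed interface name)
DNS1=192.0.2.53   # first nameserver written to /etc/resolv.conf
DNS2=192.0.2.54   # optional second nameserver
PEERDNS=no        # keep DHCP from overwriting resolv.conf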
                    
                  
                  
                          
                            