Thank you Roy!
BTW - I went to the IRC channel - it was just me, a bot and one other
person who did not respond.
I'll keep an eye on the chat in the future.
***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
On 07/13/2015 01:36 AM, Mark Steele wrote:
Roy - Success!
I was able to restart vdsmd
'service vdsmd restart'
From there, I was able to issue the destroy command:
'vdsClient -s 0 destroy 41703d5c-6cdb-42b4-93df-d78be2776e2b'
Once that was done, I was able to REMOVE the VM from the GUI.
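(Recap of the sequence that worked, run as root on the host carrying the stray VM; substitute the UUID that your own 'vdsClient -s 0 list' prints:)

    service vdsmd restart                                       # recover the unresponsive vdsm daemon
    vdsClient -s 0 list                                         # the stuck guest's UUID heads its block of output
    vdsClient -s 0 destroy 41703d5c-6cdb-42b4-93df-d78be2776e2b # clear vdsm's record of the guest
    # then remove the now-stopped VM from the webadmin GUI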
Thank you for all your help AND your patience with a noob!
You're welcome. And by the way, next time also try the #ovirt IRC channel on
oftc.net - you might get a quicker response.
***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
On Sun, Jul 12, 2015 at 3:16 PM, Mark Steele <msteele(a)telvue.com> wrote:
> No joy - the shutdown appears to start, but eventually the VM shows as
> running again. There is NO process that shows up with the ps command.
>
> I'm not sure what to do next
>
>
>
> ***
> *Mark Steele*
> CIO / VP Technical Operations | TelVue Corporation
> TelVue - We Share Your Vision
> 800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
> twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
>
> On Sun, Jul 12, 2015 at 10:13 AM, Artyom Lukianov <alukiano(a)redhat.com> wrote:
>
>> Also, please provide the vdsm log from the host (/var/log/vdsm/vdsm.log)
>> for further investigation.
>> Thanks
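(If the full log is too large to attach, one hedged way to pull just the relevant part - assuming the VM UUID quoted elsewhere in this thread - is:)

    grep 41703d5c-6cdb-42b4-93df-d78be2776e2b /var/log/vdsm/vdsm.log | tail -n 200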
>>
>> ----- Original Message -----
>> From: "Mark Steele" <
<msteele@telvue.com>msteele(a)telvue.com>
>> To: "Roy Golan" < <rgolan@redhat.com>rgolan(a)redhat.com>
>> Cc: "Artyom Lukianov" <
<alukiano@redhat.com>alukiano(a)redhat.com>,
>> <users@ovirt.org>users(a)ovirt.org
>> Sent: Sunday, July 12, 2015 4:21:34 PM
>> Subject: Re: [ovirt-users] This VM is not managed by the engine
>>
>> [root@hv-02 etc]# vdsClient -s 0 destroy
>> 41703d5c-6cdb-42b4-93df-d78be2776e2b
>>
>> Unexpected exception
>>
>> Not sure I'm getting any closer :-)
>>
>>
>>
>>
>> ***
>> *Mark Steele*
>> CIO / VP Technical Operations | TelVue Corporation
>> TelVue - We Share Your Vision
>> 800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
>> twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
>>
>> On Sun, Jul 12, 2015 at 9:13 AM, Mark Steele <msteele(a)telvue.com>
>> wrote:
>>
>> > OK - I think I'm getting closer - now that I'm on the correct box.
>> >
>> > Here is the output of the vdsClient command - is the device ID the
>> > first line?
>> >
>> > [root@hv-02 etc]# vdsClient -s 0 list
>> >
>> > 41703d5c-6cdb-42b4-93df-d78be2776e2b
>> > Status = Up
>> > acpiEnable = true
>> > emulatedMachine = rhel6.5.0
>> > afterMigrationStatus =
>> > pid = 27304
>> > memGuaranteedSize = 2048
>> > transparentHugePages = true
>> > displaySecurePort = 5902
>> > spiceSslCipherSuite = DEFAULT
>> > cpuType = SandyBridge
>> > smp = 2
>> > numaTune = {'nodeset': '0,1', 'mode': 'interleave'}
>> > custom =
>> > {'device_86f1aa5a-aa3f-4e47-b546-aafcc86fcbb6device_ebd4c73d-12c4-435e-8cc5-f180d8f20a72':
>> > 'VmDevice {vmId=41703d5c-6cdb-42b4-93df-d78be2776e2b,
>> > deviceId=ebd4c73d-12c4-435e-8cc5-f180d8f20a72, device=unix, type=CHANNEL,
>> > bootOrder=0, specParams={}, address={bus=0, controller=0,
>> > type=virtio-serial, port=2}, managed=false, plugged=true, readOnly=false,
>> > deviceAlias=channel1, customProperties={}, snapshotId=null}',
>> > 'device_86f1aa5a-aa3f-4e47-b546-aafcc86fcbb6device_ebd4c73d-12c4-435e-8cc5-f180d8f20a72device_ffd2796f-7644-4008-b920-5f0970b0ef0e':
>> > 'VmDevice {vmId=41703d5c-6cdb-42b4-93df-d78be2776e2b,
>> > deviceId=ffd2796f-7644-4008-b920-5f0970b0ef0e, device=unix, type=CHANNEL,
>> > bootOrder=0, specParams={}, address={bus=0, controller=0,
>> > type=virtio-serial, port=1}, managed=false, plugged=true, readOnly=false,
>> > deviceAlias=channel0, customProperties={}, snapshotId=null}',
>> > 'device_86f1aa5a-aa3f-4e47-b546-aafcc86fcbb6': 'VmDevice
>> > {vmId=41703d5c-6cdb-42b4-93df-d78be2776e2b,
>> > deviceId=86f1aa5a-aa3f-4e47-b546-aafcc86fcbb6, device=ide, type=CONTROLLER,
>> > bootOrder=0, specParams={}, address={slot=0x01, bus=0x00, domain=0x0000,
>> > type=pci, function=0x1}, managed=false, plugged=true, readOnly=false,
>> > deviceAlias=ide0, customProperties={}, snapshotId=null}',
>> > 'device_86f1aa5a-aa3f-4e47-b546-aafcc86fcbb6device_ebd4c73d-12c4-435e-8cc5-f180d8f20a72device_ffd2796f-7644-4008-b920-5f0970b0ef0edevice_6693d023-9c1f-433c-870e-e9771be8474b':
>> > 'VmDevice {vmId=41703d5c-6cdb-42b4-93df-d78be2776e2b,
>> > deviceId=6693d023-9c1f-433c-870e-e9771be8474b, device=spicevmc, type=CHANNEL,
>> > bootOrder=0, specParams={}, address={bus=0, controller=0,
>> > type=virtio-serial, port=3}, managed=false, plugged=true, readOnly=false,
>> > deviceAlias=channel2, customProperties={}, snapshotId=null}'}
>> > vmType = kvm
>> > memSize = 2048
>> > smpCoresPerSocket = 1
>> > vmName = connect-turbo-stage-03
>> > nice = 0
>> > bootMenuEnable = false
>> > copyPasteEnable = true
>> > displayIp = 10.1.90.161
>> > displayPort = -1
>> > smartcardEnable = false
>> > clientIp =
>> > fileTransferEnable = true
>> > nicModel = rtl8139,pv
>> > keyboardLayout = en-us
>> > kvmEnable = true
>> > pitReinjection = false
>> > displayNetwork = ovirtmgmt
>> > devices = [{'target': 2097152, 'specParams': {'model': 'none'}, 'alias': 'balloon0', 'deviceType': 'balloon', 'device': 'memballoon', 'type': 'balloon'},
>> > {'device': 'unix', 'alias': 'channel0', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}, 'deviceType': 'channel', 'type': 'channel'},
>> > {'device': 'unix', 'alias': 'channel1', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}, 'deviceType': 'channel', 'type': 'channel'},
>> > {'device': 'spicevmc', 'alias': 'channel2', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}, 'deviceType': 'channel', 'type': 'channel'},
>> > {'index': '0', 'alias': 'scsi0', 'specParams': {}, 'deviceType': 'controller', 'deviceId': '88db8cb9-0960-4797-bd41-1694bf14b8a9', 'address': {'slot': '0x04', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'},
>> > {'alias': 'virtio-serial0', 'specParams': {}, 'deviceType': 'controller', 'deviceId': '4bb9c112-e027-4e7d-8b1c-32f99c7040ee', 'address': {'slot': '0x05', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'virtio-serial', 'type': 'controller'},
>> > {'device': 'usb', 'alias': 'usb0', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x2'}, 'deviceType': 'controller', 'type': 'controller'},
>> > {'device': 'ide', 'alias': 'ide0', 'address': {'slot': '0x01', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x1'}, 'deviceType': 'controller', 'type': 'controller'},
>> > {'alias': 'video0', 'specParams': {'vram': '32768', 'ram': '65536', 'heads': '1'}, 'deviceType': 'video', 'deviceId': '23634541-3b7e-460d-9580-39504392ba36', 'address': {'slot': '0x02', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'qxl', 'type': 'video'},
>> > {'device': 'spice', 'specParams': {'displayNetwork': 'ovirtmgmt', 'spiceSecureChannels': 'smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard', 'keyMap': 'en-us', 'displayIp': '10.1.90.161', 'copyPasteEnable': 'true'}, 'deviceType': 'graphics', 'tlsPort': '5902', 'type': 'graphics'},
>> > {'nicModel': 'pv', 'macAddr': '00:01:a4:a2:b4:30', 'linkActive': True, 'network': 'ovirtmgmt', 'alias': 'net0', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {'inbound': {}, 'outbound': {}}, 'deviceType': 'interface', 'deviceId': '63651662-2ddf-4611-b988-1a58d05982f6', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet5'},
>> > {'nicModel': 'pv', 'macAddr': '00:01:a4:a2:b4:31', 'linkActive': True, 'network': 'storage', 'alias': 'net1', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {'inbound': {}, 'outbound': {}}, 'deviceType': 'interface', 'deviceId': 'b112d9c6-5144-4b67-912b-dcc27aabdae9', 'address': {'slot': '0x07', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface', 'name': 'vnet6'},
>> > {'index': '3', 'iface': 'ide', 'name': 'hdd', 'alias': 'ide0-1-1', 'specParams': {'vmPayload': {'volId': 'config-2', 'file': {'openstack/latest/meta_data.json': 'ewogICJsYXVuY2hfaW5kZXgiIDogIjAiLAogICJhdmFpbGFiaWxpdHlfem9uZSIgOiAibm92YSIs\nCiAgIm5hbWUiIDogImNvbm5lY3QtdHVyYm8tc3RhZ2UtMDMiLAogICJob3N0bmFtZSIgOiAiY29u\nbmVjdC10dXJiby1zdGFnZS0wMyIsCiAgInV1aWQiIDogImJiNmIwMzdhLTZkY2ItNGZmZS04MjUw\nLTMwYjlkOWE0ZTlmZCIsCiAgIm1ldGEiIDogewogICAgImVzc2VudGlhbCIgOiAiZmFsc2UiLAog\nICAgInJvbGUiIDogInNlcnZlciIsCiAgICAiZHNtb2RlIiA6ICJsb2NhbCIKICB9Cn0=\n', 'openstack/latest/user_data': 'I2Nsb3VkLWNvbmZpZwpzc2hfcHdhdXRoOiB0cnVlCmRpc2FibGVfcm9vdDogMApvdXRwdXQ6CiAg\nYWxsOiAnPj4gL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJwpjaHBhc3N3ZDoKICBleHBp\ncmU6IGZhbHNlCnJ1bmNtZDoKLSAnc2VkIC1pICcnL15kYXRhc291cmNlX2xpc3Q6IC9kJycgL2V0\nYy9jbG91ZC9jbG91ZC5jZmc7IGVjaG8gJydkYXRhc291cmNlX2xpc3Q6CiAgWyJOb0Nsb3VkIiwg\nIkNvbmZpZ0RyaXZlIl0nJyA+PiAvZXRjL2Nsb3VkL2Nsb3VkLmNmZycK\n'}}}, 'readonly': 'True', 'deviceType': 'disk', 'deviceId': '6de890b2-6454-4377-9a71-bea2e46d50a8', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '1'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'},
>> > {'index': '2', 'iface': 'ide', 'name': 'hdd', 'alias': 'ide0-1-1', 'specParams': {'path': ''}, 'readonly': 'True', 'deviceType': 'disk', 'deviceId': '2763a41b-6576-4135-b349-4fe402c31246', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '1'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'},
>> > {'device': 'file', 'alias': 'ide0-1-0', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'deviceType': 'disk', 'type': 'disk'}]
>> > timeOffset = -891891
>> > maxVCpus = 16
>> > spiceSecureChannels =
>> > smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard
>> > display = qxl
>> > [root@hv-02 etc]#
>> >
>> >
>> > ***
>> > *Mark Steele*
>> > CIO / VP Technical Operations | TelVue Corporation
>> > TelVue - We Share Your Vision
>> > 800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
>> > twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
>> >
>> > On Sun, Jul 12, 2015 at 9:06 AM, Mark Steele <msteele(a)telvue.com> wrote:
>> >
>> >> I think I may not have given you all the information.
>> >>
>> >> I am not logging into the host - I am logging into the oVirt
>> >> management server.
>> >>
>> >> Let me try logging into the host and checking
>> >>
>> >>
>> >>
>> >> ***
>> >> *Mark Steele*
>> >> CIO / VP Technical Operations | TelVue Corporation
>> >> TelVue - We Share Your Vision
>> >> 800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
>> >> twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
>> >>
>> >> On Sun, Jul 12, 2015 at 9:03 AM, Roy Golan <rgolan(a)redhat.com> wrote:
>> >>
>> >>> On 07/12/2015 03:52 PM, Mark Steele wrote:
>> >>>
>> >>> That command returns nothing - I don't think qemu is running?
>> >>>
>> >>> Not sure how to start it on CentOS
>> >>>
>> >>> [root@ovirt-01 ~]# ps -ef | grep qemu
>> >>>
>> >>> root 23279 23130 0 08:51 pts/0 00:00:00 grep qemu
>> >>>
>> >>>
>> >>> That means you don't have a VM running on that host, so you can
>> >>> restart vdsm.
>> >>>
>> >>>
>> >>>
>> >>> ***
>> >>> *Mark Steele*
>> >>> CIO / VP Technical Operations | TelVue Corporation
>> >>> TelVue - We Share Your Vision
>> >>> 800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
>> >>> twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
>> >>>
>> >>> On Sun, Jul 12, 2015 at 8:45 AM, Roy Golan <rgolan(a)redhat.com> wrote:
>> >>>
>> >>>> On 07/12/2015 03:42 PM, Mark Steele wrote:
>> >>>>
>> >>>> I ran into the same issue - I am unable to completely go into
>> >>>> maintenance mode because this VM is still on it - it cannot be
>> >>>> migrated because it is not managed.
>> >>>>
>> >>>> Find your qemu process:
>> >>>> pgrep -an qemu-kvm | grep external
>> >>>>
>> >>>> and kill the process.
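(A minimal sketch of that kill sequence, using the ps/grep form that also appears later in this thread; the PID is whatever your own grep reports, not a value from this thread:)

    ps -ef | grep [q]emu-kvm        # find the stray guest and note its PID (second column)
    kill <PID>                      # ask the process to terminate
    kill -9 <PID>                   # only if it is still running after a grace period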
>> >>>>
>> >>>>
>> >>>>
>> >>>> [image: Inline image 1]
>> >>>>
>> >>>>
>> >>>> ***
>> >>>> *Mark Steele*
>> >>>> CIO / VP Technical Operations | TelVue Corporation
>> >>>> TelVue - We Share Your Vision
>> >>>> 800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
>> >>>> twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
>> >>>>
>> >>>> On Sun, Jul 12, 2015 at 8:25 AM, Roy Golan <rgolan(a)redhat.com> wrote:
>> >>>>
>> >>>>> On 07/12/2015 03:12 PM, Mark Steele wrote:
>> >>>>>
>> >>>>> I think I may have found the problem:
>> >>>>>
>> >>>>> [root@ovirt-01 pki]# ls -lah
>> >>>>> total 48K
>> >>>>> drwxr-xr-x. 10 root root 4.0K Nov 14 2014 .
>> >>>>> drwxr-xr-x. 118 root root 12K Jul 12 03:35 ..
>> >>>>> drwxr-xr-x. 6 root root 4.0K Nov 14 2014 CA
>> >>>>> drwxr-xr-x. 4 root root 4.0K Nov 14 2014 ca-trust
>> >>>>> drwxr-xr-x. 2 root root 4.0K Nov 14 2014 java
>> >>>>> drwxr-xr-x. 2 root root 4.0K Jul 12 07:03 nssdb
>> >>>>> drwxr-xr-x. 6 ovirt ovirt 4.0K Nov 19 2014 ovirt-engine
>> >>>>> drwxr-xr-x. 2 root root 4.0K Nov 14 2014 rpm-gpg
>> >>>>> drwx------. 2 root root 4.0K Nov 22 2013 rsyslog
>> >>>>> drwxr-xr-x. 5 root root 4.0K Nov 14 2014 tls
>> >>>>> [root@ovirt-01 pki]#
>> >>>>>
>> >>>>> There is no vsdm directory under /etc/pki
>> >>>>>
>> >>>>> This is an oVirt node. The software version shown in the oVirt
>> >>>>> management console is 3.5.0.1-1.el6.
>> >>>>>
>> >>>>> I'd like to add that I am not the person who originally installed
>> >>>>> this instance - and am not entirely familiar with how it is set up
>> >>>>> and installed - so I may ask ignorant questions from time to time.
>> >>>>>
>> >>>>>
>> >>>>> Not urgent, but at this point it looks like it would be good to
>> >>>>> reinstall this host from the webadmin. If you have the capacity,
>> >>>>> you can put the host into maintenance - that will migrate its VMs
>> >>>>> to other hosts - and then choose "Reinstall" once it is in
>> >>>>> "Maintenance".
>> >>>>>
>> >>>>>
>> >>>>> ***
>> >>>>> *Mark Steele*
>> >>>>> CIO / VP Technical Operations | TelVue Corporation
>> >>>>> TelVue - We Share Your Vision
>> >>>>> 800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
>> >>>>> twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
>> >>>>>
>> >>>>> On Sun, Jul 12, 2015 at 8:02 AM, Roy Golan <rgolan(a)redhat.com> wrote:
>> >>>>>
>> >>>>>> On 07/12/2015 02:07 PM, Mark Steele wrote:
>> >>>>>>
>> >>>>>> Thank you Roy,
>> >>>>>>
>> >>>>>> I installed the client but am getting a permissions error when I
>> >>>>>> run it:
>> >>>>>>
>> >>>>>> [root@ovirt-01 ~]# vdsClient -s 0 list
>> >>>>>> Traceback (most recent call last):
>> >>>>>>   File "/usr/share/vdsm/vdsClient.py", line 2678, in <module>
>> >>>>>>     serv.do_connect(hostPort)
>> >>>>>>   File "/usr/share/vdsm/vdsClient.py", line 136, in do_connect
>> >>>>>>     self.s = vdscli.connect(hostPort, self.useSSL, self.truststore)
>> >>>>>>   File "/usr/lib/python2.6/site-packages/vdsm/vdscli.py", line 110, in connect
>> >>>>>>     raise Exception("No permission to read file: %s" % f)
>> >>>>>> Exception: No permission to read file: /etc/pki/vdsm/keys/vdsmkey.pem
>> >>>>>>
>> >>>>>>
>> >>>>>> This should work; something isn't right with your setup.
>> >>>>>> Is your host an ovirt-node? It could be that you hit [1]. Let me
>> >>>>>> know what version you are running.
>> >>>>>>
>> >>>>>> Please try the same with the vdsm user; it should have permissions
>> >>>>>> to /etc/pki/vdsm.
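(A minimal way to try that, assuming sudo is available on the host; the key path is the one from the traceback above:)

    sudo -u vdsm ls -l /etc/pki/vdsm/keys/vdsmkey.pem   # can the vdsm user read the key?
    sudo -u vdsm vdsClient -s 0 list                    # retry the query as the vdsm user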
>> >>>>>>
>> >>>>>> [1] https://gerrit.ovirt.org/#/c/27779/
>> >>>>>>
>> >>>>>>
>> >>>>>> If I restart vdsm, will that cause any issues with running VMs on
>> >>>>>> this oVirt installation? This is our production environment.
>> >>>>>>
>> >>>>>>
>> >>>>>> Generally the answer is no, but let's avoid it if we can, since
>> >>>>>> this is a minor cosmetic issue, I guess.
>> >>>>>>
>> >>>>>> Just as an FYI - vdsm only reconnects to the socket exposed by
>> >>>>>> libvirt to control the VM lifecycle. VDSM doesn't mandate the
>> >>>>>> lifecycle of a VM unless the engine tells it to. Storage-wise there
>> >>>>>> could be some operations, but I'm almost sure they must not have an
>> >>>>>> effect on running VMs.
>> >>>>>>
>> >>>>>>
>> >>>>>> Thank you
>> >>>>>>
>> >>>>>>
>> >>>>>> ***
>> >>>>>> *Mark Steele*
>> >>>>>> CIO / VP Technical Operations | TelVue Corporation
>> >>>>>> TelVue - We Share Your Vision
>> >>>>>> 800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
>> >>>>>> twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
>> >>>>>>
>> >>>>>> On Sun, Jul 12, 2015 at 4:09 AM, Roy Golan <rgolan(a)redhat.com> wrote:
>> >>>>>>
>> >>>>>>> On 07/09/2015 06:34 PM, Mark Steele wrote:
>> >>>>>>>
>> >>>>>>> Yes,
>> >>>>>>>
>> >>>>>>> It is displayed in the engine:
>> >>>>>>>
>> >>>>>>> [image: Inline image 1]
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> The vdsm on that host reports it back to the engine. Since this
>> >>>>>>> VM isn't in the engine DB, it is considered EXTERNAL (thus the
>> >>>>>>> error 400 from the API).
>> >>>>>>>
>> >>>>>>> Do you know whether the qemu-kvm process is still running?
>> >>>>>>>
>> >>>>>>> If the process isn't running, then vdsm must clean its cache.
>> >>>>>>>
>> >>>>>>> Try:
>> >>>>>>>
>> >>>>>>> yum install vdsm-cli
>> >>>>>>> vdsClient -s 0 list
>> >>>>>>> vdsClient -s 0 destroy {vmId}
>> >>>>>>>
>> >>>>>>> Alternatively, a vdsm restart will work (if the qemu process isn't
>> >>>>>>> running).
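(Chained together, and hedged: this assumes vdsm-cli installs cleanly and uses the UUID quoted elsewhere in this thread, so substitute the one your own 'list' prints:)

    yum install -y vdsm-cli
    vdsClient -s 0 list                                         # the stuck VM's UUID heads its block of output
    vdsClient -s 0 destroy 41703d5c-6cdb-42b4-93df-d78be2776e2b
    service vdsmd restart                                       # fallback if destroy alone doesn't clear it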
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> The VM is not really running - the IP addresses being reported
>> >>>>>>> are from another VM that was recently removed. All attempts to
>> >>>>>>> control the VM have failed. It does not have any NICs or disks
>> >>>>>>> associated with it - so this seems to be a ghost in the machine.
>> >>>>>>> I attempted to unlock it using the unlock_entity.sh script - it
>> >>>>>>> reports success, however I still cannot do anything with the VM.
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> ***
>> >>>>>>> *Mark Steele*
>> >>>>>>> CIO / VP Technical Operations | TelVue Corporation
>> >>>>>>> TelVue - We Share Your Vision
>> >>>>>>> 800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
>> >>>>>>> twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
>> >>>>>>>
>> >>>>>>> On Thu, Jul 9, 2015 at 11:11 AM, Artyom Lukianov <alukiano(a)redhat.com> wrote:
>> >>>>>>>
>> >>>>>>>> Can you see via the engine on what host the VM runs?
>> >>>>>>>> Anyway, if you really have the VM running on a host, you can try
>> >>>>>>>> to find it with 'ps aux | grep qemu'; if that returns a process,
>> >>>>>>>> you can just kill it via 'kill pid'.
>> >>>>>>>> I hope it will help you.
>> >>>>>>>>
>> >>>>>>>> ----- Original Message -----
>> >>>>>>>> From: "Mark Steele" < <
<msteele@telvue.com>msteele(a)telvue.com>
>> <msteele@telvue.com>msteele(a)telvue.com>
>> >>>>>>>> To: "Artyom Lukianov" <
<alukiano(a)redhat.com>
>> <alukiano@redhat.com>alukiano(a)redhat.com>
>> >>>>>>>> Cc: <
<users@ovirt.org>users(a)ovirt.org> <users(a)ovirt.org>
>> users(a)ovirt.org
>> >>>>>>>> Sent: Thursday, July 9, 2015 5:42:20 PM
>> >>>>>>>> Subject: Re: [ovirt-users] This VM is not
managed by the engine
>> >>>>>>>>
>> >>>>>>>> Artyom,
>> >>>>>>>>
>> >>>>>>>> Thank you - I don't have vdsClient installed
- can you point me
>> to
>> >>>>>>>> the
>> >>>>>>>> download?
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>> ***
>> >>>>>>>> *Mark Steele*
>> >>>>>>>> CIO / VP Technical Operations | TelVue
Corporation
>> >>>>>>>> TelVue - We Share Your Vision
>> >>>>>>>> 800.885.8886 x128 <800.885.8886%20x128>
<800.885.8886%20x128>
>> | < <msteele@telvue.com>msteele(a)telvue.com>
>> >>>>>>>> <msteele@telvue.com>msteele(a)telvue.com |
<
http://www.telvue.com
>> > <
http://www.telvue.com>http://www.telvue.com
>> >>>>>>>> twitter: <
<
http://twitter.com/telvue>http://twitter.com/telvue
>> > <
http://twitter.com/telvue>http://twitter.com/telvue |
>> >>>>>>>> facebook:
>> >>>>>>>> < <
https://www.facebook.com/telvue>
>>
https://www.facebook.com/telvue> <
https://www.facebook.com/telvue>
>>
https://www.facebook.com/telvue
>> >>>>>>>>
>> >>>>>>>> On Thu, Jul 9, 2015 at 10:11 AM, Artyom Lukianov
<
>> >>>>>>>> <
<alukiano@redhat.com>alukiano(a)redhat.com>
>> <alukiano@redhat.com>alukiano(a)redhat.com>
>> >>>>>>>> wrote:
>> >>>>>>>>
>> >>>>>>>> > Please check the host where the VM runs (vdsClient -s 0 list
>> >>>>>>>> > table), and you can destroy it via vdsClient (vdsClient -s 0
>> >>>>>>>> > destroy vm_id).
>> >>>>>>>> > Thanks
>> >>>>>>>> >
>> >>>>>>>> > ----- Original Message -----
>> >>>>>>>> > From: "Mark Steele" < <
<msteele(a)telvue.com>
>> msteele(a)telvue.com> <msteele@telvue.com>msteele(a)telvue.com>
>> >>>>>>>> > To: <
<users@ovirt.org>users(a)ovirt.org> <users(a)ovirt.org>
>> users(a)ovirt.org
>> >>>>>>>> > Sent: Thursday, July 9, 2015 4:38:32 PM
>> >>>>>>>> > Subject: [ovirt-users] This VM is not
managed by the engine
>> >>>>>>>> >
>> >>>>>>>> > I have a VM that was not started and is now showing as running.
>> >>>>>>>> > When I attempt to suspend or stop it in the ovirt-shell, I get
>> >>>>>>>> > the message:
>> >>>>>>>> >
>> >>>>>>>> > status: 400
>> >>>>>>>> > reason: bad request
>> >>>>>>>> > detail: Cannot hibernate VM. This VM is not managed by the
>> >>>>>>>> > engine.
>> >>>>>>>> >
>> >>>>>>>> > Not sure how the VM was initially created on the oVirt manager.
>> >>>>>>>> > This VM is not needed - how can I shut down and remove this VM?
>> >>>>>>>> >
>> >>>>>>>> > Thanks
>> >>>>>>>> >
>> >>>>>>>> > ***
>> >>>>>>>> > Mark Steele
>> >>>>>>>> > CIO / VP Technical Operations | TelVue Corporation
>> >>>>>>>> > TelVue - We Share Your Vision
>> >>>>>>>> > 800.885.8886 x128 | msteele(a)telvue.com | http://www.telvue.com
>> >>>>>>>> > twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
>> >>>>>>>> >
>> >>>>>>>> >
>> >>>>>>>> > _______________________________________________
>> >>>>>>>> > Users mailing list
>> >>>>>>>> > Users(a)ovirt.org
>> >>>>>>>> > http://lists.ovirt.org/mailman/listinfo/users
>> >>>>>>>> >
>> >>>>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> _______________________________________________
>> >>>>>>> Users mailing list
>> >>>>>>> Users(a)ovirt.org
>> >>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>> >>>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>
>> >>>>>>
>> >>>>>
>> >>>>>
>> >>>>
>> >>>>
>> >>>
>> >>>
>> >>
>> >
>>
>
>