<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Dec 2, 2016 at 5:03 PM, Cristian Mammoli <span dir="ltr">&lt;<a href="mailto:c.mammoli@apra.it" target="_blank">c.mammoli@apra.it</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Out of desperation I added the missing section in the OVF I extracted and &quot;tarred&quot; it back to the storage domain. The NIC is now back...<br></blockquote><div><br></div><div>Unfortunately I have to tell you that the engine periodically rewrites the OVF_STORE based on the configuration of the VM in its DB, so in principle this too is just a temporary solution.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
I don&#39;t know if the query you suggested still makes sense now... :/<br>
Anyway, I don&#39;t have any NIC in the GUI even now.<br>
<br>
[root@ovengine ~]#  sudo -u postgres psql engine -c &quot;select * from vm_device where type=&#39;interface&#39; and vm_id=&#39;497f5e4a-0c76-441a-b72e-724d7092d07e&#39;&quot;<br>
could not change directory to &quot;/root&quot;<br>
              device_id               | vm_id                 |   type    | device |                           address                            | boot_order | spec_params | is_managed | is_plugged | is_readonly |         _create_date          | _update_date | alias | custom_properties | snaps<br>
hot_id | logical_name | is_using_scsi_reservation<br>
------------------------------<wbr>--------+---------------------<wbr>-----------------+-----------+<wbr>--------+---------------------<wbr>------------------------------<wbr>-----------+------------+-----<wbr>--------+------------+--------<wbr>----+-------------+-----------<wbr>--------------------+---------<wbr>-----+-------+----------------<wbr>---+------<br>
-------+--------------+-------<wbr>--------------------<br>
 6207e0d7-4dc9-406d-ab99-3facf<wbr>45788f4 | 497f5e4a-0c76-441a-b72e-724d70<wbr>92d07e | interface | bridge | {slot=0x04, bus=0x00, domain=0x0000, type=pci, function=0x0} |          0 | { }         | f          | t          | f           | 2016-12-02 01:46:42.999885+01 |              | net0 |                   |<br>
       |              | f<br>
(1 row)<br></blockquote><div><br></div><div><br></div><div>But this seems fine now.</div><div>Adding Roy on this.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
[root@ovengine ~]#     sudo -u postgres psql engine -c &quot;select * from vms where vm_guid=&#39;497f5e4a-0c76-441a-b72e-724d7092d07e&#39;&quot;<br>
could not change directory to &quot;/root&quot;<br>
   vm_name    | mem_size_mb | num_of_io_threads | nice_level | cpu_shares |               vmt_guid               | os | description | free_text_comment |             vds_group_id             | creation_date        | auto_startup | is_stateless | is_smartcard_enabled | is_delete_protected | sso_method  |<br>
 dedicated_vm_for_vds | fail_back | default_boot_sequence | vm_type | vm_pool_spice_proxy | vds_group_name | transparent_hugepages | trusted_service |           storage_pool_id            | storage_pool_name |   vds_group_description    | vds_group_spice_proxy     | vmt_name | vmt_mem_size_mb | vmt_os |<br>
 vmt_creation_date    | vmt_child_count | vmt_num_of_sockets | vmt_cpu_per_socket | vmt_threads_per_cpu | vmt_num_of_cpus | vmt_description | status |     vm_ip     | vm_ip_inet_array | vm_host      | vm_pid | last_start_time | guest_cur_user_name | console_cur_user_name |          guest_os          | co<br>
nsole_user_id | guest_agent_nics_hash | run_on_vds              | migrating_to_vds | app_list | vm_pool_name | vm_pool_id |<br>
          vm_guid                | num_of_monitors | single_qxl_pci | allow_console_reconnect | is_initialized | num_of_sockets | cpu_per_socket | threads_per_cpu | usb_policy | acpi_enable | session | num_of_cpus | quota_id | quota_name | quota_enforcement_type | kvm_enable | boot_sequence | utc_diff | last_<br>
vds_run_on | client_ip | guest_requested_memory | time_zone | cpu_user | cpu_sys | memory_usage_history |                                            cpu_usage_history<br>
| network_usage_history                              | elapsed_time | usage_network_percent | disks_usage                              | usage_mem_percent | migration_progress_percent | usage_cpu_percent | run_on_vds_name |    vds_group_cpu_name    | de<br>
fault_display_type | priority | iso_path | origin | vds_group_compatibility_versio<wbr>n | initrd_url | kernel_url | kernel_params | pause_status | exit_message | exit_status | migration_support | predefined_properties | userdefined_properties | min_allocated_mem |         hash         | cpu_pinning | db_generatio<br>
n | host_cpu_flags | tunnel_migration | vnc_keyboard_layout | is_run_and_pause | created_by_user_id | last_watchdog_event | last_watchdog_action | is_run_once |      vm_fqdn      | cpu_name | emulated_machine | current_cd | reason | exit_reason | instance_type_id | image_type_id | architecture | original_temp<br>
late_id | original_template_name |       last_stop_time       | migration_downtime | template_version_number | serial_number_policy | custom_serial_number | is_boot_menu_enabled | guest_cpu_count | next_run_config_exists | numatune_mode | is_spice_file_transfer_enabled | is_spice_copy_paste_enabled |<br>
   cpu_profile_id            | is_auto_converge | is_migrate_compressed | custom_emulated_machine | custom_cpu_name | spice_port | spice_tls_port | spice_ip | vnc_port |     vnc_ip     | guest_agent_status | guest_mem_buffered | guest_mem_cached | guest_mem_free |            small_icon_id             |<br>
     large_icon_id             | provider_id | console_disconnect_action | guest_timezone_offset | guest_timezone_name | guestos_arch | guestos_codename | guestos_distribution |   guestos_kernel_version   | guestos_type | guestos_version<br>
--------------+-------------+-<wbr>------------------+-----------<wbr>-+------------+---------------<wbr>-----------------------+----+-<wbr>------------+-----------------<wbr>--+---------------------------<wbr>-----------+------------------<wbr>----------+--------------+----<wbr>----------+-------------------<wbr>---+---------------------+----<wbr>---------+<br>
----------------------+-------<wbr>----+-----------------------+-<wbr>--------+---------------------<wbr>+----------------+------------<wbr>-----------+-----------------+<wbr>------------------------------<wbr>--------+-------------------+-<wbr>---------------------------+--<wbr>-----------------------------+<wbr>----------+-----------------+-<wbr>-------+--<br>
----------------------+-------<wbr>----------+-------------------<wbr>-+--------------------+-------<wbr>--------------+---------------<wbr>--+-----------------+--------+<wbr>---------------+--------------<wbr>----+-------------------+-----<wbr>---+-----------------+--------<wbr>-------------+----------------<wbr>-------+----------------------<wbr>------+---<br>
--------------+---------------<wbr>--------+---------------------<wbr>-----------------+------------<wbr>------+-----------------------<wbr>------------------------------<wbr>------------------------------<wbr>------------------------------<wbr>------------------------------<wbr>------------------------------<wbr>------+--------------+--------<wbr>----+-----<br>
------------------------------<wbr>---+-----------------+--------<wbr>--------+---------------------<wbr>----+----------------+--------<wbr>--------+----------------+----<wbr>-------------+------------+---<wbr>----------+---------+---------<wbr>----+----------+------------+-<wbr>-----------------------+------<wbr>------+---------------+-------<wbr>---+------<br>
-----------+-----------+------<wbr>------------------+-----------<wbr>+----------+---------+--------<wbr>------------------------------<wbr>------------------------------<wbr>------------------------------<wbr>-----------------------+------<wbr>------------------------------<wbr>------------------------------<wbr>------------------------------<wbr>----------<br>
+-----------------------------<wbr>------------------------------<wbr>----------------------+-------<wbr>-------+----------------------<wbr>-+----------------------------<wbr>------------------------------<wbr>------------+-----------------<wbr>--+---------------------------<wbr>-+-------------------+--------<wbr>---------+--------------------<wbr>------+---<br>
-------------------+----------<wbr>+----------+--------+---------<wbr>------------------------+-----<wbr>-------+------------+---------<wbr>------+--------------+--------<wbr>------+-------------+---------<wbr>----------+-------------------<wbr>----+------------------------+<wbr>-------------------+----------<wbr>------------+-------------+---<wbr>----------<br>
--+----------------+----------<wbr>--------+---------------------<wbr>+------------------+----------<wbr>----------+-------------------<wbr>--+----------------------+----<wbr>---------+-------------------+<wbr>----------+------------------+<wbr>------------+--------+--------<wbr>-----+------------------+-----<wbr>----------+--------------+----<wbr>----------<br>
--------+---------------------<wbr>---+--------------------------<wbr>--+--------------------+------<wbr>-------------------+----------<wbr>------------+-----------------<wbr>-----+----------------------+-<wbr>----------------+-------------<wbr>-----------+---------------+--<wbr>------------------------------<wbr>+-----------------------------<wbr>+---------<br>
-----------------------------+<wbr>------------------+-----------<wbr>------------+-----------------<wbr>--------+-----------------+---<wbr>---------+----------------+---<wbr>-------+----------+-----------<wbr>-----+--------------------+---<wbr>-----------------+------------<wbr>------+----------------+------<wbr>------------------------------<wbr>--+-------<br>
------------------------------<wbr>-+-------------+--------------<wbr>-------------+----------------<wbr>-------+---------------------+<wbr>--------------+---------------<wbr>---+----------------------+---<wbr>-------------------------+----<wbr>----------+-----------------<br>
 HostedEngine |        6144 |                 0 |          0 |          0 | 00000000-0000-0000-0000-000000<wbr>000000 |  0 |             |                   | 00000002-0002-0002-0002-000000<wbr>0000ca | 2015-11-03 16:54:06.536+01 | f            | f            | f                    | f                   | guest_agent |<br>
                      | f         |                     0 |       1 |                     | Default        | t                     | f               | 00000001-0001-0001-0001-000000<wbr>000296 | Default           | The default server cluster | <a href="http://ovengine.omme.net:3128" rel="noreferrer" target="_blank">http://ovengine.omme.net:3128</a> | Blank    |            1024 |      0 | 2<br>
008-03-31 23:00:00+02 |              11 |                  1 |                  1 |                   1 |               1 | Blank template  |      1 | 192.168.42.27 | {192.168.42.27}  | <a href="http://ovengine.omme.net" rel="noreferrer" target="_blank">ovengine.omme.net</a> |        |                 | root |                       | 3.10.0-327.36.3.el7.x86_64 |<br>
              |            1234360272 | 572aa833-37fb-4c4b-9576-9d367d<wbr>ef2d04 |                  | kernel-3.10.0-229.14.1.el7,ker<wbr>nel-3.10.0-327.36.3.el7,cloud-<wbr>init-0.7.5-10.el7.centos.1,<wbr>kernel-3.10.0-229.20.1.el7,<wbr>kernel-3.10.0-327.4.4.el7,<wbr>ovirt-guest-agent-common-1.0.<wbr>12-3.el7 |              |            | 497f<br>
5e4a-0c76-441a-b72e-724d7092d0<wbr>7e |               1 | f | f                       | f              |              2 |              1 |               1 |          1 | t |       0 |           2 |          | |                      0 | t          |             0 |        0 |<br>
           |           |                        | Etc/GMT   | 16 |       1 | 37,37,37,37,37,37,38,38,37,37,<wbr>38,38,38,38,38,38,38,38,38,38,<wbr>37,37,37,37,37,37,37,37,37,37,<wbr>37,37,37,37,37,37,37,37,37,37 | 23,16,16,34,22,25,26,22,12,11,<wbr>12,25,11,13,12,14,12,11,8,12,9<wbr>,8,7,13,10,10,9,9,8,8,9,13,9,5<wbr>,6,9,8,11,11,8<br>
| 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,<wbr>0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,<wbr>0,0,0,0,0,0,0,0,0,0 |        55056 |                     0 | [{&quot;path&quot;:&quot;/&quot;,&quot;total&quot;:&quot;84416028<wbr>672&quot;,&quot;used&quot;:&quot;7691350016&quot;,&quot;fs&quot;:<wbr>&quot;ext4&quot;}] |                37 |                          0 |                 8 | kvm02           | Intel SandyBridge Family |<br>
                 0 |        1 |          |      6 | 3.6                             |            | |               |            5 |              |           0 |                 1 |                       | |              6144 | -1125579430873805323 | |            2<br>
2 | f              |                  |                     | f                |                    | |                      | f           | <a href="http://ovengine.omme.net" rel="noreferrer" target="_blank">ovengine.omme.net</a> | |                  |            |        |          -1 |                  |               |            1 |<br>
        |                        | 2016-11-29 11:37:07.501+01 |               1000 |                       1 |                      |                      | f |               2 | f                      | interleave    | t                              | t                           | 0000000e<br>
-000e-000e-000e-000000000039 | |                       | pc                      | |            |                |          |     5900 | <a href="http://kvm02.omme.net" rel="noreferrer" target="_blank">kvm02.omme.net</a> |                  0 |             102080 |           846088 |        3916916 | 1807edab-f180-4268-b364-c6cc9b<wbr>65b602 | 0330e1<br>
6e-25c4-458d-8741-17c92d160d6a |             | LOCK_SCREEN               |                    60 | Europe/Rome         |            1 | Core             | centos               | 3.10.0-327.36.3.el7.x86_64 | Linux        | 7.2.1511<br>
(1 row)<br>
<br>
[root@ovengine ~]#<br>
<br>
<br>
On 01/12/2016 18:11, Simone Tiraboschi wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="h5">
<br>
<br>
On Thu, Dec 1, 2016 at 5:16 PM, Cristian Mammoli &lt;<a href="mailto:c.mammoli@apra.it" target="_blank">c.mammoli@apra.it</a>&gt; wrote:<br>
<br>
    Here it is:<br>
    <a href="http://cloud.apra.it/index.php/s/4cdcde8cafdb7a1c2c2374b02dce118e" rel="noreferrer" target="_blank">http://cloud.apra.it/index.php/s/4cdcde8cafdb7a1c2c2374b02dce118e</a><br>
<br>
    I tarred up all the agent.log files from both servers.<br>
<br>
    The engine was running on kvm01 and got shut down on kvm01 around<br>
    10:35 AM on 29 November. But I don&#39;t think that&#39;s the problem: it is<br>
    supposed to shut down if the host can&#39;t reach the gateway.<br>
    The NIC problem was probably already there and was only triggered by the reboot.<br>
<br>
    Btw I kept digging: I extracted the ovf from which vm.conf is<br>
    generated:<br>
<br>
    ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)<br>
    OVF_STORE volume path:<br>
    /rhev/data-center/mnt/blockSD/2c3585cc-b7bc-4881-85b3-aa6514991a26/images/9c5e2121-f1a3-4886-964c-c74fdfbbb3c1/ff765055-09c5-4b05-9cc7-5277b15c5d08<br>
<br>
    # tar xvf /rhev/data-center/mnt/blockSD/2c3585cc-b7bc-4881-85b3-aa6514991a26/images/9c5e2121-f1a3-4886-964c-c74fdfbbb3c1/ff765055-09c5-4b05-9cc7-5277b15c5d08<br>
    497f5e4a-0c76-441a-b72e-724d7092d07e.ovf<br>
    info.json<br>
<br>
    In the OVF file there is no NIC section...<br>
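(A minimal sketch of how such a check can be done; the file and XML snippet below are made-up stand-ins for the real .ovf extracted from the OVF_STORE tar above, not the actual oVirt OVF schema:)

```shell
# Hypothetical stand-in for the extracted engine OVF; the real file would
# normally carry an Item describing the NIC (device "bridge", type "interface").
cat > engine.ovf <<'EOF'
<ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/">
  <Content>
    <Item><Type>disk</Type><Device>disk</Device></Item>
  </Content>
</ovf:Envelope>
EOF
# Count interface items; a 0 reproduces the symptom: no NIC section in the OVF.
grep -c '<Type>interface</Type>' engine.ovf || true
```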
<br>
<br>
Ciao Cristian,<br>
do you see any interface for the engine VM in the engine admin portal?<br>
<br>
Could you please execute this on the engine VM and share its output?<br>
    sudo -u postgres psql engine -c &quot;select * from vm_device where type=&#39;interface&#39; and vm_id=&#39;497f5e4a-0c76-441a-b72e-724d7092d07e&#39;&quot;<br>
    sudo -u postgres psql engine -c &quot;select * from vms where vm_guid=&#39;497f5e4a-0c76-441a-b72e-724d7092d07e&#39;&quot;<br>
<br>
thanks<br>
<br>
    I uploaded the ovf on the same share as the logs<br>
<br>
    Ty<br>
<br>
<br>
    On 01/12/2016 15:26, Yedidyah Bar David wrote:<br>
<br>
        On Thu, Dec 1, 2016 at 1:08 PM, Cristian Mammoli<br></div></div><div><div class="h5">
        &lt;<a href="mailto:c.mammoli@apra.it" target="_blank">c.mammoli@apra.it</a> &lt;mailto:<a href="mailto:c.mammoli@apra.it" target="_blank">c.mammoli@apra.it</a>&gt;&gt; wrote:<br>
<br>
            Hi, I upgraded an oVirt installation a month ago to the<br>
            latest 3.6.7; before that it was 3.6.0, if I remember correctly.<br>
            Everything went fine for a month or so.<br>
<br>
            A couple of days ago the default gateway got rebooted,<br>
            and the physical server hosting the HE decided to shut<br>
            down the VM because it could not ping the gateway.<br>
            The other host restarted the HE VM, but it now has *no NIC*.<br>
            As a workaround I attached a virtio NIC via virsh, but<br>
            every time the VM gets restarted the NIC gets lost.<br>
<br>
            After a bit of troubleshooting and digging this is what I<br>
            found:<br>
<br>
            This is the /var/run/ovirt-hosted-engine-ha/vm.conf which,<br>
            as far as I understand, gets extracted from the HE storage domain:<br>
<br>
            emulatedMachine=pc<br>
            vmId=497f5e4a-0c76-441a-b72e-724d7092d07e<br>
            smp=2<br>
            memSize=6144<br>
            spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir<br>
            vmName=HostedEngine<br>
            display=vnc<br>
            devices={index:0,iface:virtio,format:raw,bootOrder:1,address:{slot:0x06,bus:0x00,domain:0x0000,type:pci,function:0x0},volumeID:bb3218ba-cbe9-4cd0-b50b-931deae992f7,imageID:d65b82e2-2ad1-4f4f-bfad-0277c37f2808,readonly:false,domainID:2c3585cc-b7bc-4881-85b3-aa6514991a26,deviceId:d65b82e2-2ad1-4f4f-bfad-0277c37f2808,poolID:00000000-0000-0000-0000-000000000000,device:disk,shared:exclusive,propagateErrors:off,type:disk}<br>
            devices={index:2,iface:ide,shared:false,readonly:true,deviceId:8c3179ac-b322-4f5c-9449-c52e3665e0ae,address:{controller:0,target:0,unit:0,bus:1,type:drive},device:cdrom,path:,type:disk}<br>
            devices={device:cirrus,alias:video0,type:video,deviceId:a99468b6-02d4-4a77-8f94-e5df806030f6,address:{slot:0x02,bus:0x00,domain:0x0000,type:pci,function:0x0}}<br>
            devices={device:virtio-serial,type:controller,deviceId:b7580676-19fb-462f-a61e-677b65ad920a,address:{slot:0x03,bus:0x00,domain:0x0000,type:pci,function:0x0}}<br>
            devices={device:usb,type:controller,deviceId:c63092b3-7bd8-4b54-bcd3-51f34dce478a,address:{slot:0x01,bus:0x00,domain:0x0000,type:pci,function:0x2}}<br>
            devices={device:ide,type:controller,deviceId:c77c2c01-6ccc-404b-b8d6-5a7f0631a52f,address:{slot:0x01,bus:0x00,domain:0x0000,type:pci,function:0x1}}<br>
<br>
            As you can see there is no NIC, and there is no NIC in the<br>
            qemu-kvm command line:<br>
            qemu     23290     1 14 00:23 ?        01:44:26 /usr/libexec/qemu-kvm -name HostedEngine -S<br>
            -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu qemu64,-svm -m 6144 -realtime mlock=off<br>
            -smp 2,sockets=2,cores=1,threads=1 -uuid 497f5e4a-0c76-441a-b72e-724d7092d07e<br>
            -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-2.1511.el7.centos.2.10,serial=4C4C4544-004B-5710-8044-B9C04F5A3732,uuid=497f5e4a-0c76-441a-b72e-724d7092d07e<br>
            -no-user-config -nodefaults<br>
            -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-HostedEngine/monitor.sock,server,nowait<br>
            -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2016-11-30T23:23:26,driftfix=slew<br>
            -global kvm-pit.lost_tick_policy=discard -no-hpet -no-reboot -boot strict=on<br>
            -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2<br>
            -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x3<br>
            -drive file=/var/run/vdsm/storage/2c3585cc-b7bc-4881-85b3-aa6514991a26/d65b82e2-2ad1-4f4f-bfad-0277c37f2808/bb3218ba-cbe9-4cd0-b50b-931deae992f7,if=none,id=drive-virtio-disk0,format=raw,serial=d65b82e2-2ad1-4f4f-bfad-0277c37f2808,cache=none,werror=stop,rerror=stop,aio=native<br>
            -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1<br>
            -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw<br>
            -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0<br>
            -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/497f5e4a-0c76-441a-b72e-724d7092d07e.com.redhat.rhevm.vdsm,server,nowait<br>
            -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm<br>
            -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/497f5e4a-0c76-441a-b72e-724d7092d07e.org.qemu.guest_agent.0,server,nowait<br>
            -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0<br>
            -chardev socket,id=charchannel2,path=/var/lib/libvirt/qemu/channels/497f5e4a-0c76-441a-b72e-724d7092d07e.org.ovirt.hosted-engine-setup.0,server,nowait<br>
            -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0<br>
            -vnc 0:0,password -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -msg timestamp=on<br>
<br>
            I extracted the vm.conf from the storage domain, and the<br>
            NIC is there:<br>
            vmId=497f5e4a-0c76-441a-b72e-724d7092d07e<br>
            memSize=6144<br>
            display=vnc<br>
            devices={index:2,iface:ide,address:{ controller:0, target:0,unit:0, bus:1, type:drive},specParams:{},readonly:true,deviceId:857b98b3-cf43-4c2d-8061-e7f105234a65,path:,device:cdrom,shared:false,type:disk}<br>
            devices={index:0,iface:virtio,format:raw,poolID:00000000-0000-0000-0000-000000000000,volumeID:bb3218ba-cbe9-4cd0-b50b-931deae992f7,imageID:d65b82e2-2ad1-4f4f-bfad-0277c37f2808,specParams:{},readonly:false,domainID:2c3585cc-b7bc-4881-85b3-aa6514991a26,optional:false,deviceId:d65b82e2-2ad1-4f4f-bfad-0277c37f2808,address:{bus:0x00, slot:0x06, domain:0x0000, type:pci, function:0x0},device:disk,shared:exclusive,propagateErrors:off,type:disk,bootOrder:1}<br>
            devices={device:scsi,model:virtio-scsi,type:controller}<br>
            devices={nicModel:pv,macAddr:00:16:3e:7d:d8:27,linkActive:true,network:ovirtmgmt,filter:vdsm-no-mac-spoofing,specParams:{},deviceId:5be8a089-9f51-46dc-a8bd-28422985aa35,address:{bus:0x00, slot:0x03, domain:0x0000, type:pci, function:0x0},device:bridge,type:interface}<br>
            devices={device:console,specParams:{},type:console,deviceId:1644f556-a4ff-4c93-8945-5aa165de2a85,alias:console0}<br>
            vmName=HostedEngine<br>
            spiceSecureChannels=smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir<br>
            smp=2<br>
            cpuType=SandyBridge<br>
            emulatedMachine=pc<br>
<br>
            The local vm.conf gets continuously overwritten, but for<br>
            some reason the NIC line gets lost in the process.<br>
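(A minimal sketch of how the two copies can be compared; the vm.conf contents below are trimmed stand-ins for the local and storage-domain files quoted in this thread, with the MAC and network taken from the storage-domain copy above:)

```shell
# Trimmed stand-ins for the two vm.conf copies discussed in the thread.
cat > vm.conf.local <<'EOF'
vmName=HostedEngine
devices={device:cirrus,alias:video0,type:video}
EOF
cat > vm.conf.storage <<'EOF'
vmName=HostedEngine
devices={nicModel:pv,macAddr:00:16:3e:7d:d8:27,network:ovirtmgmt,device:bridge,type:interface}
EOF
# A vm.conf with a working NIC must carry a devices= line with type:interface.
for f in vm.conf.local vm.conf.storage; do
  if grep -q 'type:interface' "$f"; then
    echo "$f: nic present"
  else
    echo "$f: nic missing"
  fi
done
# → vm.conf.local: nic missing
# → vm.conf.storage: nic present
```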
<br>
        Can you please check/share<br>
        /var/log/ovirt-hosted-engine-ha/agent.log?<br>
        Preferably all of it (including backups)? Thanks.<br>
<br>
<br>
    --     Mammoli Cristian<br>
    System administrator<br></div></div>
    T. <a href="tel:%2B39%200731%2022911" value="+39073122911" target="_blank">+39 0731 22911</a><span class=""><br>
    Via Brodolini 6 | 60035 Jesi (an)<br>
<br>
<br>
    _______________________________________________<br>
    Users mailing list<br></span>
    <a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
    <a href="http://lists.ovirt.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
<br>
<br>
</blockquote><div class="HOEnZb"><div class="h5">
<br>
-- <br>
Mammoli Cristian<br>
System administrator<br>
T. <a href="tel:%2B39%200731%2022911" value="+39073122911" target="_blank">+39 0731 22911</a><br>
Via Brodolini 6 | 60035 Jesi (an)<br>
<br>
</div></div></blockquote></div><br></div></div>