I ran some more tests. I have two hosts (hosted_engine1 and hosted_engine2) configured as hosted-engine hosts. After I took down the network interface on hosted_engine1 (ifdown bond0; ovirtmgmt runs over bond0), hosted_engine2 tries to start the engine VM:


==> /var/log/ovirt-hosted-engine-ha/agent.log <==
MainThread::INFO::2016-11-07 18:17:18,598::ovf_store::103::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:8f567072-82d7-4a80-b643-76a1e1d273ef, volUUID:7b12f649-b8cd-426a-95d8-37bb223ac86f
MainThread::INFO::2016-11-07 18:17:18,875::ovf_store::112::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) Extracting Engine VM OVF from the OVF_STORE
MainThread::INFO::2016-11-07 18:17:18,896::ovf_store::119::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF) OVF_STORE volume path: /rhev/data-center/mnt/blockSD/4c0104e0-dc8d-44b6-ab70-9d46599eb7f6/images/8f567072-82d7-4a80-b643-76a1e1d273ef/7b12f649-b8cd-426a-95d8-37bb223ac86f 
MainThread::INFO::2016-11-07 18:17:18,991::config::226::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Found an OVF for HE VM, trying to convert
MainThread::INFO::2016-11-07 18:17:18,994::config::231::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(refresh_local_conf_file) Got vm.conf from OVF_STORE
MainThread::INFO::2016-11-07 18:17:18,994::hosted_engine::1148::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Ensuring VDSM state is clear for engine VM
MainThread::INFO::2016-11-07 18:17:19,011::hosted_engine::1160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_clean_vdsm_state) Vdsm state for VM clean
MainThread::INFO::2016-11-07 18:17:19,011::hosted_engine::1109::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Starting vm using `/usr/sbin/hosted-engine --vm-start`
MainThread::INFO::2016-11-07 18:17:19,410::hosted_engine::1115::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stdout: 
fe6a0618-62aa-434d-b9b8-459603a690ce
	Status = WaitForLaunch
	nicModel = rtl8139,pv
	statusTime = 4905357140
	emulatedMachine = pc
	pid = 0
	vmName = HostedEngine
	devices = [{'index': '0', 'iface': 'virtio', 'format': 'raw', 'bootOrder': '1', 'address': 'None', 'volumeID': '5e24a8ba-ff1f-409b-918c-b0c31565ae89', 'imageID': '23ec5c8d-d154-488b-8dd2-de29a0fd941d', 'readonly': 'false', 'domainID': '4c0104e0-dc8d-44b6-ab70-9d46599eb7f6', 'deviceId': '23ec5c8d-d154-488b-8dd2-de29a0fd941d', 'poolID': '00000000-0000-0000-0000-000000000000', 'device': 'disk', 'shared': 'exclusive', 'propagateErrors': 'off', 'type': 'disk'}, {'nicModel': 'pv', 'macAddr': '00:16:3e:1c:f3:7c', 'linkActive': 'true', 'network': 'ovirtmgmt', 'deviceId': '113617eb-d506-4df1-b542-26248ffd8027', 'address': 'None', 'device': 'bridge', 'type': 'interface'}, {'index': '2', 'iface': 'ide', 'readonly': 'true', 'deviceId': '8c3179ac-b322-4f5c-9449-c52e3665e0ae', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'}]
	guestDiskMapping = {}
	vmType = kvm
	displaySecurePort = -1
	memSize = 4096
	displayPort = -1
	clientIp = 
	spiceSecureChannels = smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
	smp = 2
	displayIp = 0
	display = vnc

MainThread::INFO::2016-11-07 18:17:19,411::hosted_engine::1116::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) stderr: /usr/share/vdsm/vdsClient.py:33: DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is deprecated, please use vdsm.jsonrpcvdscli
  from vdsm import utils, vdscli, constants
/usr/share/vdsm/vdsClient.py:33: DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is deprecated, please use vdsm.jsonrpcvdscli
  from vdsm import utils, vdscli, constants

MainThread::INFO::2016-11-07 18:17:19,411::hosted_engine::1128::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_start_engine_vm) Engine VM started on localhost
MainThread::INFO::2016-11-07 18:17:19,416::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Trying: notify time=1478542639.42 type=state_transition detail=EngineStart-EngineStarting hostname='ied-blade11.install.eurotux.local'
MainThread::INFO::2016-11-07 18:17:19,681::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify) Success, was notification of state_transition (EngineStart-EngineStarting) sent? sent
MainThread::INFO::2016-11-07 18:17:19,681::hosted_engine::612::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_vdsm) Initializing VDSM

But I get the following error, and I don't see hosted_engine1 being fenced:

==> /var/log/ovirt-hosted-engine-ha/broker.log <==
Thread-1007::INFO::2016-11-07 18:17:21,407::ping::52::ping.Ping::(action) Successfully pinged 10.10.4.254
Thread-297080::INFO::2016-11-07 18:17:21,622::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup) Connection established
Thread-297080::INFO::2016-11-07 18:17:21,676::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Connection closed
Thread-297081::INFO::2016-11-07 18:17:21,676::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup) Connection established
Thread-297081::INFO::2016-11-07 18:17:21,678::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Connection closed
Thread-297082::INFO::2016-11-07 18:17:21,678::listener::134::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(setup) Connection established
Thread-297082::INFO::2016-11-07 18:17:21,680::listener::186::ovirt_hosted_engine_ha.broker.listener.ConnectionHandler::(handle) Connection closed
Thread-1010::ERROR::2016-11-07 18:17:23,596::cpu_load_no_engine::156::cpu_load_no_engine.EngineHealth::(update_stat_file) Failed to getVmStats: 'pid'
Thread-1010::INFO::2016-11-07 18:17:23,597::cpu_load_no_engine::121::cpu_load_no_engine.EngineHealth::(calculate_load) System load total=0.0732, engine=0.0000, non-engine=0.0732
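If it helps, I can also collect the VM and sanlock state on hosted_engine2 right after the start attempt, something like:

  # what the HA agents report for each host and for the engine VM
  hosted-engine --vm-status

  # which lockspaces/resources sanlock currently holds on this host
  sanlock client status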

Can you help me understand what is going wrong?

Regards,
Carlos Rodrigues

On Tue, 2016-08-23 at 14:29 +0100, Carlos Rodrigues wrote:
On Tue, 2016-08-23 at 14:16 +0200, Simone Tiraboschi wrote:
On Mon, Aug 22, 2016 at 1:10 PM, Carlos Rodrigues <cmar@eurotux.com> wrote:
On Fri, 2016-08-19 at 11:50 +0100, Carlos Rodrigues wrote:
On Fri, 2016-08-19 at 12:24 +0200, Simone Tiraboschi wrote:
On Fri, Aug 19, 2016 at 12:07 PM, Carlos Rodrigues <cmar@eurotux.com> wrote:
On Fri, 2016-08-19 at 10:47 +0100, Carlos Rodrigues wrote:
On Fri, 2016-08-19 at 11:36 +0200, Simone Tiraboschi wrote:
On Fri, Aug 19, 2016 at 11:29 AM, Carlos Rodrigues <cmar@eurotux.com> wrote:
After night, the OVF_STORE it was created:
It's quite strange that it took so long, but now it looks fine. If the ISO_DOMAIN that I see in your screenshot is served by the engine VM itself, I suggest removing it and exporting it from an external server. Serving the ISO storage domain from the engine VM itself is not a good idea: when the engine VM is down you can experience long delays before getting the engine VM restarted due to the unavailable storage domain.
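If you move it, a plain NFS export from any external box is enough; just as an example (path and options made up for illustration):

  # on the external server: export a directory owned by vdsm:kvm (36:36)
  mkdir -p /exports/iso
  chown 36:36 /exports/iso
  echo '/exports/iso *(rw,sync,no_subtree_check)' >> /etc/exports
  exportfs -ra

Then attach it as a new ISO storage domain from the engine web-ui.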
OK, thank you for the advice. Now everything apparently is OK. I'll do more HA tests and I'll let you know about any issues. Thank you for your support.

Regards,
Carlos Rodrigues
I shut down the network on the host running the engine VM and expected the other host to fence it and start the engine VM, but I don't see any fence action. The "free" host keeps trying to start the VM but gets a sanlock error:

Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: qemu-kvm: sending ioctl 5326 to a partition!
Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: qemu-kvm: sending ioctl 80200204 to a partition!
Aug 19 11:03:03 ied-blade11.install.eurotux.local kvm[7867]: 1 guest now active
Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]: 2016-08-19 11:03:03+0100 1023 [903]: r3 paxos_acquire owner 1 delta 1 9 245502 alive
Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]: 2016-08-19 11:03:03+0100 1023 [903]: r3 acquire_token held error -243
Aug 19 11:03:03 ied-blade11.install.eurotux.local sanlock[884]: 2016-08-19 11:03:03+0100 1023 [903]: r3 cmd_acquire 2,9,7862 acquire_token -243 lease owned by other host
Aug 19 11:03:03 ied-blade11.install.eurotux.local libvirtd[1369]: resource busy: Failed to acquire lock: error -243
Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: ovirtmgmt: port 2(vnet0) entered disabled state
Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: device vnet0 left promiscuous mode
Aug 19 11:03:03 ied-blade11.install.eurotux.local kernel: ovirtmgmt: port 2(vnet0) entered disabled state
Aug 19 11:03:03 ied-blade11.install.eurotux.local kvm[7885]: 0 guests now active
Aug 19 11:03:03 ied-blade11.install.eurotux.local systemd-machined[7863]: Machine qemu-4-HostedEngine terminated.
Maybe you hit this one: https://bugzilla.redhat.com/show_bug.cgi?id=1322849 - can you please check it as described in comment 28 and, if needed, apply the workaround from comment 18?
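If you want to double check, compare the host_id each host uses for sanlock with the SPM id the engine assigned to it (the DB column name is from memory, so take it as a hint):

  # on each hosted-engine host
  grep ^host_id /etc/ovirt-hosted-engine/hosted-engine.conf

  # on the engine VM: the ids the engine assigned to the hosts
  sudo -u postgres psql engine -c "select vds_name, vds_spm_id from vds_static;"

Each host should have a unique id and, for a given host, the two values should match; otherwise sanlock sees conflicting owners for the same lease.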
Apparently the host-id is OK. I don't need to apply the workaround.
Any other suggestion? I see that the second host doesn't fence the failed host, and maybe this is what causes the lock on the hosted storage domain.
The picture still isn't that clear to me: ovirt-ha-agent does not try to fence other hosted-engine hosts. The communication between hosts simply happens through the metadata area on the shared storage. Do you also lose the connection to the storage domain when you play with the management network on your host?
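You can see what each host is publishing in that metadata area with:

  # prints the state and timestamps every host wrote to the shared metadata
  hosted-engine --vm-status

If both hosts keep reporting fresh timestamps there while the management network is down, the storage path is fine and only the network side is affected.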
No, the storage domain is connected over Fibre Channel. We saw that the agent reads the metadata and tries to start the engine VM, but after a couple of seconds it shuts the VM down because it is not able to acquire the sanlock lease.
After the network recovers, the engine VM checks the hosts and sends them fence commands. In some cases it sends fence commands to both hosts.

Regards,
Carlos Rodrigues
Regards,
Carlos Rodrigues

On Fri, 2016-08-19 at 08:29 +0200, Simone Tiraboschi wrote:
On Thu, Aug 18, 2016 at 6:38 PM, Carlos Rodrigues <cmar@eurotux.com> wrote:
On Thu, 2016-08-18 at 17:45 +0200, Simone Tiraboschi wrote:
On Thu, Aug 18, 2016 at 5:43 PM, Carlos Rodrigues <cmar@eurotux.com> wrote:
I increased the hosted_engine disk space to 160G. How do I force the OVF_STORE to be created?
I think that restarting the engine on the engine VM will trigger it although I'm not sure that it was a size issue.
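Something like this on the engine VM should be enough to retry it (then watch engine.log for the OVF update):

  # on the engine VM
  systemctl restart ovirt-engine
  tail -f /var/log/ovirt-engine/engine.log | grep -i ovf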
I found two OVF_STORE disks on another storage domain with "Domain Type" "Data (Master)".
Each storage domain has its own OVF_STORE volumes; you should get them also on the hosted-engine storage domain. I'm not really sure how to trigger it again; adding Roy here.
Regards,
Carlos Rodrigues

On Thu, 2016-08-18 at 12:14 +0100, Carlos Rodrigues wrote:
On Thu, 2016-08-18 at 12:34 +0200, Simone Tiraboschi wrote:
On Thu, Aug 18, 2016 at 12:11 PM, Carlos Rodrigues <cmar@eurotux.com> wrote:
On Thu, 2016-08-18 at 11:53 +0200, Simone Tiraboschi wrote:
On Thu, Aug 18, 2016 at 11:50 AM, Carlos Rodrigues <cmar@eurotux.com> wrote:
On Thu, 2016-08-18 at 11:42 +0200, Simone Tiraboschi wrote:
On Thu, Aug 18, 2016 at 11:25 AM, Carlos Rodrigues <cmar@eurotux.com> wrote:
On Thu, 2016-08-18 at 11:04 +0200, Simone Tiraboschi wrote:
On Thu, Aug 18, 2016 at 10:36 AM, Carlos Rodrigues <cmar@eurotux.com> wrote:
On Thu, 2016-08-18 at 10:27 +0200, Simone Tiraboschi wrote:
On Thu, Aug 18, 2016 at 10:22 AM, Carlos Rodrigues <cmar@eurotux.com> wrote:
On Thu, 2016-08-18 at 08:54 +0200, Simone Tiraboschi wrote:
On Tue, Aug 16, 2016 at 12:53 PM, Carlos Rodrigues <cmar@eurotux.com> wrote:
On Sun, 2016-08-14 at 14:22 +0300, Roy Golan wrote:
On 12 August 2016 at 20:23, Carlos Rodrigues <cmar@eurotux.com> wrote:
Hello, I have one cluster with two hosts (power management correctly configured) and one virtual machine with HostedEngine over shared storage with Fibre Channel. When I shut down the network of the host running the HostedEngine VM, should the HostedEngine VM migrate automatically to the other host?
migrate on which network?
What is the expected behaviour in this HA scenario?
After a few minutes your VM will be shut down by the High Availability agent, as it can't see the network, and started on another host.
I'm testing this scenario: after shutting down the network, I expected the agent to shut the VM down and start it on the other host, but after a couple of minutes nothing happens, and on the host that still has network we get the following message:

Aug 16 11:44:08 ied-blade11.install.eurotux.local ovirt-ha-agent[2779]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config ERROR Unable to get vm.conf from OVF_STORE, falling back to initial vm.conf

I think the HA agent is trying to get the VM configuration but somehow it can't get vm.conf to start the VM.
No, this is a different issue. In 3.6 we added a feature to let the engine also manage the engine VM itself; ovirt-ha-agent will pick up the latest engine VM configuration from the OVF_STORE, which is managed by the engine. If something goes wrong, ovirt-ha-agent can fall back to the initial (bootstrap time) vm.conf. This will normally happen until you add your first regular storage domain and the engine imports the engine VM.
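If you want to see which one the agent is actually using, you can compare the bootstrap copy with the one the agent refreshes at runtime (paths from memory, they may differ slightly between versions):

  # initial (bootstrap time) configuration
  cat /etc/ovirt-hosted-engine/vm.conf

  # copy refreshed by ovirt-ha-agent from the OVF_STORE and used to start the VM
  cat /var/run/ovirt-hosted-engine-ha/vm.conf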
But I already have my first storage domain and the engine storage domain, and the engine VM was already imported. I'm using version 4.0.
This seems to be an issue; can you please share your /var/log/ovirt-hosted-engine-ha/agent.log?
I sent it as an attachment.
Nothing strange here; do you see a couple of disks with alias OVF_STORE on the hosted-engine storage domain if you check it from the engine?
Do you mean a disk label? I don't have any:

[root@ied-blade11 ~]# ls /dev/disk/by-label/
ls: cannot access /dev/disk/by-label/: No such file or directory
No I mean: go to the engine web-ui, select the hosted-engine storage domain, check the disks there.
No, the alias is virtio-disk0.
And this is the engine VM disk, so the issue is why the engine still has to create the OVF_STORE. Can you please share your engine.log from the engine VM?
Sent as an attachment.
The creation of the OVF_STORE disk failed but it's not that clear why:

2016-08-17 08:43:33,538 ERROR [org.ovirt.engine.core.bll.storage.ovfstore.CreateOvfVolumeForStorageDomainCommand] (DefaultQuartzScheduler6) [6f1f1fd4] Ending command 'org.ovirt.engine.core.bll.storage.ovfstore.CreateOvfVolumeForStorageDomainCommand' with failure.
2016-08-17 08:43:33,540 ERROR [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (DefaultQuartzScheduler6) [6f1f1fd4] Ending command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' with failure.
2016-08-17 08:43:33,541 WARN [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (DefaultQuartzScheduler6) [6f1f1fd4] VmCommand::EndVmCommand: Vm is null - not performing endAction on Vm
2016-08-17 08:43:33,553 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler6) [6f1f1fd4] Correlation ID: 6f1f1fd4, Call Stack: null, Custom Event ID: -1, Message: Add-Disk operation failed to complete.
2016-08-17 08:43:33,557 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler6) [] Correlation ID: 19ac5bda, Call Stack: null, Custom Event ID: -1, Message: Failed to create OVF store disk for Storage Domain hosted_storage. OVF data won't be updated meanwhile for that domain.
2016-08-17 08:43:33,585 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (DefaultQuartzScheduler6) [5f5a8daf] Command 'ProcessOvfUpdateForStorageDomain' (id: '71aaaafe-7b9e-45e8-a40c-6d33bdf646a0') waiting on child command id: 'eb2e6f1a-c756-4ccd-85a1-60d97d6880de' type:'CreateOvfVolumeForStorageDomain' to complete
2016-08-17 08:43:33,595 ERROR [org.ovirt.engine.core.bll.storage.ovfstore.CreateOvfVolumeForStorageDomainCommand] (DefaultQuartzScheduler6) [5d314e49] Ending command 'org.ovirt.engine.core.bll.storage.ovfstore.CreateOvfVolumeForStorageDomainCommand' with failure.
2016-08-17 08:43:33,596 ERROR [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (DefaultQuartzScheduler6) [5d314e49] Ending command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' with failure.
2016-08-17 08:43:33,596 WARN [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (DefaultQuartzScheduler6) [5d314e49] VmCommand::EndVmCommand: Vm is null - not performing endAction on Vm
2016-08-17 08:43:33,602 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler6) [5d314e49] Correlation ID: 5d314e49, Call Stack: null, Custom Event ID: -1, Message: Add-Disk operation failed to complete.
2016-08-17 08:43:33,605 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler6) [] Correlation ID: 5f5a8daf, Call Stack: null, Custom Event ID: -1, Message: Failed to create OVF store disk for Storage Domain hosted_storage. OVF data won't be updated meanwhile for that domain.
2016-08-17 08:43:36,460 INFO [org.ovirt.engine.core.bll.scheduling.HaReservationHandling] (DefaultQuartzScheduler7) [5d314e49] HA reservation status for cluster 'Default' is 'OK'
2016-08-17 08:43:36,662 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (DefaultQuartzScheduler4) [5f5a8daf] Command 'ProcessOvfUpdateForStorageDomain' id: '71aaaafe-7b9e-45e8-a40c-6d33bdf646a0' child commands '[84959a4b-6a10-4d22-b37e-6c154e17a0da, eb2e6f1a-c756-4ccd-85a1-60d97d6880de]' executions were completed, status 'FAILED'
2016-08-17 08:43:37,691 ERROR [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand] (DefaultQuartzScheduler6) [5f5a8daf] Ending command 'org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStorageDomainCommand' with failure.

Can you please check vdsm logs for that time frame on the SPM host?
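On the SPM host, grepping vdsm.log around that time frame for the volume creation should show the actual error, for instance:

  # on the SPM host; the command/task UUIDs above can also be used as search keys
  grep -iE 'createVolume|OVF_STORE|Traceback' /var/log/vdsm/vdsm.log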
I sent the vdsm logs from both hosts as attachments, but I think the SPM host in that time frame was ied-blade13.
It seems that you also have an issue in the SPM election procedure:

2016-08-17 18:04:31,053 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (DefaultQuartzScheduler1) [] SPM Init: could not find reported vds or not up - pool: 'Default' vds_spm_id: '2'
2016-08-17 18:04:31,076 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (DefaultQuartzScheduler1) [] SPM selection - vds seems as spm 'hosted_engine_2'
2016-08-17 18:04:31,076 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (DefaultQuartzScheduler1) [] spm vds is non responsive, stopping spm selection.
2016-08-17 18:04:31,539 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler7) [] Fetched 1 VMs from VDS '06372186-572c-41ad-916f-7cbb0aba5302'

probably due to:

2016-08-17 18:02:33,569 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (DefaultQuartzScheduler6) [] Failure to refresh Vds runtime info: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues
2016-08-17 18:02:33,569 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (DefaultQuartzScheduler6) [] Exception: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: VDSGenericException: VDSNetworkException: Message timeout which can be caused by communication issues
These messages may be caused by the connection issues from yesterday at 6pm.
Can you please check if the engine VM can correctly resolve and reach each host?
Now I can reach the engine VM from both hosts:

[root@ied-blade11 ~]# ping ied-hosted-engine
PING ied-hosted-engine.install.eurotux.local (10.10.4.115) 56(84) bytes of data.
64 bytes from ied-hosted-engine.install.eurotux.local (10.10.4.115): icmp_seq=1 ttl=64 time=0.179 ms
64 bytes from ied-hosted-engine.install.eurotux.local (10.10.4.115): icmp_seq=2 ttl=64 time=0.141 ms
^C
--- ied-hosted-engine.install.eurotux.local ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.141/0.160/0.179/0.019 ms

[root@ied-blade13 ~]# ping ied-hosted-engine
PING ied-hosted-engine.install.eurotux.local (10.10.4.115) 56(84) bytes of data.
64 bytes from ied-hosted-engine.install.eurotux.local (10.10.4.115): icmp_seq=1 ttl=64 time=0.172 ms
64 bytes from ied-hosted-engine.install.eurotux.local (10.10.4.115): icmp_seq=2 ttl=64 time=0.169 ms
^C
--- ied-hosted-engine.install.eurotux.local ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.169/0.170/0.172/0.013 ms

I have a critical message about low disk space on the hosted_storage domain. I have 50G of disk and I created a VM with 40G. Do I need more space for the OVF_STORE? What are the minimum disk space requirements for deploying the engine VM?

Regards,
Carlos
Regards, Carlos Rodrigues
-- 
Carlos Rodrigues 

Engenheiro de Software Sénior

Eurotux Informática, S.A. | www.eurotux.com
(t) +351 253 680 300 (m) +351 911 926 110