Export Domain no show
by Rajat Patel
Hi oVirt,
We are using an oVirt 4.1 self-hosted engine with NFS storage attached as
data/ISO/export domains. We have one image that we want to import
(manageiq-ovirt-fine-4.qc2). We copied it to our export location
(/export/3157c57b-8f6a-4709-862a-713bfa59899a) and changed the ownership
(chown -R 36:36 manageiq-ovirt-fine-4.qc2). The issue is that we are not able
to see it in the oVirt UI -> Storage -> export -> VM Import, nor under
Template Import. At the same time we see no errors in the logs.
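A quick sanity check on the export mount would be (assuming the file sits
directly under that directory; the qemu-img step only confirms the file is a
readable qcow2, and 36:36 is vdsm:kvm):
# ls -lZ /export/3157c57b-8f6a-4709-862a-713bfa59899a/manageiq-ovirt-fine-4.qc2
# qemu-img info /export/3157c57b-8f6a-4709-862a-713bfa59899a/manageiq-ovirt-fine-4.qc2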
Regards
Techieim
ovirt-engine installation problem
by David David
Hello.
CentOS Linux release 7.4.1708 (Core)
# yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
# yum install ovirt-engine
--> Finished Dependency Resolution
Error: Package: systemd-python-219-42.el7.x86_64 (base)
Requires: systemd = 219-42.el7
Installed: systemd-219-42.el7_4.4.x86_64 (@updates)
systemd = 219-42.el7_4.4
Available: systemd-219-42.el7.x86_64 (base)
systemd = 219-42.el7
Error: Package: glibc-2.17-196.el7.i686 (base)
Requires: glibc-common = 2.17-196.el7
Installed: glibc-common-2.17-196.el7_4.2.x86_64 (@updates)
glibc-common = 2.17-196.el7_4.2
Available: glibc-common-2.17-196.el7.x86_64 (base)
glibc-common = 2.17-196.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
# yum clean all
# yum update
- these didn't help.
How can I fix this?
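One thing still to try, assuming the errors come from stale repository metadata
on a partially synced mirror (a guess, not a confirmed fix):
# yum clean all
# rm -rf /var/cache/yum
# yum makecache fast
# yum install ovirt-engine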
Move disk between domains
by Matthew DeBoer
When I try to move a specific disk between storage domains, I get an error:
2017-12-08 11:26:05,257-06 ERROR
[org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
(default task-41) [] Permutation name: 8C01181C3B121D0AAE1312275CC96415
2017-12-08 11:26:05,257-06 ERROR
[org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService]
(default task-41) [] Uncaught exception:
com.google.gwt.core.client.JavaScriptException: (TypeError)
__gwt$exception: <skipped>: Cannot read property 'F' of null
at org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocationModel$3.$onSuccess(DisksAllocationModel.java:120)
at org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocationModel$3.onSuccess(DisksAllocationModel.java:120)
at org.ovirt.engine.ui.frontend.Frontend$2.$onSuccess(Frontend.java:233) [frontend.jar:]
at org.ovirt.engine.ui.frontend.Frontend$2.onSuccess(Frontend.java:233) [frontend.jar:]
at org.ovirt.engine.ui.frontend.communication.OperationProcessor$2.$onSuccess(OperationProcessor.java:139) [frontend.jar:]
at org.ovirt.engine.ui.frontend.communication.OperationProcessor$2.onSuccess(OperationProcessor.java:139) [frontend.jar:]
at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.$onSuccess(GWTRPCCommunicationProvider.java:269) [frontend.jar:]
at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider.java:269) [frontend.jar:]
at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.onResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:]
at com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:237) [gwt-servlet.jar:]
at com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409) [gwt-servlet.jar:]
at Unknown.eval(webadmin-0.js@65)
at com.google.gwt.core.client.impl.Impl.apply(Impl.java:296) [gwt-servlet.jar:]
at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:335) [gwt-servlet.jar:]
at Unknown.eval(webadmin-0.js@54)
I can move all the other disks.
I think the issue here is how I got this storage domain into oVirt.
I set up a new cluster using 4.1 coming from 3.6.
I imported a domain from the 3.6 cluster. I am trying to move this disk to
one of the new storage domains on the 4.1 cluster.
Any help would be greatly appreciated
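One way to rule out a webadmin-only problem (the trace above is a GWT null
reference in the UI) would be to try the same move through the REST API; a
sketch with placeholder credentials, engine FQDN and UUIDs that would need to
be replaced:
curl -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' \
  -d '<action><storage_domain id="TARGET_SD_UUID"/></action>' \
  'https://ENGINE_FQDN/ovirt-engine/api/disks/DISK_UUID/move'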
import thin provisioned disks with image upload
by Gianluca Cecchi
On Thu, Dec 7, 2017 at 1:28 AM, Nir Soffer <nsoffer(a)redhat.com> wrote:
>
>
> On Thu, Dec 7, 2017 at 1:22 AM Gianluca Cecchi <gianluca.cecchi(a)gmail.com>
> wrote:
>
>> On Wed, Dec 6, 2017 at 11:42 PM, Nir Soffer <nsoffer(a)redhat.com> wrote:
>>
>>>
>>>
>>>>
>>>> BTW: I notice that the disk seems preallocated even if the original qcow2
>>>> is thin... is this expected?
>>>> This obviously also impacts the time to upload (a 20 GB virtual disk with
>>>> only 1.2 GB actually occupied needs the time equivalent of the full 20 GB...)
>>>>
>>>
>>> We upload exactly the file you provided, there is no way we can upload
>>> 20G from 1.2G file :-)
>>>
>>
>> But the upload process, at a medium rate of 40-50 MB/s, lasted about 9
>> minutes, which confirms the 20 GB size.
>> The source disk was created as virtio type and qcow2 format from
>> virt-manager, and then I only installed a CentOS 7.2 OS with the
>> infrastructure server configuration.
>> Apart from qemu-img, ls also shows it:
>> # ls -lhs c7lab1.qcow2
>> 1.3G -rw------- 1 root root 21G Dec 6 23:05 c7lab1.qcow2
>>
>
> The file size is 21G - matching what you see. This is the size we upload.
>
I changed the subject to better describe and track the thread.
You are not convincing me...
The term "file size" is ambiguous in this context...
If I take a disk that was born on this oVirt environment itself I have this:
https://drive.google.com/file/d/1Dtb69EKa8adNYiwUDs2d-plSMZIM0Bzr/view?us...
which shows that it is thin provisioned.
Its ID is shown here:
https://drive.google.com/file/d/1PvLjJVQ3JR4_6v5Da7ATac5ReEObxH-A/view?us...
If I go and check on my exported file system, I get this:
[root@ovirt01 NFS_DOMAIN]# find . -name
"3c68d43f-0f28-4564-b557-d390a125daa6"
./572eabe7-15d0-42c2-8fa9-0bd773e22e2e/images/3c68d43f-0f28-4564-b557-d390a125daa6
[root@ovirt01 NFS_DOMAIN]# ls -lsh
./572eabe7-15d0-42c2-8fa9-0bd773e22e2e/images/3c68d43f-0f28-4564-b557-d390a125daa6
total 8.6G
8.6G -rw-rw----. 1 vdsm kvm 10G Dec 7 08:42
09ad8e53-0b22-4fe3-b718-d14352b8290a
1.0M -rw-rw----. 1 vdsm kvm 1.0M May 1 2016
09ad8e53-0b22-4fe3-b718-d14352b8290a.lease
4.0K -rw-r--r--. 1 vdsm kvm 317 Jun 21 10:41
09ad8e53-0b22-4fe3-b718-d14352b8290a.meta
[root@ovirt01 NFS_DOMAIN]# qemu-img info
./572eabe7-15d0-42c2-8fa9-0bd773e22e2e/images/3c68d43f-0f28-4564-b557-d390a125daa6/09ad8e53-0b22-4fe3-b718-d14352b8290a
image:
./572eabe7-15d0-42c2-8fa9-0bd773e22e2e/images/3c68d43f-0f28-4564-b557-d390a125daa6/09ad8e53-0b22-4fe3-b718-d14352b8290a
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 8.5G
[root@ovirt01 NFS_DOMAIN]#
So it seems it is raw/sparse
https://www.ovirt.org/documentation/admin-guide/chap-Virtual_Machine_Disks/
>
> 1.3G is the used size on the file system, we cannot upload only used
> blocks.
> qemu-img info "Disk size" is not the file size but the used size, which is
> not useful for upload.
>
Why not? You can detect the format of the source and use a strategy similar to
the one in the bugzilla I'm referring to (see below).
Or couldn't you use something like
rsync -av --sparse source target
if the target is NFS?
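For example, something along these lines should keep the image sparse when
writing onto an NFS-mounted path (a sketch with made-up paths; qemu-img convert
skips zeroed blocks when producing raw output):
qemu-img convert -p -f qcow2 -O raw c7lab1.qcow2 /mnt/nfs_target/c7lab1.img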
> Maybe this file was created with preallocation=full?
>
virt-manager does not apply this logic by default.
The default format is qcow2/sparse.
When you create/add a disk you can choose "Select or create custom storage":
https://drive.google.com/file/d/1gAwjAG5aRFC5fFgkoZXT5wINvQnov88p/view?us...
And then choose a format between:
raw, qcow, qcow2, qed, vmdk, vpc, vdi
In my case it was the default one, so qcow2, as qemu-img correctly detects.
>
>>>>
>> First message at beginning of upload:
>>
>> 2017-12-06 23:09:50,183+0100 INFO (jsonrpc/4) [vdsm.api] START
>> createVolume(sdUUID=u'572eabe7-15d0-42c2-8fa9-0bd773e22e2e',
>> spUUID=u'00000001-0001-0001-0001-000000000343',
>> imgUUID=u'251063f6-5570-4bdc-b28f-21e82aa5e185', size=u'22548578304',
>> volFormat=4, preallocate=2, diskType=2, volUUID=u'77aacfb3-9e67-4ad2-96f6-242b5ba4d9e0',
>> desc=u'{"DiskAlias":"c7lab1","DiskDescription":""}',
>> srcImgUUID=u'00000000-0000-0000-0000-000000000000',
>> srcVolUUID=u'00000000-0000-0000-0000-000000000000', initialSize=None)
>> from=192.168.1.212,56846, flow_id=18c6bd3b-76ab-45f9-b8c7-09c727f44c91,
>> task_id=e7cc67e6-4b61-4bb3-81b1-6bc687ea5ee9 (api:46)
>>
>>
In this bugzilla, which is related to export from NFS to block storage:
https://bugzilla.redhat.com/show_bug.cgi?id=1358717
it seems, from a comment by Maor, that
preallocate=2 --> sparse
What are instead the possible values of volFormat, and what do they mean?
In my case it reports
volFormat=4,
Could it be something similar to the bugzilla above? Would anything change
if exporting with NFS 4.2 or using oVirt 4.2?
I think that if one has big qcow2 disk files that they want to migrate
into oVirt, the process could be optimized.
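One way to check how vdsm decodes these constants is to grep its own sources on
a host (the install path below is an assumption for a CentOS 7 / oVirt 4.1 host):
# grep -rn "COW_FORMAT\|RAW_FORMAT\|SPARSE_VOL\|PREALLOCATED_VOL" \
    /usr/lib/python2.7/site-packages/vdsm/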
Thanks,
Gianluca
Re: [ovirt-users] Failed deply of ovirt-engine using ovirt-node-ng-installer-ovirt-4.2-pre-2017120512.iso
by Simone Tiraboschi
On Wed, Dec 6, 2017 at 11:38 AM, Roberto Nunin <robnunin(a)gmail.com> wrote:
> Ciao Simone
> thanks for the really quick answer.
>
> 2017-12-06 11:05 GMT+01:00 Simone Tiraboschi <stirabos(a)redhat.com>:
>
>> Ciao Roberto,
>>
>> On Wed, Dec 6, 2017 at 10:02 AM, Roberto Nunin <robnunin(a)gmail.com>
>> wrote:
>>
>>> I'm having trouble deploying a three-host hyperconverged lab using the
>>> ISO image named above.
>>>
>>
>> Please note that ovirt-node-ng-installer-ovirt-4.2-pre-2017120512.iso is
>> still pre-release software.
>> Your contribution in testing it is really appreciated!
>>
>
> It's a pleasure!
>
>
>>
>>
>>
>>>
>>> My test environment is based on HPE BL680cG7 blade servers.
>>> These servers have six physical 10Gb network interfaces (FlexNICs), each
>>> one with four profiles (Ethernet, FCoE, iSCSI, etc.).
>>>
>>> I chose one of these six physical interfaces (enp5s0f0) and assigned it a
>>> static IPv4 address, on each node.
>>>
>>> After the node reboot, the interface's ONBOOT parameter was still set to no.
>>> I changed it to yes via the iLO interface and restarted the network. Fine.
>>>
>>> After the Gluster setup with the gdeploy script under the Cockpit interface,
>>> avoiding the errors coming from
>>> /usr/share/gdeploy/scripts/blacklist_all_disks.sh, I started the
>>> hosted-engine deploy.
>>>
>>> With the new version, I'm having an error never seen before:
>>>
>>> The Engine VM (10.114.60.117) and this host (10.114.60.134/24) will not
>>> be in the same IP subnet. Static routing configuration are not supported on
>>> automatic VM configuration.
>>> Failed to execute stage 'Environment customization': The Engine VM
>>> (10.114.60.117) and this host (10.114.60.134/24) will not be in the
>>> same IP subnet. Static routing configuration are not supported on automatic
>>> VM configuration.
>>> Hosted Engine deployment failed.
>>>
>>> There's no input field for the HE subnet mask. Anyway, in our class C oVirt
>>> management network these ARE in the same subnet.
>>> How do I recover from this? I cannot add the /24 CIDR in the HE static IP
>>> address field; it isn't allowed.
>>>
>>
>> 10.114.60.117 and 10.114.60.134/24 are in the same IPv4 /24 subnet, so it
>> shouldn't fail.
>> The issue here seems different:
>>
>> From the hosted-engine-setup log I see that you passed the VM IP address via
>> the answer file:
>> 2017-12-06 09:14:30,195+0100 DEBUG otopi.context
>> context.dumpEnvironment:831 ENV OVEHOSTED_VM/cloudinitVMStatic
>> CIDR=str:'10.114.60.117'
>>
>> while the right syntax should be:
>> OVEHOSTED_VM/cloudinitVMStaticCIDR=str:10.114.60.117/24
>>
>> Did you write the answer file yourself, or did you enter the IP
>> address in the cockpit wizard? If so, we probably have a regression there.
>>
>
> I inserted it while providing data for the setup, using the Cockpit interface.
> I tried to add the CIDR (/24), but it isn't allowed by the Cockpit web interface.
> No manual update of the answer file.
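For reference, until the cockpit field accepts a CIDR, the value could be passed
through an answer file appended on the command line (a sketch; the key and
syntax come from Simone's message above, the file name is arbitrary):
cat > /root/he-answers.conf <<EOF
[environment:default]
OVEHOSTED_VM/cloudinitVMStaticCIDR=str:10.114.60.117/24
EOF
hosted-engine --deploy --config-append=/root/he-answers.conf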
>
>>
>>
>>
>>>
>>> Moreover, the VM FQDN is asked for twice during the deploy process. Is
>>> that correct?
>>>
>>
>> No, I don't think so, but I don't see it from your logs.
>> Could you please explain it?
>>
>
> Yes: the first time it is requested is during the initial setup of the HE VM deploy.
>
> The second one, instead, is asked (at least of me) in this step, after the
> initial setup:
>
So both on cockpit side?
>
> [image: embedded image 1]
>
>>
>>
>>>
>>> Some additional, general questions:
>>> Must NetworkManager be disabled when deploying the HCI solution? In my
>>> attempt, it wasn't disabled.
>>>
>>
> Simone, could you confirm whether or not NM must stay in place while
> deploying? This question has been around since 3.6... what is the "best
> practice"?
> All of my RHV environments (3.6 - 4.0.1 - 4.1.6) have it disabled, but
> I wasn't able to find any mandatory rule.
>
In early 3.6 you had to disable it but now you can safely keep it on.
>
>
>> Is there some document to follow to perform a correct deploy?
>>> Is this one still "valid"?:
>>> https://ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
>>>
>>> Attached hosted-engine-setup log.
>>> TIA
>>>
>>>
>>>
>>> --
>>> Roberto
>>> 110-006-970
>>>
>>>
>>>
>>
>
>
> --
> Roberto Nunin
>
>
>
>
Re: [ovirt-users] 4-2rc hosted-engine don't boot error:cannot allocate kernel buffer
by Maton, Brett
Really short version (I can't find the link to the oVirt doc at the moment):
Put the hosted engine into global maintenance and power off the VM
(using the hosted-engine command).
On one of your physical hosts, make a copy of the HE config and update the
memory:
cp /var/run/ovirt-hosted-engine-ha/vm.conf .
vim vm.conf
Then start the hosted engine with the new config:
hosted-engine --vm-start --vm-conf=./vm.conf
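Putting those steps together in one sequence (a sketch, assuming the memory key
in vm.conf is memSize and is expressed in MiB, as in a stock hosted-engine
vm.conf; 4096 gives the VM 4 GiB back):
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-poweroff
cp /var/run/ovirt-hosted-engine-ha/vm.conf .
sed -i 's/^memSize=.*/memSize=4096/' vm.conf
hosted-engine --vm-start --vm-conf=./vm.conf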
On 11 December 2017 at 10:36, Roberto Nunin <robnunin(a)gmail.com> wrote:
>
>
> 2017-12-11 10:32 GMT+01:00 Maton, Brett <matonb(a)ltresources.co.uk>:
>
>> Hi Roberto, can you check how much RAM is allocated to the HE VM?
>>
>>
>> virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf
>>
>> virsh # dominfo HostedEngine
>>
>>
>> The last update I did seems to have changed the HE RAM from 4GB to 4MB!
>>
>
>
> Yes, you're right...
>
> virsh # dominfo HostedEngine
> Id: 191
> Name: HostedEngine
> UUID: 6831dd96-af48-4673-ac98-f1b9ba60754b
> OS Type: hvm
> State: running
> CPU(s): 4
> CPU time: 9053.7s
> Max memory: 4096 KiB
> Used memory: 4096 KiB
> Persistent: yes
> Autostart: disable
> Managed save: no
> Security model: selinux
> Security DOI: 0
> Security label: system_u:system_r:svirt_t:s0:c201,c408 (enforcing)
>
>
>>
>> On 11 December 2017 at 09:08, Simone Tiraboschi <stirabos(a)redhat.com>
>> wrote:
>>
>>>
>>>
>>> On Mon, Dec 11, 2017 at 9:47 AM, Roberto Nunin <robnunin(a)gmail.com>
>>> wrote:
>>>
>>>> Hello all
>>>>
>>>> during the weekend, I re-tried to deploy my 4.2_rc lab.
>>>> Everything was fine, apart from the fact that hosts 2 and 3 weren't imported.
>>>> I had to add them to the cluster manually, with the NEW function.
>>>> After this, the Gluster volumes were added fine to the environment.
>>>>
>>>> The next engine deploy on nodes 2 and 3 ended with OK status.
>>>>
>>>> Trying to migrate the HE from host 1 to host 2 was fine, the same from
>>>> host 2 to host 3.
>>>>
>>>> After these two attempts, there was no way to migrate the HE back to any host.
>>>> I tried setting maintenance mode to global and rebooting the HE, and now I'm
>>>> in the same condition reported below, no longer able to boot the HE.
>>>>
>>>> Here's hosted-engine --vm-status:
>>>>
>>>> !! Cluster is in GLOBAL MAINTENANCE mode !!
>>>>
>>>>
>>>>
>>>> --== Host 1 status ==--
>>>>
>>>> conf_on_shared_storage : True
>>>> Status up-to-date : True
>>>> Hostname : aps-te61-mng.example.com
>>>> Host ID : 1
>>>> Engine status : {"reason": "vm not running on this
>>>> host", "health": "bad", "vm": "down", "detail": "unknown"}
>>>> Score : 3400
>>>> stopped : False
>>>> Local maintenance : False
>>>> crc32 : 7dfc420b
>>>> local_conf_timestamp : 181953
>>>> Host timestamp : 181952
>>>> Extra metadata (valid at timestamp):
>>>> metadata_parse_version=1
>>>> metadata_feature_version=1
>>>> timestamp=181952 (Mon Dec 11 09:21:46 2017)
>>>> host-id=1
>>>> score=3400
>>>> vm_conf_refresh_time=181953 (Mon Dec 11 09:21:47 2017)
>>>> conf_on_shared_storage=True
>>>> maintenance=False
>>>> state=GlobalMaintenance
>>>> stopped=False
>>>>
>>>>
>>>> --== Host 2 status ==--
>>>>
>>>> conf_on_shared_storage : True
>>>> Status up-to-date : True
>>>> Hostname : aps-te64-mng.example.com
>>>> Host ID : 2
>>>> Engine status : {"reason": "vm not running on this
>>>> host", "health": "bad", "vm": "down", "detail": "unknown"}
>>>> Score : 3400
>>>> stopped : False
>>>> Local maintenance : False
>>>> crc32 : 67c7dd1d
>>>> local_conf_timestamp : 181946
>>>> Host timestamp : 181946
>>>> Extra metadata (valid at timestamp):
>>>> metadata_parse_version=1
>>>> metadata_feature_version=1
>>>> timestamp=181946 (Mon Dec 11 09:21:49 2017)
>>>> host-id=2
>>>> score=3400
>>>> vm_conf_refresh_time=181946 (Mon Dec 11 09:21:49 2017)
>>>> conf_on_shared_storage=True
>>>> maintenance=False
>>>> state=GlobalMaintenance
>>>> stopped=False
>>>>
>>>>
>>>> --== Host 3 status ==--
>>>>
>>>> conf_on_shared_storage : True
>>>> Status up-to-date : True
>>>> Hostname : aps-te68-mng.example.com
>>>> Host ID : 3
>>>> Engine status : {"reason": "failed liveliness
>>>> check", "health": "bad", "vm": "up", "detail": "Up"}
>>>> Score : 3400
>>>> stopped : False
>>>> Local maintenance : False
>>>> crc32 : 4daea041
>>>> local_conf_timestamp : 181078
>>>> Host timestamp : 181078
>>>> Extra metadata (valid at timestamp):
>>>> metadata_parse_version=1
>>>> metadata_feature_version=1
>>>> timestamp=181078 (Mon Dec 11 09:21:53 2017)
>>>> host-id=3
>>>> score=3400
>>>> vm_conf_refresh_time=181078 (Mon Dec 11 09:21:53 2017)
>>>> conf_on_shared_storage=True
>>>> maintenance=False
>>>> state=GlobalMaintenance
>>>> stopped=False
>>>>
>>>>
>>>> !! Cluster is in GLOBAL MAINTENANCE mode !!
>>>>
>>>> (it is in global maintenance to avoid messages to be sent to admin
>>>> mailbox).
>>>>
>>>
>>> As soon as you exit the global maintenance mode, one of the hosts should
>>> take care of automatically restarting the engine VM within a couple of
>>> minutes.
>>>
>>> If you want to manually start the engine VM over a specific host while
>>> in maintenance mode you can use:
>>> hosted-engine --vm-start
>>> on the specific host
>>>
>>>
>>>>
>>>> Engine image is available on all three hosts, gluster is working fine:
>>>>
>>>> Volume Name: engine
>>>> Type: Replicate
>>>> Volume ID: 95355a0b-1f45-4329-95c7-604682e812d0
>>>> Status: Started
>>>> Snapshot Count: 0
>>>> Number of Bricks: 1 x 3 = 3
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: aps-te61-mng.example.com:/gluster_bricks/engine/engine
>>>> Brick2: aps-te64-mng.example.com:/gluster_bricks/engine/engine
>>>> Brick3: aps-te68-mng.example.com:/gluster_bricks/engine/engine
>>>> Options Reconfigured:
>>>> nfs.disable: on
>>>> transport.address-family: inet
>>>> performance.quick-read: off
>>>> performance.read-ahead: off
>>>> performance.io-cache: off
>>>> performance.low-prio-threads: 32
>>>> network.remote-dio: off
>>>> cluster.eager-lock: enable
>>>> cluster.quorum-type: auto
>>>> cluster.server-quorum-type: server
>>>> cluster.data-self-heal-algorithm: full
>>>> cluster.locking-scheme: granular
>>>> cluster.shd-max-threads: 8
>>>> cluster.shd-wait-qlength: 10000
>>>> features.shard: on
>>>> user.cifs: off
>>>> storage.owner-uid: 36
>>>> storage.owner-gid: 36
>>>> network.ping-timeout: 30
>>>> performance.strict-o-direct: on
>>>> cluster.granular-entry-heal: enable
>>>> features.shard-block-size: 64MB
>>>>
>>>> Engine qemu image seems to be ok:
>>>>
>>>> [root@aps-te68-mng 9998de26-8a5c-4495-b450-c8dfc1e016da]# l
>>>> total 2660796
>>>> -rw-rw----. 1 vdsm kvm 53687091200 Dec 8 17:22
>>>> 35ac0f88-e97d-4710-a385-127c751a3190
>>>> -rw-rw----. 1 vdsm kvm 1048576 Dec 11 09:04
>>>> 35ac0f88-e97d-4710-a385-127c751a3190.lease
>>>> -rw-r--r--. 1 vdsm kvm 285 Dec 8 11:19
>>>> 35ac0f88-e97d-4710-a385-127c751a3190.meta
>>>> [root@aps-te68-mng 9998de26-8a5c-4495-b450-c8dfc1e016da]# qemu-img
>>>> info 35ac0f88-e97d-4710-a385-127c751a3190
>>>> image: 35ac0f88-e97d-4710-a385-127c751a3190
>>>> file format: raw
>>>> virtual size: 50G (53687091200 bytes)
>>>> disk size: 2.5G
>>>> [root@aps-te68-mng 9998de26-8a5c-4495-b450-c8dfc1e016da]#
>>>>
>>>> Attached are the agent and broker logs from the last host where HE startup
>>>> was attempted. Only the last two hours are included.
>>>>
>>>> Any hints for further investigation?
>>>>
>>>> Thanks in advance.
>>>>
>>>> Environment: 3 HPE Proliant BL680cG7, OS on mirrored volume 1, gluster
>>>> on mirrored volume 2, 1 TB for each server.
>>>> Multiple network adapters (6), only one configured.
>>>> I've used the latest ovirt-node-ng ISO image,
>>>> ovirt-node-ng-installer-ovirt-4.2-pre-2017120512.iso, and
>>>> ovirt-hosted-engine-setup-2.2.1-0.0.master.20171206123553.git94f4c9e.el7.centos.noarch
>>>> to work around the HE static IP address not being masked correctly.
>>>>
>>>>
>>>> --
>>>> Roberto
>>>>
>>>> 2017-12-07 12:44 GMT+01:00 Roberto Nunin <robnunin(a)gmail.com>:
>>>>
>>>>> Hi
>>>>>
>>>>> after successfully deploying a fresh 4.2_rc with oVirt Node, I'm facing
>>>>> a blocking problem.
>>>>>
>>>>> The hosted engine won't boot. Reaching the console via the VNC hook, I can
>>>>> see that it is at the initial boot screen, but for any OS release available,
>>>>> I receive:
>>>>>
>>>>> [image: embedded image 1]
>>>>> then
>>>>> [image: embedded image 2]
>>>>>
>>>>> Googling around, I'm not able to find suggestions. Any hints?
>>>>>
>>>>> Thanks
>>>>>
>>>>> --
>>>>> Roberto
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>
>
> --
> Roberto Nunin
>
>
>
4-2rc hosted-engine don't boot error:cannot allocate kernel buffer
by Roberto Nunin
Hi
after successfully deploying a fresh 4.2_rc with oVirt Node, I'm facing a
blocking problem.
The hosted engine won't boot. Reaching the console via the VNC hook, I can see
that it is at the initial boot screen, but for any OS release available, I receive:
[image: embedded image 1]
then
[image: embedded image 2]
Googling around, I'm not able to find suggestions. Any hints?
Thanks
--
Roberto
Best Practice Question: How many engines, one or more than one, for multiple physical locations
by Matt Simonsen
Hello all,
I read that with Gluster hyper-convergence the engine must reside
on the same LAN as the nodes. I guess this makes sense by definition -
i.e. using Gluster storage and replicating Gluster bricks across the web
sounds awful.
This got me wondering about best practices for the engine setup. We have
multiple physical locations (co-location data centers).
In my initial plan I had expected to have my oVirt engine hosted
separately from each physical location so that in the event of trouble
at a remote facility the engine would still be usable.
In this case, our prod sites would not have a "hyper-converged" setup if
we decide to run GlusterFS for storage at any particular physical site,
but I believe it would still be possible to use Gluster. oVirt would then
have a 3-node cluster using GlusterFS storage, but not hyper-converged,
since the engine would be in a separate facility.
Is there any downside in this setup to having the engine off-site?
Rather than having an off-site engine, should I consider one engine per
physical co-location space?
Thank you all for any feedback,
Matt
Scheduling daily Snapshot
by Jason Lelievre
Hello,
What is the best way to set up a daily live snapshot for all VMs, and have
the possibility to recover, for example, a specific VM to a specific day?
I use a hyperconverged infrastructure with 3 nodes and Gluster storage.
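As far as I know there is no built-in per-VM snapshot scheduler in the web UI,
so one common approach is a cron job that calls the REST API; a minimal sketch
with placeholder credentials, engine FQDN and VM UUID:
# create a snapshot of one VM via the REST API
curl -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' \
  -d '<snapshot><description>daily snapshot</description></snapshot>' \
  'https://ENGINE_FQDN/ovirt-engine/api/vms/VM_UUID/snapshots'
# a crontab entry to run a wrapper script (hypothetical path) every night at 02:00
0 2 * * * /usr/local/bin/daily-snapshot.sh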
Thank you,