Valid approach to backup/DR with Geo Replication?
by Michael Kleinpaste
Has anyone used Geo Replication to facilitate backups/DR? I'm looking
to validate my thinking, since Geo Replication is now a feature in oVirt.
I'm considering this instead of the snap, clone, export, backup, delete
clone, delete snap method.
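(For context, the flow I would be replacing looks roughly like this against the
v4 REST API -- engine URL, credentials, names and IDs below are placeholders,
and each step has to be polled for completion before starting the next:)

  # 1. snapshot the VM
  curl -k -u admin@internal:PASS -H 'Content-Type: application/xml' \
    -d '<snapshot><description>nightly-backup</description></snapshot>' \
    https://ENGINE/ovirt-engine/api/vms/VMID/snapshots
  # 2. clone a new VM from that snapshot
  curl -k -u admin@internal:PASS -H 'Content-Type: application/xml' \
    -d '<vm><name>backup_clone</name><cluster><name>Default</name></cluster><snapshots><snapshot id="SNAPID"/></snapshots></vm>' \
    https://ENGINE/ovirt-engine/api/vms
  # 3. export the clone to an export storage domain
  curl -k -u admin@internal:PASS -H 'Content-Type: application/xml' \
    -d '<action><storage_domain><name>export1</name></storage_domain></action>' \
    https://ENGINE/ovirt-engine/api/vms/CLONEID/export
  # 4. clean up: delete the clone, then the snapshot
  curl -k -u admin@internal:PASS -X DELETE https://ENGINE/ovirt-engine/api/vms/CLONEID
  curl -k -u admin@internal:PASS -X DELETE https://ENGINE/ovirt-engine/api/vms/VMID/snapshots/SNAPID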
Thoughts?
--
*Michael Kleinpaste*
Senior Systems Administrator
SharperLending, LLC.
www.SharperLending.com
Michael.Kleinpaste(a)SharperLending.com
(509) 324-1230 Fax: (509) 324-1234
7 years, 3 months
Failed to check for available updates on host *** failed with message...
by Manuel Luis Aznar
Hello there,
I am getting this error from the oVirt hosted engine:
2017-09-06 16:45:56,160+01 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(pool-7-thread-1) [4ff48065] EVENT_ID: HOST_AVAILABLE_UPDATES_FAILED(839),
Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Failed
to check for available updates on host host1.bajada.es with message 'SSH
authentication to 'root(a)host1.bajada.es' failed. Please verify provided
credentials. Make sure key is authorized at host'.
I am running oVirt Engine Version 4.1.1.8-1.el7.centos.
I suppose this error is related to the SSH public/private keys the engine
uses to reach the host. Am I right?
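In case it helps anyone answering: this is what I was planning to check. The
URL and key path below are taken from the docs as far as I can tell, so please
correct me if they are wrong.

  # on the host: make sure the engine's public key is in root's authorized_keys
  curl -k 'https://<engine_fqdn>/ovirt-engine/services/pki-resource?resource=engine-certificate&format=OPENSSH-PUBKEY' \
    >> /root/.ssh/authorized_keys
  chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys
  # on the engine machine: test the key the engine uses for host access
  ssh -i /etc/pki/ovirt-engine/keys/engine_id_rsa root@host1.bajada.es true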
Thanks in advance
Any help would be appreciated
Manuel Luis Aznar
7 years, 3 months
Re: [ovirt-users] disk attachment to VM
by Benny Zlotnik
Hi,
Look at [1]; however, there are caveats, so be sure to pay close attention to
the warning section.
[1] - https://github.com/oVirt/vdsm/blob/master/vdsm_hooks/localdisk/README
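From memory, the setup looks roughly like the following; the README above is
the authoritative source, so double-check every step (package availability,
property values and the local volume group the hook expects) against it.

  # engine side: expose the hook's custom property and restart the engine
  engine-config -s 'UserDefinedVMProperties=localdisk=^(lvm)$'
  systemctl restart ovirt-engine
  # host side: install the hook (vdsm-hook-localdisk where packaged, otherwise
  # copy it from vdsm_hooks/localdisk) and prepare the local VG it requires.
  # Finally, set the 'localdisk' custom property on the VM itself.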
On Tue, Sep 5, 2017 at 4:52 PM, Benny Zlotnik <bzlotnik(a)redhat.com> wrote:
> Hi,
>
> Look at [1]; however, there are caveats, so be sure to pay close attention
> to the warning section.
>
> [1] - https://github.com/oVirt/vdsm/blob/master/vdsm_hooks/localdisk/README
>
>
> On Tue, Sep 5, 2017 at 4:40 PM, Erekle Magradze <
> erekle.magradze(a)recogizer.de> wrote:
>
>> Hey Guys,
>> Is there a way to attach an SSD directly to the oVirt VM?
>> Thanks in advance
>> Cheers
>> Erekle
7 years, 3 months
cpu, core and thread mappings
by Gianluca Cecchi
Hello,
I was talking with a VMware expert, discussing VM performance with respect to
the virtual CPUs assigned to them and how those map to the real hardware of
the hypervisor underneath.
One of the topics was NUMA usage and its overhead when a VM is "too" big, in
terms of both the number of vCPUs and the amount of memory.
E.g.:
suppose the host has 2 Intel sockets, each with 6 cores and HT enabled, and
96 GB of RAM (distributed 48+48 between the 2 processors).
Suppose I configure a VM with 16 vCPUs (2:4:2): would the mapping be respected
at the physical level, or is it only a sort of "hint" for the hypervisor?
Can I say that it would perform better if I configured it with 12 vCPUs mapped
1:6:2, because it could then stay entirely within one CPU?
And what if I define a VM with 52 GB of RAM? Can I say that it would in
general perform better if I try to fit it all into the memory attached to one
CPU (e.g. not more than 48 GB in my example)?
Are there any documents that go more deeply into these sorts of considerations?
Also, if one sizes things so that the biggest VM can stay entirely within one
CPU and its memory, does it make sense to say that a cluster composed of 4
nodes, each with 1 socket and 48 GB of memory, would perform better in this
scenario than a cluster of 2 nodes, each with 2 sockets and 96 GB of RAM?
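(As a reference point, the physical layout and a running VM's actual placement
can be checked on a host with standard tools; the VM name and qemu PID below
are placeholders:)

  lscpu | grep -E 'Socket|Core|Thread|NUMA'   # sockets/cores/threads and NUMA nodes
  numactl --hardware                          # memory per NUMA node
  virsh -r vcpuinfo VM_NAME                   # where each vCPU is actually running
  numastat -p QEMU_PID                        # which nodes hold the VM's memory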
I hope I have made my questions/doubts clear.
Thanks in advance for any insight,
Gianluca
7 years, 3 months
Re: [ovirt-users] Failed to import the Hosted Engine Storage Domain
by Simone Tiraboschi
On Mon, Sep 4, 2017 at 2:21 PM, Arsène Gschwind <arsene.gschwind(a)unibas.ch>
wrote:
>
>
> On 09/04/2017 02:01 PM, Simone Tiraboschi wrote:
>
>
>
> On Mon, Sep 4, 2017 at 1:55 PM, Arsène Gschwind <arsene.gschwind(a)unibas.ch
> > wrote:
>
>>
>>
>> On 09/04/2017 01:52 PM, Simone Tiraboschi wrote:
>>
>>
>>
>> On Mon, Sep 4, 2017 at 12:23 PM, Arsène Gschwind <
>> arsene.gschwind(a)unibas.ch> wrote:
>>
>>> Hi Simone,
>>>
>>> On 09/04/2017 11:14 AM, Simone Tiraboschi wrote:
>>>
>>>
>>>
>>> On Mon, Sep 4, 2017 at 10:56 AM, Arsène Gschwind <
>>> arsene.gschwind(a)unibas.ch> wrote:
>>>
>>>> Hi Didi,
>>>>
>>>> On 09/04/2017 10:15 AM, Yedidyah Bar David wrote:
>>>>
>>>> On Mon, Sep 4, 2017 at 10:16 AM, Arsène Gschwind <arsene.gschwind(a)unibas.ch> wrote:
>>>>
>>>> Hi all,
>>>>
>>>> A while ago I had a problem with the hosted-engine network, which wasn't set
>>>> up correctly at deploy time, so I finally decided to redeploy the hosted engine
>>>> in the hope that the network would be set correctly this time. I followed this
>>>> procedure:
>>>>
>>>> Stop all VMs
>>>> Full backup of HE DB and export to safe place
>>>> Cleanup HE storage following https://access.redhat.com/solutions/2121581
>>>> Reboot Hosts
>>>> Re-deploy HE until DB recovery
>>>> Recover DB, adding the following params:
>>>> --he-remove-storage-vm Removes the hosted-engine storage
>>>> domain, all its entities and the hosted-engine VM during restore.
>>>> --he-remove-hosts Removes all the hosted-engine hosts
>>>> during restore.
>>>>
>>>> Finalize HE deployment.
>>>>
>>>> Everything ran without errors and I'm able to access the Web UI.
>>>>
>>>> But now I don't see my HE VM and its respective storage domain; the logs
>>>> say the engine isn't able to import it. I see all other SDs and I'm able to
>>>> manage my VMs as before.
>>>>
>>>> Please find attached engine.log
>>>>
>>>> I think this is your problem:
>>>>
>>>> 2017-09-04 03:26:14,272+02 INFO
>>>> [org.ovirt.engine.core.bll.storage.domain.AddExistingBlockStorageDomainCommand]
>>>> (org.ovirt.thread.pool-6-thread-24) [2383eaa0] There are existing luns
>>>> in the system which are part of VG id
>>>> 'vvIoS2-fZTZ-99Ox-Ltzq-pr8U-SL2z-wbTU8g'
>>>>
>>>> I don't see a VG with this ID; here are the IDs I see on the hosts:
>>>>
>>>> VG #PV #LV #SN Attr VSize VFree
>>>> 6b62cc06-fc44-4c38-af6d-bfd9cbe73246 1 10 0 wz--n- 99.62g 14.50g
>>>> b0414c06-d984-4001-a998-fd9a2e79fb83 2 70 0 wz--n- 10.00t 2.31t
>>>> b2e30961-7cff-4cca-83d6-bee3a4f890ee 2 47 0 wz--n- 5.27t 2.50t
>>>>
>>>
>>>
>>> Could you please repeat the command on host adm-kvmh70 ?
>>>
>>> 2017-09-04 09:04:18,163+02 INFO [org.ovirt.engine.core.bll.storage.domain.ImportHostedEngineStorageDomainCommand] (org.ovirt.thread.pool-6-thread-34) [247a3718] Running command: ImportHostedEngineStorageDomainCommand internal: true.
>>> 2017-09-04 09:04:18,189+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-6-thread-34) [7d2e6cb2] START, GetVGInfoVDSCommand(HostName = adm-kvmh70, GetVGInfoVDSCommandParameters:{runAsync='true', hostId='acbacabb-6c4a-43fd-a1e2-2d7ff2f6f98b', VGID='vvIoS2-fZTZ-99Ox-Ltzq-pr8U-SL2z-wbTU8g'}), log id: 6693b98a
>>> 2017-09-04 09:04:18,232+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-6-thread-34) [7d2e6cb2] FINISH, GetVGInfoVDSCommand, return: [LUNs:{id='repl_HostedEngine', physicalVolumeId='kYN8Jj-FBDw-MhxI-XcoZ-w1zH-eQL8-IRIgzO', volumeGroupId='vvIoS2-fZTZ-99Ox-Ltzq-pr8U-SL2z-wbTU8g', serial='SHITACHI_OPEN-V_50488888', lunMapping='4', vendorId='HITACHI', productId='OPEN-V', lunConnections='[]', deviceSize='100', pvSize='0', peCount='797', peAllocatedCount='681', vendorName='HITACHI', pathsDictionary='[sdf=true, sdu=true, sdk=true, sdp=true]', pathsCapacity='[sdf=100, sdu=100, sdk=100, sdp=100]', lunType='FCP', status='null', diskId='null', diskAlias='null', storageDomainId='6b62cc06-fc44-4c38-af6d-bfd9cbe73246', storageDomainName='null', discardMaxSize='268435456', discardZeroesData='true'}], log id: 6693b98a
>>> 2017-09-04 09:04:18,245+02 INFO [org.ovirt.engine.core.bll.storage.domain.AddExistingBlockStorageDomainCommand] (org.ovirt.thread.pool-6-thread-34) [7d2e6cb2] There are existing luns in the system which are part of VG id 'vvIoS2-fZTZ-99Ox-Ltzq-pr8U-SL2z-wbTU8g'
>>> 2017-09-04 09:04:18,245+02 WARN [org.ovirt.engine.core.bll.storage.domain.AddExistingBlockStorageDomainCommand] (org.ovirt.thread.pool-6-thread-34) [7d2e6cb2] Validation of action 'AddExistingBlockStorageDomain' failed for user SYSTEM. Reasons: VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_IMPORT_STORAGE_DOMAIN_EXTERNAL_LUN_DISK_EXIST
>>>
>>> I don't know which command you are talking about; I didn't run any command,
>>> since the engine tries to import the SD automatically.
>>>
>>
>> Sorry, can you please run vgdisplay on adm-kvmh70 ?
>>
>> No problem! Here is the result:
>>
>> [root@adm-kvmh70 ~]# vgs
>> VG #PV #LV #SN Attr VSize VFree
>> 6b62cc06-fc44-4c38-af6d-bfd9cbe73246 1 10 0 wz--n- 99.62g 14.50g
>> b0414c06-d984-4001-a998-fd9a2e79fb83 2 70 0 wz--n- 10.00t 2.31t
>> b2e30961-7cff-4cca-83d6-bee3a4f890ee 2 47 0 wz--n- 5.27t 2.50t
>> vg_adm-kvmh70 1 3 0 wz--n- 277.90g 218.68g
>>
>>
> OK, and
> vgdisplay 6b62cc06-fc44-4c38-af6d-bfd9cbe73246
>
> [root@adm-kvmh70 ~]# vgdisplay 6b62cc06-fc44-4c38-af6d-bfd9cbe73246
> --- Volume group ---
> VG Name 6b62cc06-fc44-4c38-af6d-bfd9cbe73246
> System ID
> Format lvm2
> Metadata Areas 2
> Metadata Sequence No 23
> VG Access read/write
> VG Status resizable
> MAX LV 0
> Cur LV 10
> Open LV 3
> Max PV 0
> Cur PV 1
> Act PV 1
> VG Size 99.62 GiB
> PE Size 128.00 MiB
> Total PE 797
> Alloc PE / Size 681 / 85.12 GiB
> Free PE / Size 116 / 14.50 GiB
> VG UUID vvIoS2-fZTZ-99Ox-Ltzq-pr8U-SL2z-wbTU8g
>
OK, vvIoS2-fZTZ-99Ox-Ltzq-pr8U-SL2z-wbTU8g is the UUID of the VG that,
according to the engine, contains the LUN used by the hosted-engine SD.
Its name is 6b62cc06-fc44-4c38-af6d-bfd9cbe73246, which is also the UUID of
that storage domain.
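(For reference, that name-to-UUID mapping can be listed on the host with the
standard LVM tools:)

  vgs -o vg_name,vg_uuid           # VG name next to its UUID
  pvs -o pv_name,vg_name,vg_uuid   # which PVs/LUNs back each VG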
Now we need to understand what's there.
Could you please check if an SD with uuid
6b62cc06-fc44-4c38-af6d-bfd9cbe73246 is visible in the engine?
Is adm-kvmh70 involved in hosted-engine? Did you redeploy it?
In the past, did you try to manually import the hosted-engine SD or
something similar?
thanks
>
> Thanks
>
>
>
>>
>>
>>
>>>
>>> Thanks,
>>> Arsene
>>>
>>>
>>>
>>>>
>>>>
>>>> Thanks for any help to resolve that issue.
>>>>
>>>> I guess you can try to remove this disk/lun from the engine and let it retry.
>>>>
>>>> Could you let me know how to remove that LUN from the engine?
>>>>
>>>> If the only such disk is the hosted-engine's, I guess it should have been
>>>> removed by '--he-remove-storage-vm' - if so, please open a bug
>>>> describing your flow in detail. Thanks.
>>>>
>>>> It seems that this option didn't remove the HE storage information and
>>>> that this VG is still the old one.
>>>>
>>>> Many thanks for your help
>>>> Rgds,
>>>> Arsene
>>>>
>>>> Best,
>>>>
>>>>
>>>> Arsène
>>>>
>>>> --
>>>>
>>>> Arsène Gschwind
>>>> Fa. Sapify AG im Auftrag der Universität Basel
>>>> IT Services
>>>> Klingelbergstr. 70 | CH-4056 Basel | Switzerland
>>>> Tel. +41 79 449 25 63 | http://its.unibas.ch
>>>> ITS-ServiceDesk: support-its(a)unibas.ch | +41 61 267 14 11
7 years, 3 months
Slow booting host - restart loop
by Bernardo Juanicó
Hi everyone,
I installed 2 hosts on a new cluster and the servers take a really long time
to boot up (about 8 minutes).
When a host crashes or is powered off, the ovirt-manager starts it via power
management. Since the servers take all that time to boot up, the
ovirt-manager thinks the start failed and proceeds to reboot the host several
times before giving up, by which point the server finally comes up (about 20
minutes after the failure).
I changed some engine variables with engine-config trying to set a higher
timeout, but the problem persists.
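The kind of change I attempted looks like the sketch below; I am not sure
ServerRebootTimeout is even the right key, which is part of the question
(values are in seconds, and the engine needs a restart to pick them up):

  engine-config -g ServerRebootTimeout
  engine-config -s ServerRebootTimeout=900
  systemctl restart ovirt-engine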
Any ideas??
Regards,
Bernardo
PGP Key <http://pgp.mit.edu/pks/lookup?op=get&search=0x695E5BCE34263F5B>
Skype: mattraken
7 years, 3 months
VMs going in to non-responding state
by Satheesaran Sundaramoorthi
Hi All,
I have created a converged setup with a cluster having both virt and gluster
capability. There are three hosts in this cluster, and the cluster also has
'native access to gluster domain' enabled, which lets VMs use the libgfapi
access mechanism.
With this setup, I see newly created VMs ending up in a non-responding state
after some time.
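(As a data point, libgfapi usage can be confirmed with something like the
following; the VM name is a placeholder:)

  engine-config -g LibgfApiSupported            # on the engine
  virsh -r dumpxml VM_NAME | grep -i gluster    # on the host; libgfapi disks show protocol='gluster'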
I have raised bug [1] for this issue.
Any help with it would be appreciated.
[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1488863
Thanks in advance.
-- Satheesaran S ( sas )
7 years, 3 months