I don't think so. AFAIK (though this might have changed in 4.x), to
float "stuff" between datacenters you usually have to export to an
export domain and then do an import:
1. Attach non-attached export domain
2. Export VMs (must include templates if from templates)
3. Detach export domain.
4. On new datacenter, attach export domain
5. Import VMs
6. Detach export domain.
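The constraint that makes those steps work is that an export domain can
be attached to at most one datacenter at a time. A toy sketch of that
rule (pure illustration in Python, not the oVirt API -- all names here
are made up):

```python
class ExportDomain:
    """Toy model: an export domain holds exported VM images and can be
    attached to at most one datacenter at a time."""

    def __init__(self):
        self.attached_to = None   # datacenter name, or None if detached
        self.vms = []             # exported VM payloads

    def attach(self, datacenter):
        if self.attached_to is not None:
            raise RuntimeError(
                "already attached to %s; detach first" % self.attached_to)
        self.attached_to = datacenter

    def detach(self):
        self.attached_to = None

    def export_vm(self, vm):
        if self.attached_to is None:
            raise RuntimeError("attach the domain before exporting")
        self.vms.append(vm)


# The six steps, in order:
exp = ExportDomain()
exp.attach("DC-A")           # 1. attach the non-attached export domain
exp.export_vm("vm01")        # 2. export VMs (plus templates if needed)
exp.detach()                 # 3. detach from the source datacenter
exp.attach("DC-B")           # 4. attach to the new datacenter
imported = list(exp.vms)     # 5. import the VMs
exp.detach()                 # 6. detach again
```

Trying to attach the same domain to a second datacenter without the
detach in step 3 raises, which is the point of the whole shuffle.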
There might be some sort of unsupported way to take a Storage Domain
into a different datacenter; I'm just not aware of one. Thus, the idea
of a "replicate" would have to be a full replicate in a failover-style
scenario where you can't really tell the difference between the two
(the storage domain works on the replica because it can't tell the
difference). Thus, only one could ever be "up" at any time. Even so, I
haven't tried to do this. If the metadata contains something so
intrinsic to the hypervisor cluster that it can't be effectively
replicated, obviously this isn't going to work without some sort of
assist that can massage/convert the metadata.
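To give a flavor of what "massage/convert the metadata" could mean: a
storage domain records the UUID of the storage pool (datacenter) it
belongs to, and a byte-for-byte replica drags that old pool ID along
with it. A toy illustration -- the key names mimic, but are not
guaranteed to match, the real on-disk metadata:

```python
# Simplified KEY=VALUE metadata as a replica might carry it
# (illustrative values only).
REPLICA_METADATA = """\
SDUUID=80dcf277-9958-4368-b7dd-2a5d5d29b3ec
POOL_UUID=old-pool-uuid
ROLE=Regular
"""


def parse_metadata(text):
    """Parse simple KEY=VALUE lines into a dict."""
    return dict(line.split("=", 1)
                for line in text.splitlines() if "=" in line)


def can_attach(metadata, target_pool):
    """A domain attaches cleanly only if its recorded pool is empty
    or already the target pool."""
    pool = metadata.get("POOL_UUID", "")
    return pool in ("", target_pool)


md = parse_metadata(REPLICA_METADATA)
print(can_attach(md, "new-pool-uuid"))  # False: still claims the old pool
md["POOL_UUID"] = ""                    # "clean the storage domain metadata"
print(can_attach(md, "new-pool-uuid"))  # True
```

That second step is, roughly, what the "clean the storage domain
metadata" message further down this thread is asking for.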
IMHO, anyway (again, I'm not up on any 4.x changes that might have made
this easier/possible).
On 11/13/18 5:01 PM, Jacob Green wrote:
> Ok, I found my answer in the *engine.log*: "Domain
> '80dcf277-9958-4368-b7dd-2a5d5d29b3ec' is already attached to a
> different storage pool, clean the storage domain metadata."
>
> So I am assuming in a real disaster recovery scenario the Ansible stuff
> is doing some magic there.
>
> However, I would like to know if I properly detach a Fiber Channel
> storage domain, is it simple to import it to a new environment. Any
> guidance before I undertake such a task?
>
>
> On 11/13/2018 04:34 PM, Jacob Green wrote:
>>
>> Ok, I hear what you're saying, and thank you for your input! By the
>> way, this is a test environment I am working with so I can learn, so
>> there is no risk of dual brainy-ness. Also, I figured out where I was
>> going wrong earlier on site B: I was clicking "New Domain" instead of
>> "Import Domain". So I was able to import the replicated iSCSI domain;
>> however, now when I try to attach the storage domain, I get the
>> following in the event log.
>>
>> "VDSM jake-ovirt2 command CleanStorageDomainMetaDataVDS failed: Cannot
>> acquire host id: ('80dcf277-9958-4368-b7dd-2a5d5d29b3ec',
>> SanlockException(5, 'Sanlock lockspace add failure', 'Input/output
>> error'))"
>>
>> It is my opinion that my problem now is that since the data domain
>> was not properly detached before replication, my replicated storage
>> will not attach because of some "lock" on the storage domain. Is that
>> what this error means, or am I missing the mark completely?
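A side note on the error above: the 5 in SanlockException(5, ...) is a
plain errno, and errno 5 is EIO ("Input/output error"). So before any
lock or metadata question, sanlock apparently failed a raw read/write
of its lockspace area on that LUN -- which would point at the replica
being exported read-only or otherwise not fully accessible, rather
than (or in addition to) stale metadata. A quick check of the mapping:

```python
import errno
import os

# SanlockException carries a plain errno as its first argument.
code = 5
print(errno.errorcode[code])  # 'EIO'
print(os.strerror(code))      # 'Input/output error'
```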
>>
>>
>>
>> On 11/13/2018 02:41 PM, Christopher Cox wrote:
>>> Normally a "replicate" or RAID 1 style scenario is handled by a SAN
>>> frontend (like IBM's SVC) or some other mirroring mechanism that
>>> presents an abstracted mirrored LUN as a Storage Domain to oVirt.
>>>
>>> So, the answer lies with your storage supplier and/or SAN abstractor.
>>>
>>> With that said, reading your full text, this would still likely be
>>> for a failover scenario and not a "live/live" scenario. A
>>> "live/live" scenario risks the problem inherent to two things being
>>> "split brained". And this is usually very bad for non-cluster-aware
>>> storage (and the complexity of cluster-aware storage could be great
>>> in this case).
>>>
>>>
>>> On 11/13/18 2:24 PM, Jacob Green wrote:
>>>> So I was testing with two identical oVirt environments running the
>>>> latest 4.2 release. I have iSCSI storage set up at Site A, and I
>>>> have that same storage replicated to Site B. Before I get into
>>>> learning disaster recovery, I wanted to see what importing the
>>>> replicated storage would look like. However, when I import it I
>>>> get the following, and the VMs are wiped from the replicated
>>>> storage I presented with iSCSI.
>>>>
>>>>
>>>> So is this possible with iSCSI? Is there another way to go about
>>>> doing this?
>>>>
>>>> My iSCSI solution is FreeNAS.
>>>>
>>>> --
>>>> Jacob Green
>>>>
>>>> Systems Admin
>>>>
>>>> American Alloy Steel
>>>>
>>>> 713-300-5690
>>>>
>>>>
>>>> _______________________________________________
>>>> Users mailing list -- users(a)ovirt.org
>>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LUXGTSTEFRW...
>>>>
>>
>