[Users] ids sanlock error

On Tue, Jun 25, 2013 at 8:19 AM, Tony Feldmann wrote:

I have a 2-node cluster with the engine running on one of the nodes. It has 2 gluster volumes that replicate between the hosts as its shared storage. Last night one of my systems crashed. It looks like all of my data is present; however, the ids file seems to be corrupt on my master domain. I tried to run hexdump -c on the ids file, but it just gave an input/output error, and sanlock.log shows error -5. Is there a way to rebuild the ids file, or can I tell oVirt to use the other domain as the master so I can get back up and running?
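Sanlock's error -5 is EIO: reads of the ids file are failing at the storage layer, which matches the hexdump input/output error (on a replicated gluster volume this is often a split-brain on the file, which needs healing first). On file-based domains the ids file can in principle be re-initialized with sanlock's direct-init mode. The sketch below is a hypothetical outline only, not a tested procedure: the domain UUID and path are placeholders, the domain must be out of use, and the original file should be backed up first.

```shell
# Hypothetical sketch: re-initializing the sanlock lockspace behind a
# corrupt 'ids' file on a file-based storage domain. All names are
# placeholders; stop use of the domain and back up the file first.
SD_UUID="00000000-0000-0000-0000-000000000000"   # storage domain UUID (placeholder)

# The real file lives under the domain mount, e.g.
#   /rhev/data-center/mnt/<server:_volume>/$SD_UUID/dom_md/ids
# For illustration we operate on a scratch file instead of the real one.
IDS="$(mktemp)"

# Zero the 1 MiB lease area that the lockspace occupies.
dd if=/dev/zero of="$IDS" bs=1M count=1 2>/dev/null

# sanlock lockspace spec is <name>:<host_id>:<path>:<offset>; host_id 0
# is used when initializing rather than joining.
SPEC="$SD_UUID:0:$IDS:0"

# Attempt the rewrite only where the sanlock CLI is actually available.
if command -v sanlock >/dev/null 2>&1; then
    sanlock direct init -s "$SPEC"
fi

echo "lockspace spec: $SPEC"
```

After a successful init the hosts would still need a restart of sanlock/vdsm to rejoin the lockspace; treat the above as the shape of the operation, not a recipe.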

On 06/26/2013 04:35 AM, Tony Feldmann wrote:

I was messing around with some things and force-removed my DC. My cluster is still there with the 2 gluster volumes, but I cannot move that cluster into a new DC; I just get the following error in engine.log:

2013-06-25 20:26:15,218 ERROR [org.ovirt.engine.core.bll.AddVdsSpmIdCommand] (ajp--127.0.0.1-8702-2) [7d1289a6] Command org.ovirt.engine.core.bll.AddVdsSpmIdCommand throw exception: org.springframework.dao.DuplicateKeyException: CallableStatementCallback; SQL [{call insertvds_spm_id_map(?, ?, ?)}]; ERROR: duplicate key value violates unique constraint "pk_vds_spm_id_map" Detail: Key (storage_pool_id, vds_spm_id)=(084def30-1e19-4777-9251-8eb1f7569b53, 1) already exists. Where: SQL statement "INSERT INTO vds_spm_id_map(storage_pool_id, vds_id, vds_spm_id) VALUES(v_storage_pool_id, v_vds_id, v_vds_spm_id)"

I would really like to get this back into a DC without destroying my gluster volumes and losing my data. Can anyone please point me in the right direction?
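The exception itself is the engine's Postgres schema doing its job: (storage_pool_id, vds_spm_id) is the primary key of vds_spm_id_map (hence the constraint name pk_vds_spm_id_map), so re-adding a host while a stale row still claims SPM id 1 for that pool cannot succeed. A minimal stand-in, sketched with a scratch SQLite database rather than the real engine schema, shows the mechanics:

```shell
# Miniature stand-in for the engine table (scratch SQLite here, not the
# real Postgres schema): the composite primary key is what the
# AddVdsSpmIdCommand insert trips over.
DB="$(mktemp)"
sqlite3 "$DB" "CREATE TABLE vds_spm_id_map (
    storage_pool_id TEXT,
    vds_id          TEXT,
    vds_spm_id      INTEGER,
    PRIMARY KEY (storage_pool_id, vds_spm_id));"

POOL="084def30-1e19-4777-9251-8eb1f7569b53"   # pool UUID from the log

# Stale row left behind for the pool.
sqlite3 "$DB" "INSERT INTO vds_spm_id_map VALUES ('$POOL', 'old-host-uuid', 1);"

# Re-adding a host claims vds_spm_id 1 again -> duplicate key, as in engine.log.
sqlite3 "$DB" "INSERT INTO vds_spm_id_map VALUES ('$POOL', 'new-host-uuid', 1);" \
    2>/dev/null || echo "duplicate (storage_pool_id, vds_spm_id) rejected"
```

The host UUIDs are made up for illustration; only the table name, key columns, and pool UUID come from the log.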

On Wed, Jun 26, 2013 at 6:49 AM, Itamar Heim wrote:

But if you removed the DC, moving the cluster is meaningless; you can just create a new cluster and move the hosts to it (the VMs reside in the DC's storage domains, not in the cluster). The error message above looks familiar; I think there was a bug fixed for it a while back.

On 06/26/2013 02:55 PM, Tony Feldmann wrote:

It won't let me move the hosts, as they have gluster volumes in that cluster.

On Wed, Jun 26, 2013 at 6:57 AM, Itamar Heim wrote:

So the cluster is both virt and gluster?

On 06/26/2013 02:59 PM, Tony Feldmann wrote:

Yes.

On Wed, Jun 26, 2013 at 7:00 AM, Itamar Heim wrote:

What's the content of the vds_spm_id_map table (and also of vds_static)? Thanks.

On 06/26/2013 03:41 PM, Tony Feldmann wrote:

I am not sure how to retrieve that info.

On Wed, Jun 26, 2013 at 7:48 AM, Itamar Heim wrote:

psql engine postgres -c "select * from vds_static;"
psql engine postgres -c "select * from vds_spm_id_map;"
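If those queries confirm a stale mapping left over from the force-removed pool, the usual shape of the fix is to delete that row from vds_spm_id_map so the host can be re-added. The following is a hedged sketch, not an official procedure: the pool UUID is the one from the engine.log error, the DELETE is deliberately left commented out, and a backup of the engine database beforehand is assumed.

```shell
# Pool UUID taken from the duplicate-key error in engine.log.
POOL="084def30-1e19-4777-9251-8eb1f7569b53"
CLEANUP_SQL="DELETE FROM vds_spm_id_map WHERE storage_pool_id = '$POOL';"

# Inspect first (run on the engine database host, where psql exists):
if command -v psql >/dev/null 2>&1; then
    psql engine postgres -c "SELECT * FROM vds_spm_id_map;"
fi

# Hypothetical cleanup -- only after backing up the engine DB and
# confirming the pool above really is the removed one:
# psql engine postgres -c "$CLEANUP_SQL"
echo "$CLEANUP_SQL"
```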

On 6/26/13 8:10 AM, Tony Feldmann wrote:

I ended up removing the gluster volumes and then the cluster. I was a little frustrated that I couldn't get it to work and made a hasty decision. Thank you very much for your response, but I am basically going to have to rebuild at this point.

This is a multi-part message in MIME format. --------------050007020807060405090803 Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: 7bit Hi, all... I have a bit a related question.... and I am trying to decide the best path. Currently we have 3.1 in production and an entirely new Setup in our internal DC that is waiting to be built out. Should I start with 3.2 to begin with? Or 3.1 then migrate to 3.2 to gain the experience? I might add we are running CentOS 6.3. Which has a few difference. On top of all that; there is the 3.3rc's to be considered. But we are in a fully functional, and heavily used production environment. All help is appreciate. I am also happy to share any of the things I have learned in this lengthy endeavor. Feel free to ask; or email me direct. MattCurry@linux.com Thanks to all, Matt On 6/26/13 8:10 AM, Tony Feldmann wrote:
I ended up removing the gluster volumes and then the cluster. I was a little frustrated with why I couldn't get it to work and made a hasty decision. Thank you very much for your response, but I am basically going to have to re-build at this point.
On Wed, Jun 26, 2013 at 7:48 AM, Itamar Heim <iheim@redhat.com <mailto:iheim@redhat.com>> wrote:
On 06/26/2013 03:41 PM, Tony Feldmann wrote:
I am not sure how to retrieve that info.
psql engine postgres -c "select * from vds_static;" psql engine postgres -c "select * from vds_spm_id_map;"
On Wed, Jun 26, 2013 at 7:00 AM, Itamar Heim <iheim@redhat.com <mailto:iheim@redhat.com> <mailto:iheim@redhat.com <mailto:iheim@redhat.com>>> wrote:
On 06/26/2013 02:59 PM, Tony Feldmann wrote:
yes.
what's the content of this table? vds_spm_id_map
(also of vds_static)
thanks
On Wed, Jun 26, 2013 at 6:57 AM, Itamar Heim <iheim@redhat.com <mailto:iheim@redhat.com> <mailto:iheim@redhat.com <mailto:iheim@redhat.com>> <mailto:iheim@redhat.com <mailto:iheim@redhat.com> <mailto:iheim@redhat.com <mailto:iheim@redhat.com>>>> wrote:
On 06/26/2013 02:55 PM, Tony Feldmann wrote:
Ii won't let me move the hosts as they have gluster volumes in that cluster.
so the cluster is both virt and gluster?
On Wed, Jun 26, 2013 at 6:49 AM, Itamar Heim <iheim@redhat.com <mailto:iheim@redhat.com> <mailto:iheim@redhat.com <mailto:iheim@redhat.com>> <mailto:iheim@redhat.com <mailto:iheim@redhat.com> <mailto:iheim@redhat.com <mailto:iheim@redhat.com>>> <mailto:iheim@redhat.com <mailto:iheim@redhat.com> <mailto:iheim@redhat.com <mailto:iheim@redhat.com>> <mailto:iheim@redhat.com <mailto:iheim@redhat.com> <mailto:iheim@redhat.com <mailto:iheim@redhat.com>>>>> wrote:
On 06/26/2013 04:35 AM, Tony Feldmann wrote:
I was messing around with some things and force removed my DC. My cluster is still there with the 2 gluster volumes, however I cannot move that cluster into a new dc, I just get the following error in engine.log:
2013-06-25 20:26:15,218 ERROR
[org.ovirt.engine.core.bll.______AddVdsSpmIdCommand] (ajp--127.0.0.1-8702-2) [7d1289a6] Command
org.ovirt.engine.core.bll.______AddVdsSpmIdCommand throw exception:
org.springframework.dao.______DuplicateKeyException:
CallableStatementCallback; SQL [{call insertvds_spm_id_map(?, ?, ?)}]; ERROR: duplicate key value violates unique constraint "pk_vds_spm_id_map" Detail: Key (storage_pool_id,
vds_spm_id)=(084def30-1e19-______4777-9251-8eb1f7569b53,
1) already
exists. Where: SQL statement "INSERT INTO vds_spm_id_map(storage_pool_______id,
vds_id, vds_spm_id) VALUES(v_storage_pool_id, v_vds_id, v_vds_spm_id)"
I would really like to get this back into a dc without destroying my gluster volumes and losing my data. Can anyone please point me in the right direction?
but if you removed the DC, moving the cluster is meaningless - you can just create a new cluster and move the hosts to it? (the VMs reside in the DC storage domains, not in the cluster)
the above error message looks familiar - i think there was a bug fixed for it a while back
On Tue, Jun 25, 2013 at 8:19 AM, Tony Feldmann <trfeldmann@gmail.com <mailto:trfeldmann@gmail.com> <mailto:trfeldmann@gmail.com <mailto:trfeldmann@gmail.com>> <mailto:trfeldmann@gmail.com <mailto:trfeldmann@gmail.com> <mailto:trfeldmann@gmail.com <mailto:trfeldmann@gmail.com>>> <mailto:trfeldmann@gmail.com <mailto:trfeldmann@gmail.com> <mailto:trfeldmann@gmail.com <mailto:trfeldmann@gmail.com>> <mailto:trfeldmann@gmail.com <mailto:trfeldmann@gmail.com> <mailto:trfeldmann@gmail.com <mailto:trfeldmann@gmail.com>>>> <mailto:trfeldmann@gmail.com <mailto:trfeldmann@gmail.com> <mailto:trfeldmann@gmail.com <mailto:trfeldmann@gmail.com>> <mailto:trfeldmann@gmail.com <mailto:trfeldmann@gmail.com> <mailto:trfeldmann@gmail.com <mailto:trfeldmann@gmail.com>>> <mailto:trfeldmann@gmail.com <mailto:trfeldmann@gmail.com> <mailto:trfeldmann@gmail.com <mailto:trfeldmann@gmail.com>> <mailto:trfeldmann@gmail.com <mailto:trfeldmann@gmail.com> <mailto:trfeldmann@gmail.com <mailto:trfeldmann@gmail.com>>>>__>__> wrote:
I have a 2 node cluster with engine running on one of the nodes. It has 2 gluster volumes that replicate between the hosts as its shared storage. Last night one of my systems crashed. It looks like all of my data is present, however the ids file seems to be corrupt on my master domain. I tried to do a hexdump -c on the ids file, but it just gave an input/output error. Sanlock.log shows error -5. Is there a way to rebuild the ids file, or can I tell ovirt to use the other domain as the master so I can get back up and running?
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
On 06/26/2013 02:55 PM, Tony Feldmann wrote:
It won't let me move the hosts as they have gluster volumes in that cluster.

On Wed, Jun 26, 2013 at 6:57 AM, Itamar Heim <iheim@redhat.com> wrote:
so the cluster is both virt and gluster?

On 06/26/2013 02:59 PM, Tony Feldmann wrote:
yes.

On Wed, Jun 26, 2013 at 7:00 AM, Itamar Heim <iheim@redhat.com> wrote:
what's the content of this table? vds_spm_id_map (also of vds_static)
thanks

On 06/26/2013 03:41 PM, Tony Feldmann wrote:
I am not sure how to retrieve that info.

On Wed, Jun 26, 2013 at 7:48 AM, Itamar Heim <iheim@redhat.com> wrote:
psql engine postgres -c "select * from vds_static;"
psql engine postgres -c "select * from vds_spm_id_map;"

On 6/26/13 8:10 AM, Tony Feldmann wrote:
I ended up removing the gluster volumes and then the cluster. I was a little frustrated with why I couldn't get it to work and made a hasty decision. Thank you very much for your response, but I am basically going to have to re-build at this point.
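For anyone who hits the same duplicate-key error and would rather not rebuild: a hedged sketch of clearing the stale mapping, under the assumption that a leftover row from the force-removed DC is what collides. This touches the engine database directly, so stop ovirt-engine and take a database backup first; the storage_pool_id value is the one from the error above.

```sh
# Inspect the current mappings (the queries suggested earlier in the thread):
psql engine postgres -c "select * from vds_static;"
psql engine postgres -c "select * from vds_spm_id_map;"

# If a row for the force-removed DC is still present, deleting it should
# let AddVdsSpmIdCommand succeed on the next attach (assumption - back up
# the database before running this):
psql engine postgres -c "delete from vds_spm_id_map where storage_pool_id = '084def30-1e19-4777-9251-8eb1f7569b53';"
```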

On 06/28/2013 09:12 PM, Matthew Curry wrote:
Hi, all...
I have a bit of a related question, and I am trying to decide the best path. Currently we have 3.1 in production and an entirely new setup in our internal DC that is waiting to be built out.
Should I start with 3.2 from the beginning? Or 3.1, then migrate to 3.2 to gain the experience? I might add we are running CentOS 6.3, which has a few differences. On top of all that, there are the 3.3 RCs to be considered. But we are in a fully functional, and heavily used, production environment. All help is appreciated. I am also happy to share any of the things I have learned in this lengthy endeavor.
Feel free to ask; or email me direct. MattCurry@linux.com
Thanks to all, Matt
(A new topic would have made this more visible, I guess.) I always recommend that larger deployments maintain another environment where they stage the next version, test the upgrade in their environment, etc. If you want to do a rolling upgrade between two environments, it requires exporting/importing your VMs between them - an annoyance, but of course much safer. Thanks, Itamar
participants (3)
- Itamar Heim
- Matthew Curry
- Tony Feldmann