I ended up removing the gluster volumes and then the cluster.  I was a little frustrated that I couldn't get it to work and made a hasty decision.  Thank you very much for your response, but I am basically going to have to rebuild at this point.


On Wed, Jun 26, 2013 at 7:48 AM, Itamar Heim <iheim@redhat.com> wrote:
On 06/26/2013 03:41 PM, Tony Feldmann wrote:
I am not sure how to retrieve that info.


psql engine postgres -c "select * from vds_static;"
psql engine postgres -c "select * from vds_spm_id_map;"
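
For reference, the colliding row can be isolated with a narrower query - a
sketch only, assuming the standard engine schema, with the storage_pool_id
copied from the error quoted below:

psql engine postgres -c "SELECT storage_pool_id, vds_id, vds_spm_id
    FROM vds_spm_id_map
    WHERE storage_pool_id = '084def30-1e19-4777-9251-8eb1f7569b53';"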



On Wed, Jun 26, 2013 at 7:00 AM, Itamar Heim <iheim@redhat.com> wrote:

    On 06/26/2013 02:59 PM, Tony Feldmann wrote:

        yes.


    what's the content of this table?
    vds_spm_id_map

    (also of vds_static)

    thanks



        On Wed, Jun 26, 2013 at 6:57 AM, Itamar Heim <iheim@redhat.com> wrote:

             On 06/26/2013 02:55 PM, Tony Feldmann wrote:

                 It won't let me move the hosts, as they have gluster
                 volumes in that cluster.


             so the cluster is both virt and gluster?



                 On Wed, Jun 26, 2013 at 6:49 AM, Itamar Heim
                 <iheim@redhat.com> wrote:

                      On 06/26/2013 04:35 AM, Tony Feldmann wrote:

                          I was messing around with some things and force
                          removed my DC.  My cluster is still there with
                          the 2 gluster volumes; however, I cannot move
                          that cluster into a new DC.  I just get the
                          following error in engine.log:

                          2013-06-25 20:26:15,218 ERROR
                          [org.ovirt.engine.core.bll.AddVdsSpmIdCommand]
                          (ajp--127.0.0.1-8702-2) [7d1289a6] Command
                          org.ovirt.engine.core.bll.AddVdsSpmIdCommand throw
                          exception:
                          org.springframework.dao.DuplicateKeyException:
                          CallableStatementCallback; SQL [{call
                          insertvds_spm_id_map(?, ?, ?)}];
                          ERROR: duplicate key value violates unique
                          constraint "pk_vds_spm_id_map"
                            Detail: Key (storage_pool_id, vds_spm_id)=
                          (084def30-1e19-4777-9251-8eb1f7569b53, 1)
                          already exists.
                            Where: SQL statement "INSERT INTO
                          vds_spm_id_map(storage_pool_id, vds_id, vds_spm_id)
                            VALUES(v_storage_pool_id, v_vds_id, v_vds_spm_id)"


                          I would really like to get this back into a DC
                          without destroying my gluster volumes and losing
                          my data.  Can anyone please point me in the
                          right direction?


                      but if you removed the DC, moving the cluster is
                      meaningless - you can just create a new cluster and
                      move the hosts to it?  (the VMs reside in the DC
                      storage domains, not in the cluster)

                      the above error message looks familiar - I think
                      there was a bug fixed for it a while back
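
                      if that row is indeed a leftover from the
                      force-removed DC, a possible cleanup - an untested
                      sketch only; back up the engine database first, and
                      note the storage_pool_id is the one from the error
                      above:

                      pg_dump engine > engine-backup.sql
                      psql engine postgres -c "DELETE FROM vds_spm_id_map
                          WHERE storage_pool_id =
                          '084def30-1e19-4777-9251-8eb1f7569b53';"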


                          On Tue, Jun 25, 2013 at 8:19 AM, Tony Feldmann
                          <trfeldmann@gmail.com> wrote:

                               I have a 2-node cluster with engine running
                               on one of the nodes.  It has 2 gluster
                               volumes that replicate between the hosts as
                               its shared storage.  Last night one of my
                               systems crashed.  It looks like all of my
                               data is present; however, the ids file seems
                               to be corrupt on my master domain.  I tried
                               to do a hexdump -c on the ids file, but it
                               just gave an input/output error.
                               Sanlock.log shows error -5.  Is there a way
                               to rebuild the ids file, or can I tell ovirt
                               to use the other domain as the master so I
                               can get back up and running?
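
                               on the corrupt ids file itself, one commonly
                               suggested approach is to reinitialize the
                               sanlock lockspace on that domain.  A sketch
                               only - the path here is hypothetical and the
                               lockspace name is assumed to be the storage
                               domain UUID; stop vdsm/sanlock and back up
                               the file first:

                               sanlock direct init -s \
                                   <sd_uuid>:0:/path/to/dom_md/ids:0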





          _______________________________________________
          Users mailing list
          Users@ovirt.org
          http://lists.ovirt.org/mailman/listinfo/users