[Users] ids sanlock error

Itamar Heim iheim at redhat.com
Wed Jun 26 08:00:18 EDT 2013


On 06/26/2013 02:59 PM, Tony Feldmann wrote:
> yes.

what's the content of the vds_spm_id_map table?

(also of vds_static)
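
for reference, something along these lines should dump both - a minimal
sketch, assuming the default "engine" database name and local postgres
access (adjust user/database to your setup):

  # database name and postgres user are assumptions; adjust as needed
  psql -U postgres engine -c "SELECT * FROM vds_spm_id_map;"
  psql -U postgres engine -c "SELECT * FROM vds_static;"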

thanks

>
>
>     On Wed, Jun 26, 2013 at 6:57 AM, Itamar Heim
>     <iheim at redhat.com> wrote:
>
>     On 06/26/2013 02:55 PM, Tony Feldmann wrote:
>
>         It won't let me move the hosts as they have gluster volumes in
>         that cluster.
>
>
>     so the cluster is both virt and gluster?
>
>
>
>         On Wed, Jun 26, 2013 at 6:49 AM, Itamar Heim
>         <iheim at redhat.com> wrote:
>
>              On 06/26/2013 04:35 AM, Tony Feldmann wrote:
>
>                  I was messing around with some things and force removed
>                  my DC.  My cluster is still there with the 2 gluster
>                  volumes, however I cannot move that cluster into a new
>                  dc, I just get the following error in engine.log:
>
>                  2013-06-25 20:26:15,218 ERROR
>                  [org.ovirt.engine.core.bll.AddVdsSpmIdCommand]
>                  (ajp--127.0.0.1-8702-2) [7d1289a6] Command
>                  org.ovirt.engine.core.bll.AddVdsSpmIdCommand throw
>                  exception: org.springframework.dao.DuplicateKeyException:
>                  CallableStatementCallback; SQL [{call
>                  insertvds_spm_id_map(?, ?, ?)}];
>                  ERROR: duplicate key value violates unique constraint
>                  "pk_vds_spm_id_map"
>                    Detail: Key (storage_pool_id,
>                  vds_spm_id)=(084def30-1e19-4777-9251-8eb1f7569b53, 1)
>                  already exists.
>                    Where: SQL statement "INSERT INTO
>                  vds_spm_id_map(storage_pool_id, vds_id, vds_spm_id)
>                      VALUES(v_storage_pool_id, v_vds_id, v_vds_spm_id)"
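
for reference, the duplicate key above points at a stale row left in
vds_spm_id_map by the force-removed DC. a minimal sketch of how that row
could be inspected, assuming direct psql access and the default "engine"
database name (check the row first, don't delete anything blindly):

  # database name/user are assumptions; adjust to your setup
  psql -U postgres engine -c "SELECT * FROM vds_spm_id_map
      WHERE storage_pool_id = '084def30-1e19-4777-9251-8eb1f7569b53';"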
>
>
>                  I would really like to get this back into a dc without
>                  destroying my gluster volumes and losing my data.  Can
>                  anyone please point me in the right direction?
>
>
>              but if you removed the DC, moving the cluster is meaningless -
>              you can just create a new cluster and move the hosts to it?
>              (the VMs reside in the DC storage domains, not in the cluster)
>
>              the above error message looks familiar - i think there was a
>              bug fixed for it a while back
>
>
>                  On Tue, Jun 25, 2013 at 8:19 AM, Tony Feldmann
>                  <trfeldmann at gmail.com> wrote:
>
>                       I have a 2 node cluster with engine running on one
>                       of the nodes.  It has 2 gluster volumes that
>                       replicate between the hosts as its shared storage.
>                       Last night one of my systems crashed.  It looks like
>                       all of my data is present, however the ids file
>                       seems to be corrupt on my master domain.  I tried to
>                       do a hexdump -c on the ids file, but it just gave an
>                       input/output error.  Sanlock.log shows error -5.  Is
>                       there a way to rebuild the ids file, or can I tell
>                       ovirt to use the other domain as the master so I can
>                       get back up and running?
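
for reference, the ids file is the sanlock lockspace for that domain. once
the I/O error on the underlying gluster brick is sorted out, i believe it
can be re-initialised with something along these lines (a sketch only - the
SD uuid and mount path below are placeholders, and the domain must not be
in use while doing it):

  # <SD_UUID> and the gluster mount path are placeholders for your setup
  sanlock direct init -s <SD_UUID>:0:/rhev/data-center/mnt/glusterSD/<server>:_<volume>/<SD_UUID>/dom_md/ids:0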
>
>
>
>
>                  _______________________________________________
>                  Users mailing list
>                  Users at ovirt.org
>                  http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
>


