[Users] ids sanlock error

Matthew Curry matt at mattcurry.com
Fri Jun 28 18:12:24 UTC 2013


Hi, all...

     I have a bit of a related question, and I am trying to decide the 
best path.  Currently we have 3.1 in production and an entirely new 
setup in our internal DC that is waiting to be built out.

Should I start with 3.2 from the outset?  Or start with 3.1 and then 
migrate to 3.2 to gain the migration experience?  I might add we are 
running CentOS 6.3, which has a few differences.  On top of all that, 
there are the 3.3 RCs to be considered.  But we are in a fully 
functional, heavily used production environment.  All help is 
appreciated.  I am also happy to share any of the things I have learned 
in this lengthy endeavor.

Feel free to ask, or email me directly:
MattCurry at linux.com

Thanks to all,
Matt


On 6/26/13 8:10 AM, Tony Feldmann wrote:
> I ended up removing the gluster volumes and then the cluster.  I was a 
> little frustrated that I couldn't get it to work and made a hasty 
> decision.  Thank you very much for your response, but I am basically 
> going to have to rebuild at this point.
>
>
> On Wed, Jun 26, 2013 at 7:48 AM, Itamar Heim <iheim at redhat.com> wrote:
>
>     On 06/26/2013 03:41 PM, Tony Feldmann wrote:
>
>         I am not sure how to retrieve that info.
>
>
>     psql engine postgres -c "select * from vds_static;"
>     psql engine postgres -c "select * from vds_spm_id_map;"
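
(A note from my own digging, since the same tables came up for us: the 
two selects above dump everything, but you can narrow them to the pool 
from the error further down.  A minimal sketch - assuming the database 
is named "engine" as in Itamar's commands, and untested on 3.1:

    # show which vds_id holds which spm id in the problem pool
    psql engine postgres -c "select vds_id, vds_spm_id from vds_spm_id_map \
        where storage_pool_id = '084def30-1e19-4777-9251-8eb1f7569b53';"
    # map those vds_id values back to host names
    psql engine postgres -c "select vds_id, vds_name, host_name from vds_static;"

Matching vds_id between the two outputs tells you which host is still 
holding an spm id in that pool.)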
>
>
>
>         On Wed, Jun 26, 2013 at 7:00 AM, Itamar Heim <iheim at redhat.com> wrote:
>
>             On 06/26/2013 02:59 PM, Tony Feldmann wrote:
>
>                 yes.
>
>
>             what's the content of this table?
>             vds_spm_id_map
>
>             (also of vds_static)
>
>             thanks
>
>
>
>                 On Wed, Jun 26, 2013 at 6:57 AM, Itamar Heim <iheim at redhat.com> wrote:
>
>                      On 06/26/2013 02:55 PM, Tony Feldmann wrote:
>
>                          It won't let me move the hosts, as they have
>                          gluster volumes in that cluster.
>
>
>                      so the cluster is both virt and gluster?
>
>
>
>                          On Wed, Jun 26, 2013 at 6:49 AM, Itamar Heim <iheim at redhat.com> wrote:
>
>                               On 06/26/2013 04:35 AM, Tony Feldmann wrote:
>
>                                   I was messing around with some things
>                                   and force removed my DC.  My cluster
>                                   is still there with the 2 gluster
>                                   volumes, however I cannot move that
>                                   cluster into a new DC; I just get the
>                                   following error in engine.log:
>
>                                   2013-06-25 20:26:15,218 ERROR [org.ovirt.engine.core.bll.AddVdsSpmIdCommand]
>                                   (ajp--127.0.0.1-8702-2) [7d1289a6] Command
>                                   org.ovirt.engine.core.bll.AddVdsSpmIdCommand throw exception:
>                                   org.springframework.dao.DuplicateKeyException:
>                                   CallableStatementCallback; SQL [{call insertvds_spm_id_map(?, ?, ?)}];
>                                   ERROR: duplicate key value violates unique constraint "pk_vds_spm_id_map"
>                                     Detail: Key (storage_pool_id, vds_spm_id)=(084def30-1e19-4777-9251-8eb1f7569b53, 1) already exists.
>                                     Where: SQL statement "INSERT INTO vds_spm_id_map(storage_pool_id, vds_id, vds_spm_id) VALUES(v_storage_pool_id, v_vds_id, v_vds_spm_id)"
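
(For what it's worth: that DuplicateKeyException means the engine is 
trying to hand the host an SPM id that is still mapped to it from the 
force-removed pool.  The workaround I have seen suggested for this 
class of bug - strictly a sketch, only after stopping ovirt-engine and 
taking a backup, with the storage_pool_id copied from the error above - 
is to clear the stale mapping so the insert can succeed:

    # back up the engine database first
    pg_dump engine > engine-before-cleanup.sql
    # remove the stale SPM id mapping for the removed pool
    psql engine postgres -c "delete from vds_spm_id_map \
        where storage_pool_id = '084def30-1e19-4777-9251-8eb1f7569b53';"

Verify with the select queries Itamar posted before and after deleting 
anything; I can't promise this is safe on every version.)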
>
>
>                                   I would really like to get this back
>                                   into a DC without destroying my
>                                   gluster volumes and losing my data.
>                                   Can anyone please point me in the
>                                   right direction?
>
>
>                               but if you removed the DC, moving the
>                               cluster is meaningless - you can just
>                               create a new cluster and move the hosts
>                               to it?  (the VMs reside in the DC storage
>                               domains, not in the cluster)
>
>                               the above error message looks familiar -
>                               I think there was a bug fixed for it a
>                               while back
>
>
>                                   On Tue, Jun 25, 2013 at 8:19 AM, Tony Feldmann <trfeldmann at gmail.com> wrote:
>
>                                        I have a 2 node cluster with
>                                        engine running on one of the
>                                        nodes.  It has 2 gluster volumes
>                                        that replicate between the hosts
>                                        as its shared storage.  Last
>                                        night one of my systems crashed.
>                                        It looks like all of my data is
>                                        present, however the ids file
>                                        seems to be corrupt on my master
>                                        domain.  I tried to do a
>                                        hexdump -c on the ids file, but
>                                        it just gave an input/output
>                                        error.  Sanlock.log shows error
>                                        -5.  Is there a way to rebuild
>                                        the ids file, or can I tell
>                                        ovirt to use the other domain as
>                                        the master so I can get back up
>                                        and running?
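
(On the ids file itself: sanlock's error -5 is EIO, the same 
input/output error hexdump hit, so it reads like a failed read from the 
storage rather than just scrambled contents.  If the underlying storage 
checks out, sanlock can re-initialize the lockspace in place.  A sketch 
only - <sd_uuid> and the mount path are placeholders for your master 
domain, the domain must be inactive, and this wipes the existing leases 
in that lockspace:

    # re-create the sanlock lockspace on the ids file (host_id 0, offset 0)
    sanlock direct init -s <sd_uuid>:0:/rhev/data-center/mnt/<server:_volume>/<sd_uuid>/dom_md/ids:0

I haven't had to run this against a gluster-backed domain myself, so 
treat it as a starting point, not a recipe.)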
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

