replace ovirt engine host

Hello List,

currently I am testing backup and restore of my ovirt engine host. So far the backup and recovery works. Since I have an NFS ISO_DOMAIN I need to remount the NFS mounts on my vdsm hosts/nodes. This is kinda ugly but okay :)

However, my storage information seems to go weird. It looks like the ovirt engine gets confused about the current status. Do I have to restart vdsm after replacing the ovirt engine? When I connect my original ovirt engine again, the cluster status is okay... I test this by unplugging the original ovirt engine host's network cable and doing a restore on a connected 2nd machine.

So I guess my question is: how do I replace my ovirt engine smoothly in a production environment?

Thanks, Mario
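For the remount itself this is just a plain NFS mount on each node; a minimal sketch, with example server/export/mount-point names (normally vdsm mounts the ISO domain under /rhev/data-center/mnt/ by itself once the domain is activated from the engine):

  # check whether the ISO domain export is still mounted on the node
  mount | grep -i iso
  # manual remount, example names only - adjust server, export path and mount point
  mkdir -p /mnt/iso_domain
  mount -t nfs engine.example.com:/var/lib/exports/iso /mnt/iso_domain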

anyone? :)
Or are you only doing backups, no restore? :-P
On Thu, Nov 6, 2014 at 10:08 AM, ml ml <mliebherr99@googlemail.com> wrote:
Hello List,
currently I am testing backup and restore of my ovirt engine host.
So far the backup and recovery works.
Since I have an NFS ISO_DOMAIN I need to remount the NFS mounts on my vdsm hosts/nodes. This is kinda ugly but okay :)
However, my storage information seems to go weird. It looks like the ovirt engine gets confused about the current status.
Do I have to restart vdsm after replacing the ovirt engine?
When I connect my original ovirt engine again, the cluster status is okay...
I test this by unplugging the original ovirt engine host's network cable and doing a restore on a connected 2nd machine.
So I guess my question is: how do I replace my ovirt engine smoothly in a production environment?
Thanks, Mario

On 07/11/14 10:10, Ml Ml wrote:
anyone? :)
Or are you only doing backups, no restore? :-P
gladly I just had to test disaster recovery and not actually perform it (yet) :D

To be honest: I never have restored ovirt-engine with running vdsm hosts connected to it, sounds like a lot of fun, I see if I can grab some time and try this out myself :)

By your description I guess you have the nfs/iso domain on your engine host? Why don't you just separate it, so there is no need for remounts if your engine is destroyed.

HTH

--
Mit freundlichen Grüßen / Regards

Sven Kieske

Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Geschäftsführer: Robert Meyer
St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
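If the ISO domain is moved off the engine host as suggested, the export itself is ordinary NFS; a rough sketch with example paths, assuming uid/gid 36:36 (vdsm:kvm) as oVirt expects:

  # on a separate NFS server, not the engine
  mkdir -p /exports/iso
  chown 36:36 /exports/iso
  echo '/exports/iso *(rw,sync,anonuid=36,anongid=36)' >> /etc/exports
  exportfs -r
  # then attach nfs-server.example.com:/exports/iso as a new ISO domain from the engine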

Hi,

We had a consulting partner who did the same for our company. This is his procedure and it worked great:

How to migrate the ovirt management engine

Packages
Ensure you have the same packages & versions installed on the destination host as on the source, using 'rpm -qa | grep ovirt'. Make sure the versions are 100% identical.

Default setup
Run 'engine-setup' on the destination host after installing the packages. Use the following configuration:

1. Backup existing configuration
2. On the source host, do:
   a. service ovirt-engine stop
   b. service ovirt-engine-dwhd stop
   c. mkdir ~/backup
   d. tar -C /etc/pki/ovirt-engine -czpf ~/backup/ovirt-engine-pki.tar.gz .
   e. tar -C /etc/ovirt-engine -czpf ~/backup/ovirt-engine-conf.tar.gz .
   f. cd /usr/share/ovirt-engine/dbscripts
   g. ./backup.sh
   h. mv engine_*.sql ~/backup/engine.sql
3. You may also want to backup dwh & reports:
   a. cd /usr/share/ovirt-engine/bin/
   b. ./engine-backup.sh --mode=backup --scope=db --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/engine-backup --log=/tmp/engine-backup.log
   c. ./engine-backup.sh --mode=backup --scope=dwhdb --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/dwh-backup --log=/tmp/engine-backup.log
   d. ./engine-backup.sh --mode=backup --scope=reportsdb --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/reports-backup --log=/tmp/engine-backup.log
4. Download these backup files, and copy them to the destination host.

Restore configuration
1. On the destination host, do:
   a. service ovirt-engine stop
   b. service ovirt-engine-dwhd stop
   c. cd backup
   d. tar -C /etc/pki/ovirt-engine -xzpf ovirt-engine-pki.tar.gz
   e. tar -C /etc/ovirt-engine -xzpf ovirt-engine-conf.tar.gz
   f. tar -xvjf engine-backup
   g. tar -xvjf dwh-backup
   h. tar -xvjf reports-backup

Restore Database
1. On the destination host do:
   a. su - postgres -c "psql -d template1 -c 'drop database engine;'"
   b. su - postgres -c "psql -d template1 -c 'create database engine owner engine;'"
   c. su - postgres
   d. psql
   e. \c engine
   f. \i /path/to/backup/engine.sql
   NOTE: in case you have issues logging in to the database, add the following line to the pg_hba.conf file:
   host    all    engine    127.0.0.1/32    trust
2. Fix engine password:
   a. su - postgres
   b. psql
   c. alter user engine with password 'XXXXXXX';

Change ovirt hostname
On the destination host, run:
/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename

NB: Restoring the dwh/reports database is similar to steps 5-7, but omitted from this document due to problems starting the reporting service.

2014-11-07 10:28 GMT+01:00 Sven Kieske <s.kieske@mittwald.de>:
On 07/11/14 10:10, Ml Ml wrote:
anyone? :)
Or are you only doing backups, no restore? :-P
gladly I just had to test disaster recovery and not actually perform it (yet) :D
To be honest: I never have restored ovirt-engine with running vdsm hosts connected to it, sounds like a lot of fun, I see if I can grab some time and try this out myself :)
By your description I guess you have the nfs/iso domain on your engine host? Why don't you just separate it, so there is no need for remounts if your engine is destroyed.
HTH
-- Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator Mittwald CM Service GmbH & Co. KG Königsberger Straße 6 32339 Espelkamp T: +49-5772-293-100 F: +49-5772-293-333 https://www.mittwald.de Geschäftsführer: Robert Meyer St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
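One detail the procedure above glosses over: after adding the trust rule to pg_hba.conf you have to reload PostgreSQL for it to take effect. Roughly, with paths and service names as they are on a default EL install (adjust for your distro):

  # append the rule quoted in the procedure, then reload postgres
  echo 'host    all    engine    127.0.0.1/32    trust' >> /var/lib/pgsql/data/pg_hba.conf
  service postgresql reload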


Hi, Actually it's very simple as described in the docs. Just stop the engine, make a backup, copy it over, place it back and start it. You can do this in a several of ways. ISO domains is which I would remove and recreate again. ISO domains are actually dumb domains, so nothing can go wrong. Did it some time ago because I needed more performance. VDSM can run without the engine, it doesn't need it as the egine monitors and does the commands, so when it's not there... VM's just run (until you make them die yourself :)) I would give it 15-30 min/ Cheers, Matt 2014-11-07 18:36 GMT+01:00 Daniel Helgenberger <daniel.helgenberger@m-box.de>:
Daniel Helgenberger m box bewegtbild GmbH
ACKERSTR. 19 P: +49/30/2408781-22 D-10115 BERLIN F: +49/30/2408781-10
www.m-box.de www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner Handelsregister: Amtsgericht Charlottenburg / HRB 112767 On 07.11.2014, at 15:24, Koen Vanoppen <vanoppen.koen@gmail.com> wrote:
Hi,
We had a consulting partner who did the same for our company. This is his procedure and worked great:
How to migrate the ovirt management engine

Packages
Ensure you have the same packages & versions installed on the destination host as on the source, using 'rpm -qa | grep ovirt'. Make sure the versions are 100% identical.

Default setup
Run 'engine-setup' on the destination host after installing the packages. Use the following configuration:
1. Backup existing configuration
2. On the source host, do:

You might want your consultant to take a look at [1]... Steps a-3d, basically:
engine-backup mode=backup --file=~/ovirt-engine-source --log=backup.log

   a. service ovirt-engine stop
   b. service ovirt-engine-dwhd stop
   c. mkdir ~/backup
   d. tar -C /etc/pki/ovirt-engine -czpf ~/backup/ovirt-engine-pki.tar.gz .
   e. tar -C /etc/ovirt-engine -czpf ~/backup/ovirt-engine-conf.tar.gz .
   f. cd /usr/share/ovirt-engine/dbscripts
   g. ./backup.sh
   h. mv engine_*.sql ~/backup/engine.sql
3. You may also want to backup dwh & reports:
   a. cd /usr/share/ovirt-engine/bin/
   b. ./engine-backup.sh --mode=backup --scope=db --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/engine-backup --log=/tmp/engine-backup.log
   c. ./engine-backup.sh --mode=backup --scope=dwhdb --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/dwh-backup --log=/tmp/engine-backup.log
   d. ./engine-backup.sh --mode=backup --scope=reportsdb --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/reports-backup --log=/tmp/engine-backup.log
4. Download these backup files, and copy them to the destination host.

Restore configuration
1. On the destination host, do:

Again, steps a-h, basically:
engine-setup
engine-cleanup
engine-backup mode=restore --file=~/ovirt-engine-source --log=backup.log

Also, I would run a second engine-setup. After that, you should be good to go.

Of course, depending on your previous engine setup this could be a little more complicated. Still, quite straightforward.
[1] http://www.ovirt.org/Ovirt-engine-backup

   a. service ovirt-engine stop
   b. service ovirt-engine-dwhd stop
   c. cd backup
   d. tar -C /etc/pki/ovirt-engine -xzpf ovirt-engine-pki.tar.gz
   e. tar -C /etc/ovirt-engine -xzpf ovirt-engine-conf.tar.gz
   f. tar -xvjf engine-backup
   g. tar -xvjf dwh-backup
   h. tar -xvjf reports-backup

Restore Database
1. On the destination host do:
   a. su - postgres -c "psql -d template1 -c 'drop database engine;'"
   b. su - postgres -c "psql -d template1 -c 'create database engine owner engine;'"
   c. su - postgres
   d. psql
   e. \c engine
   f. \i /path/to/backup/engine.sql
NOTE: in case you have issues logging in to the database, add the following line to the pg_hba.conf file:
host all engine 127.0.0.1/32 trust
2. Fix engine password:
   a. su - postgres
   b. psql
   c. alter user engine with password 'XXXXXXX';

Change ovirt hostname
On the destination host, run:
/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename
NB: Restoring the dwh/reports database is similar to steps 5-7, but omitted from this document due to problems starting the reporting service.
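Putting the engine-backup suggestion together with the manual procedure above, the whole move can be sketched roughly like this. Hostnames and file names are examples, and the exact flags differ between oVirt versions, so check engine-backup --help first (on some versions you also need the engine-setup/engine-cleanup round on the destination before the restore, as noted above):

  # on the old engine host
  service ovirt-engine stop
  service ovirt-engine-dwhd stop
  engine-backup --mode=backup --scope=all --file=/root/engine-full.backup --log=/root/engine-backup.log
  scp /root/engine-full.backup new-engine.example.com:/root/

  # on the new engine host, with identical ovirt packages installed
  engine-backup --mode=restore --file=/root/engine-full.backup --log=/root/engine-restore.log
  engine-setup
  # only if the new host has a different name:
  /usr/share/ovirt-engine/setup/bin/ovirt-engine-rename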
2014-11-07 10:28 GMT+01:00 Sven Kieske <s.kieske@mittwald.de>:
On 07/11/14 10:10, Ml Ml wrote:
anyone? :)
Or are you only doing backups, no restore? :-P
gladly I just had to test disaster recovery and not actually perform it (yet) :D
To be honest: I never have restored ovirt-engine with running vdsm hosts connected to it, sounds like a lot of fun, I see if I can grab some time and try this out myself :)
By your description I guess you have the nfs/iso domain on your engine host? Why don't you just separate it, so there is no need for remounts if your engine is destroyed.
HTH
-- Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator Mittwald CM Service GmbH & Co. KG Königsberger Straße 6 32339 Espelkamp T: +49-5772-293-100 F: +49-5772-293-333 https://www.mittwald.de Geschäftsführer: Robert Meyer St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
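On Matt's point that VDSM keeps running without the engine: a node can be sanity-checked directly while the engine is down. These are the usual oVirt 3.x node commands; adjust to your distro and version:

  # on a vdsm node, with the engine down
  service vdsmd status                   # vdsm should stay up without the engine
  vdsClient -s 0 list table              # running VMs, reported by vdsm itself
  vdsClient -s 0 getStorageDomainsList   # storage domains the node currently sees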

I dunno why this is all so simple for you. I just replaced the ovirt-engine like described in the docs. I ejected the CD ISOs on every vm so i was able to delete the ISO_DOMAIN. But i have still problems with my storage. Its a replicated glusterfs. It looks healthy on the nodes itself. But somehow my ovirt-engine gets confused. Can someone explain me what the actual error is? Note: i only replaced the ovirt-engine host and delete the ISO_DOMAIN: 2014-11-11 18:32:37,832 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:37,833 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] spmStart polling ended: taskId = 8c5fae2c-0ddb-41cd-ac54-c404c943e00f task status = finished 2014-11-11 18:32:37,834 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:37,888 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] spmStart polling ended, spm status: Free 2014-11-11 18:32:37,889 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] START, HSMClearTaskVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c, taskId=8c5fae2c-0ddb-41cd-ac54-c404c943e00f), log id: 547e26fd 2014-11-11 18:32:37,937 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] FINISH, HSMClearTaskVDSCommand, log id: 547e26fd 2014-11-11 18:32:37,938 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@5027ed97, log id: 461eb5b5 2014-11-11 18:32:37,941 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:37,948 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] IrsBroker::Failed::ActivateStorageDomainVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:38,006 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Irs placed on server 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c failed. 
Proceed Failover 2014-11-11 18:32:38,044 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-29) START, GlusterVolumesListVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c), log id: 7a110756 2014-11-11 18:32:38,045 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] hostFromVds::selectedVds - ovirt-node02.foobar.net, spmStatus Free, storage pool HP_Proliant_DL180G6 2014-11-11 18:32:38,048 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] starting spm on vds ovirt-node02.foobar.net, storage pool HP_Proliant_DL180G6, prevId -1, LVER -1 2014-11-11 18:32:38,050 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] START, SpmStartVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3, storagePoolId = b384b3da-02a6-44f3-a3f6-56751ce8c26d, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 1a6ccb9c 2014-11-11 18:32:38,108 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] spmStart polling started: taskId = 78d31638-70a5-46aa-89e7-1d1e8126bdba 2014-11-11 18:32:38,193 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-29) FINISH, GlusterVolumesListVDSCommand, return: {d46619e9-9368-4e82-bf3a-a2377b6e85e4=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@9746ef53}, log id: 7a110756 2014-11-11 18:32:38,352 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-29) START, GlusterVolumesListVDSCommand(HostName = ovirt-node04.foobar.net, HostId = 073c24e1-003f-412a-be56-0c41a435829a), log id: 2f25d56e 2014-11-11 18:32:38,433 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-29) FINISH, GlusterVolumesListVDSCommand, return: {660ca9ef-46fc-47b0-9b6b-61ccfd74016c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@cd3b51c4}, log id: 2f25d56e 2014-11-11 18:32:39,117 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:39,118 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] spmStart polling ended: taskId = 78d31638-70a5-46aa-89e7-1d1e8126bdba task status = finished 2014-11-11 18:32:39,119 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:39,171 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] spmStart polling ended, spm status: Free 2014-11-11 18:32:39,173 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] START, HSMClearTaskVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3, taskId=78d31638-70a5-46aa-89e7-1d1e8126bdba), log id: 46abf4a0 2014-11-11 18:32:39,220 
INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] FINISH, HSMClearTaskVDSCommand, log id: 46abf4a0 2014-11-11 18:32:39,221 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@7d3782f7, log id: 1a6ccb9c 2014-11-11 18:32:39,224 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:39,232 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] IrsBroker::Failed::ActivateStorageDomainVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:39,235 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] FINISH, ActivateStorageDomainVDSCommand, log id: 75877740 2014-11-11 18:32:39,236 ERROR [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] Command org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.irsbroker.IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed (Failed with error ENGINE and code 5001) 2014-11-11 18:32:39,239 INFO [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] Command [id=c5315de2-0817-4da2-a13e-50c8cfa93a6a]: Compensating CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: EntityStatusSnapshot [id=storagePoolId = b384b3da-02a6-44f3-a3f6-56751ce8c26d, storageId = abc51e26-7175-4b38-b3a8-95c6928fbc2b, status=Unknown]. 2014-11-11 18:32:39,243 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-39) [4777665a] Correlation ID: 71891fe3, Job ID: 239d4ac0-aa7d-486a-a70f-55a9d1b910f4, Call Stack: null, Custom Event ID: -1, Message: Failed to activate Storage Domain RaidVolBGluster (Data Center HP_Proliant_DL180G6) by admin 2014-11-11 18:32:40,566 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] Command org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand return value TaskStatusListReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=654, mMessage=Not SPM]] 2014-11-11 18:32:40,569 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] HostName = ovirt-node02.foobar.net 2014-11-11 18:32:40,570 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] Command HSMGetAllTasksStatusesVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3) execution failed. 
Exception: IRSNonOperationalException: IRSGenericException: IRSErrorException: IRSNonOperationalException: Not SPM 2014-11-11 18:32:40,625 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [47871083] hostFromVds::selectedVds - ovirt-node02.foobar.net, spmStatus Free, storage pool HP_Proliant_DL180G6 2014-11-11 18:32:40,628 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [47871083] starting spm on vds ovirt-node02.foobar.net, storage pool HP_Proliant_DL180G6, prevId -1, LVER -1 2014-11-11 18:32:40,630 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] START, SpmStartVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3, storagePoolId = b384b3da-02a6-44f3-a3f6-56751ce8c26d, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 1f3ac280 2014-11-11 18:32:40,687 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] spmStart polling started: taskId = 50ab033e-76cd-44d5-b661-a1c2b8c312ef 2014-11-11 18:32:41,735 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:41,736 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] spmStart polling ended: taskId = 50ab033e-76cd-44d5-b661-a1c2b8c312ef task status = finished 2014-11-11 18:32:41,737 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:41,790 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] spmStart polling ended, spm status: Free 2014-11-11 18:32:41,791 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] START, HSMClearTaskVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3, taskId=50ab033e-76cd-44d5-b661-a1c2b8c312ef), log id: 852d287 2014-11-11 18:32:41,839 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] FINISH, HSMClearTaskVDSCommand, log id: 852d287 2014-11-11 18:32:41,840 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@32b92b73, log id: 1f3ac280 2014-11-11 18:32:41,843 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] Running command: SetStoragePoolStatusCommand internal: true. 
Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:41,851 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] IrsBroker::Failed::GetStoragePoolInfoVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:41,909 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] Irs placed on server 6948da12-0b8a-4b6d-a9af-162e6c25dad3 failed. Proceed Failover 2014-11-11 18:32:41,928 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] hostFromVds::selectedVds - ovirt-node01.foobar.net, spmStatus Free, storage pool HP_Proliant_DL180G6 2014-11-11 18:32:41,930 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] starting spm on vds ovirt-node01.foobar.net, storage pool HP_Proliant_DL180G6, prevId -1, LVER -1 2014-11-11 18:32:41,932 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] START, SpmStartVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c, storagePoolId = b384b3da-02a6-44f3-a3f6-56751ce8c26d, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 56dfcc3c 2014-11-11 18:32:41,984 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] spmStart polling started: taskId = 84ac9f17-d5ec-4e43-8fcc-8ca9065a8492 2014-11-11 18:32:42,993 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:42,994 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] spmStart polling ended: taskId = 84ac9f17-d5ec-4e43-8fcc-8ca9065a8492 task status = finished 2014-11-11 18:32:42,995 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:43,048 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] spmStart polling ended, spm status: Free 2014-11-11 18:32:43,049 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] START, HSMClearTaskVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c, taskId=84ac9f17-d5ec-4e43-8fcc-8ca9065a8492), log id: 5abaa4ce 2014-11-11 18:32:43,098 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] FINISH, HSMClearTaskVDSCommand, log id: 5abaa4ce 2014-11-11 18:32:43,098 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@7d9b9905, log id: 56dfcc3c 2014-11-11 18:32:43,101 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-28) [725b57af] Running command: SetStoragePoolStatusCommand internal: true. 
Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:43,108 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [725b57af] IrsBroker::Failed::GetStoragePoolInfoVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:43,444 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] START, GlusterVolumesListVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c), log id: 12ae9c47 2014-11-11 18:32:43,585 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] FINISH, GlusterVolumesListVDSCommand, return: {d46619e9-9368-4e82-bf3a-a2377b6e85e4=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a5d949dc}, log id: 12ae9c47 2014-11-11 18:32:43,745 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] START, GlusterVolumesListVDSCommand(HostName = ovirt-node04.foobar.net, HostId = 073c24e1-003f-412a-be56-0c41a435829a), log id: 4b994fd9 2014-11-11 18:32:43,826 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] FINISH, GlusterVolumesListVDSCommand, return: {660ca9ef-46fc-47b0-9b6b-61ccfd74016c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@10521f1b}, log id: 4b994fd9 2014-11-11 18:32:48,838 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-71) START, GlusterVolumesListVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c), log id: 3b036a37 Thanks, Mario On Fri, Nov 7, 2014 at 11:49 PM, Matt . <yamakasi.014@gmail.com> wrote:
Hi,
Actually it's very simple as described in the docs.
Just stop the engine, make a backup, copy it over, place it back and start it. You can do this in several ways.
The ISO domain is one I would remove and recreate again. ISO domains are actually dumb domains, so nothing can go wrong. Did it some time ago because I needed more performance.
Did it some time ago because I needed more performance.
VDSM can run without the engine, it doesn't need it, as the engine monitors and issues the commands, so when it's not there... VMs just run (until you make them die yourself :))
I would give it 15-30 min.
Cheers,
Matt
2014-11-07 18:36 GMT+01:00 Daniel Helgenberger <daniel.helgenberger@m-box.de>:
Daniel Helgenberger m box bewegtbild GmbH
ACKERSTR. 19 P: +49/30/2408781-22 D-10115 BERLIN F: +49/30/2408781-10
www.m-box.de www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner Handelsregister: Amtsgericht Charlottenburg / HRB 112767 On 07.11.2014, at 15:24, Koen Vanoppen <vanoppen.koen@gmail.com> wrote:
Hi,
We had a consulting partner who did the same for our company. This is his procedure and worked great:
How to migrate the ovirt management engine

Packages
Ensure you have the same packages & versions installed on the destination host as on the source, using 'rpm -qa | grep ovirt'. Make sure the versions are 100% identical.

Default setup
Run 'engine-setup' on the destination host after installing the packages. Use the following configuration:
1. Backup existing configuration
2. On the source host, do:

You might want your consultant to take a look at [1]... Steps a-3d, basically:
engine-backup mode=backup --file=~/ovirt-engine-source --log=backup.log

   a. service ovirt-engine stop
   b. service ovirt-engine-dwhd stop
   c. mkdir ~/backup
   d. tar -C /etc/pki/ovirt-engine -czpf ~/backup/ovirt-engine-pki.tar.gz .
   e. tar -C /etc/ovirt-engine -czpf ~/backup/ovirt-engine-conf.tar.gz .
   f. cd /usr/share/ovirt-engine/dbscripts
   g. ./backup.sh
   h. mv engine_*.sql ~/backup/engine.sql
3. You may also want to backup dwh & reports:
   a. cd /usr/share/ovirt-engine/bin/
   b. ./engine-backup.sh --mode=backup --scope=db --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/engine-backup --log=/tmp/engine-backup.log
   c. ./engine-backup.sh --mode=backup --scope=dwhdb --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/dwh-backup --log=/tmp/engine-backup.log
   d. ./engine-backup.sh --mode=backup --scope=reportsdb --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/reports-backup --log=/tmp/engine-backup.log
4. Download these backup files, and copy them to the destination host.

Restore configuration
1. On the destination host, do:

Again, steps a-h, basically:
engine-setup
engine-cleanup
engine-backup mode=restore --file=~/ovirt-engine-source --log=backup.log

Also, I would run a second engine-setup. After that, you should be good to go.

Of course, depending on your previous engine setup this could be a little more complicated. Still, quite straightforward.
[1] http://www.ovirt.org/Ovirt-engine-backup

   a. service ovirt-engine stop
   b. service ovirt-engine-dwhd stop
   c. cd backup
   d. tar -C /etc/pki/ovirt-engine -xzpf ovirt-engine-pki.tar.gz
   e. tar -C /etc/ovirt-engine -xzpf ovirt-engine-conf.tar.gz
   f. tar -xvjf engine-backup
   g. tar -xvjf dwh-backup
   h. tar -xvjf reports-backup

Restore Database
1. On the destination host do:
   a. su - postgres -c "psql -d template1 -c 'drop database engine;'"
   b. su - postgres -c "psql -d template1 -c 'create database engine owner engine;'"
   c. su - postgres
   d. psql
   e. \c engine
   f. \i /path/to/backup/engine.sql
NOTE: in case you have issues logging in to the database, add the following line to the pg_hba.conf file:
host all engine 127.0.0.1/32 trust
2. Fix engine password:
   a. su - postgres
   b. psql
   c. alter user engine with password 'XXXXXXX';

Change ovirt hostname
On the destination host, run:
/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename
NB: Restoring the dwh/reports database is similar to steps 5-7, but omitted from this document due to problems starting the reporting service.
2014-11-07 10:28 GMT+01:00 Sven Kieske <s.kieske@mittwald.de>:
On 07/11/14 10:10, Ml Ml wrote:
anyone? :)
Or are you only doing backups, no restore? :-P
gladly I just had to test disaster recovery and not actually perform it (yet) :D
To be honest: I never have restored ovirt-engine with running vdsm hosts connected to it, sounds like a lot of fun, I see if I can grab some time and try this out myself :)
By your description I guess you have the nfs/iso domain on your engine host? Why don't you just separate it, so there is no need for remounts if your engine is destroyed.
HTH
-- Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator Mittwald CM Service GmbH & Co. KG Königsberger Straße 6 32339 Espelkamp T: +49-5772-293-100 F: +49-5772-293-333 https://www.mittwald.de Geschäftsführer: Robert Meyer St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
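For the SpmStart / "Storage domain does not exist, code = 358" errors in the log above, a few things one could check on the nodes themselves before digging further (suggestions only, not a definitive diagnosis; the UUID is the storageId from the compensation message in the log):

  # on ovirt-node01 / ovirt-node02
  gluster volume status                       # are all bricks online?
  mount | grep glusterSD                      # is the gluster volume mounted under /rhev?
  ls /rhev/data-center/mnt/glusterSD/*/       # are the domain UUID directories visible?
  vdsClient -s 0 getStorageDomainsList        # does vdsm still report abc51e26-7175-4b38-b3a8-95c6928fbc2b?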

Anyone? :-(
On Tue, Nov 11, 2014 at 6:39 PM, Ml Ml <mliebherr99@googlemail.com> wrote:
I dunno why this is all so simple for you.
I just replaced the ovirt-engine like described in the docs.
I ejected the CD ISOs on every VM so I was able to delete the ISO_DOMAIN.
But I still have problems with my storage. It's a replicated glusterfs. It looks healthy on the nodes themselves. But somehow my ovirt-engine gets confused. Can someone explain to me what the actual error is?
Note: I only replaced the ovirt-engine host and deleted the ISO_DOMAIN:
2014-11-11 18:32:37,832 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:37,833 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] spmStart polling ended: taskId = 8c5fae2c-0ddb-41cd-ac54-c404c943e00f task status = finished 2014-11-11 18:32:37,834 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:37,888 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] spmStart polling ended, spm status: Free 2014-11-11 18:32:37,889 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] START, HSMClearTaskVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c, taskId=8c5fae2c-0ddb-41cd-ac54-c404c943e00f), log id: 547e26fd 2014-11-11 18:32:37,937 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] FINISH, HSMClearTaskVDSCommand, log id: 547e26fd 2014-11-11 18:32:37,938 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@5027ed97, log id: 461eb5b5 2014-11-11 18:32:37,941 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:37,948 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] IrsBroker::Failed::ActivateStorageDomainVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:38,006 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Irs placed on server 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c failed. 
Proceed Failover 2014-11-11 18:32:38,044 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-29) START, GlusterVolumesListVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c), log id: 7a110756 2014-11-11 18:32:38,045 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] hostFromVds::selectedVds - ovirt-node02.foobar.net, spmStatus Free, storage pool HP_Proliant_DL180G6 2014-11-11 18:32:38,048 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] starting spm on vds ovirt-node02.foobar.net, storage pool HP_Proliant_DL180G6, prevId -1, LVER -1 2014-11-11 18:32:38,050 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] START, SpmStartVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3, storagePoolId = b384b3da-02a6-44f3-a3f6-56751ce8c26d, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 1a6ccb9c 2014-11-11 18:32:38,108 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] spmStart polling started: taskId = 78d31638-70a5-46aa-89e7-1d1e8126bdba 2014-11-11 18:32:38,193 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-29) FINISH, GlusterVolumesListVDSCommand, return: {d46619e9-9368-4e82-bf3a-a2377b6e85e4=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@9746ef53}, log id: 7a110756 2014-11-11 18:32:38,352 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-29) START, GlusterVolumesListVDSCommand(HostName = ovirt-node04.foobar.net, HostId = 073c24e1-003f-412a-be56-0c41a435829a), log id: 2f25d56e 2014-11-11 18:32:38,433 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-29) FINISH, GlusterVolumesListVDSCommand, return: {660ca9ef-46fc-47b0-9b6b-61ccfd74016c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@cd3b51c4}, log id: 2f25d56e 2014-11-11 18:32:39,117 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:39,118 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] spmStart polling ended: taskId = 78d31638-70a5-46aa-89e7-1d1e8126bdba task status = finished 2014-11-11 18:32:39,119 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:39,171 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] spmStart polling ended, spm status: Free 2014-11-11 18:32:39,173 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] START, HSMClearTaskVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3, taskId=78d31638-70a5-46aa-89e7-1d1e8126bdba), log id: 46abf4a0 2014-11-11 18:32:39,220 
INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] FINISH, HSMClearTaskVDSCommand, log id: 46abf4a0 2014-11-11 18:32:39,221 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@7d3782f7, log id: 1a6ccb9c 2014-11-11 18:32:39,224 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:39,232 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] IrsBroker::Failed::ActivateStorageDomainVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:39,235 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] FINISH, ActivateStorageDomainVDSCommand, log id: 75877740 2014-11-11 18:32:39,236 ERROR [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] Command org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.irsbroker.IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed (Failed with error ENGINE and code 5001) 2014-11-11 18:32:39,239 INFO [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] Command [id=c5315de2-0817-4da2-a13e-50c8cfa93a6a]: Compensating CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: EntityStatusSnapshot [id=storagePoolId = b384b3da-02a6-44f3-a3f6-56751ce8c26d, storageId = abc51e26-7175-4b38-b3a8-95c6928fbc2b, status=Unknown]. 2014-11-11 18:32:39,243 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-39) [4777665a] Correlation ID: 71891fe3, Job ID: 239d4ac0-aa7d-486a-a70f-55a9d1b910f4, Call Stack: null, Custom Event ID: -1, Message: Failed to activate Storage Domain RaidVolBGluster (Data Center HP_Proliant_DL180G6) by admin 2014-11-11 18:32:40,566 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] Command org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand return value
TaskStatusListReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=654, mMessage=Not SPM]]
2014-11-11 18:32:40,569 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] HostName = ovirt-node02.foobar.net 2014-11-11 18:32:40,570 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] Command HSMGetAllTasksStatusesVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3) execution failed. Exception: IRSNonOperationalException: IRSGenericException: IRSErrorException: IRSNonOperationalException: Not SPM 2014-11-11 18:32:40,625 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [47871083] hostFromVds::selectedVds - ovirt-node02.foobar.net, spmStatus Free, storage pool HP_Proliant_DL180G6 2014-11-11 18:32:40,628 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [47871083] starting spm on vds ovirt-node02.foobar.net, storage pool HP_Proliant_DL180G6, prevId -1, LVER -1 2014-11-11 18:32:40,630 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] START, SpmStartVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3, storagePoolId = b384b3da-02a6-44f3-a3f6-56751ce8c26d, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 1f3ac280 2014-11-11 18:32:40,687 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] spmStart polling started: taskId = 50ab033e-76cd-44d5-b661-a1c2b8c312ef 2014-11-11 18:32:41,735 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:41,736 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] spmStart polling ended: taskId = 50ab033e-76cd-44d5-b661-a1c2b8c312ef task status = finished 2014-11-11 18:32:41,737 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:41,790 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] spmStart polling ended, spm status: Free 2014-11-11 18:32:41,791 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] START, HSMClearTaskVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3, taskId=50ab033e-76cd-44d5-b661-a1c2b8c312ef), log id: 852d287 2014-11-11 18:32:41,839 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] FINISH, HSMClearTaskVDSCommand, log id: 852d287 2014-11-11 18:32:41,840 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@32b92b73, log id: 1f3ac280 2014-11-11 18:32:41,843 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] Running command: SetStoragePoolStatusCommand 
internal: true. Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:41,851 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] IrsBroker::Failed::GetStoragePoolInfoVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:41,909 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] Irs placed on server 6948da12-0b8a-4b6d-a9af-162e6c25dad3 failed. Proceed Failover 2014-11-11 18:32:41,928 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] hostFromVds::selectedVds - ovirt-node01.foobar.net, spmStatus Free, storage pool HP_Proliant_DL180G6 2014-11-11 18:32:41,930 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] starting spm on vds ovirt-node01.foobar.net, storage pool HP_Proliant_DL180G6, prevId -1, LVER -1 2014-11-11 18:32:41,932 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] START, SpmStartVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c, storagePoolId = b384b3da-02a6-44f3-a3f6-56751ce8c26d, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 56dfcc3c 2014-11-11 18:32:41,984 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] spmStart polling started: taskId = 84ac9f17-d5ec-4e43-8fcc-8ca9065a8492 2014-11-11 18:32:42,993 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:42,994 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] spmStart polling ended: taskId = 84ac9f17-d5ec-4e43-8fcc-8ca9065a8492 task status = finished 2014-11-11 18:32:42,995 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:43,048 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] spmStart polling ended, spm status: Free 2014-11-11 18:32:43,049 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] START, HSMClearTaskVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c, taskId=84ac9f17-d5ec-4e43-8fcc-8ca9065a8492), log id: 5abaa4ce 2014-11-11 18:32:43,098 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] FINISH, HSMClearTaskVDSCommand, log id: 5abaa4ce 2014-11-11 18:32:43,098 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@7d9b9905, log id: 56dfcc3c 2014-11-11 18:32:43,101 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-28) [725b57af] Running command: SetStoragePoolStatusCommand internal: true. 
Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:43,108 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [725b57af] IrsBroker::Failed::GetStoragePoolInfoVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:43,444 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] START, GlusterVolumesListVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c), log id: 12ae9c47 2014-11-11 18:32:43,585 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] FINISH, GlusterVolumesListVDSCommand, return: {d46619e9-9368-4e82-bf3a-a2377b6e85e4=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a5d949dc}, log id: 12ae9c47 2014-11-11 18:32:43,745 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] START, GlusterVolumesListVDSCommand(HostName = ovirt-node04.foobar.net, HostId = 073c24e1-003f-412a-be56-0c41a435829a), log id: 4b994fd9 2014-11-11 18:32:43,826 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] FINISH, GlusterVolumesListVDSCommand, return: {660ca9ef-46fc-47b0-9b6b-61ccfd74016c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@10521f1b}, log id: 4b994fd9 2014-11-11 18:32:48,838 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-71) START, GlusterVolumesListVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c), log id: 3b036a37
Thanks, Mario
On Fri, Nov 7, 2014 at 11:49 PM, Matt . <yamakasi.014@gmail.com> wrote:
Hi,
Actually it's very simple as described in the docs.
Just stop the engine, make a backup, copy it over, place it back and start it. You can do this in several ways.
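(Roughly, and leaning on the engine-backup flow Daniel describes further down in this thread - the host name and file paths below are only placeholders:)

# on the old engine host
service ovirt-engine stop
engine-backup --mode=backup --file=/root/engine-backup.tar.bz2 --log=/root/engine-backup.log
scp /root/engine-backup.tar.bz2 root@new-engine.example.com:/root/

# on the new engine host (same oVirt packages/versions installed)
engine-setup              # create a baseline installation
engine-cleanup            # drop the fresh config/db so the restore can replace it
engine-backup --mode=restore --file=/root/engine-backup.tar.bz2 --log=/root/engine-restore.log
engine-setup              # run setup again on top of the restored data
service ovirt-engine start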
The ISO domain is the one I would just remove and recreate. ISO domains are actually dumb domains, so nothing much can go wrong.
Did it some time ago because I needed more performance.
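(For what it's worth, recreating the NFS export behind a fresh ISO domain is only a couple of lines; the path below is just an example, and 36:36 is assumed to be the usual vdsm:kvm uid/gid:)

# /etc/exports on whatever host will serve the new ISO domain
/srv/ovirt/iso    *(rw,sync,no_subtree_check)

mkdir -p /srv/ovirt/iso
chown -R 36:36 /srv/ovirt/iso     # vdsm:kvm
exportfs -r
showmount -e localhost            # verify the export is visible

Then attach it as a new ISO domain in the webadmin UI and copy the ISOs back in.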
VDSM can run without the engine; it doesn't need it, as the engine only monitors and issues the commands. So when the engine is not there... the VMs just keep running (until you make them die yourself :))
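(You can check this on any host while the engine is down; on a 3.x-era node something like the following should still list the running VMs straight from the local vdsm, no engine involved:)

vdsClient -s 0 list table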
I would give it 15-30 min.
Cheers,
Matt
2014-11-07 18:36 GMT+01:00 Daniel Helgenberger <daniel.helgenberger@m-box.de>:
Daniel Helgenberger m box bewegtbild GmbH
ACKERSTR. 19 P: +49/30/2408781-22 D-10115 BERLIN F: +49/30/2408781-10
www.m-box.de www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner Handelsregister: Amtsgericht Charlottenburg / HRB 112767

On 07.11.2014, at 15:24, Koen Vanoppen <vanoppen.koen@gmail.com> wrote:
Hi,
We had a consulting partner who did the same for our company. This is his procedure and worked great:
How to migrate ovirt management engine Packages Ensure you have the same packages & versions installed on the destination host as on the source, using 'rpm -qa | grep ovirt'. Make sure versions are 100% identical. Default setup
Run 'engine-setup' on the destination host after installing the packages. Use the following configuration: 1. Backup existing configuration 2. On the source host, do:
You might want your consultant to take a look at [1]... Steps a-3d:
engine-backup --mode=backup --file=~/ovirt-engine-source --log=backup.log
a. service ovirt-engine stop b. service ovirt-engine-dwhd stop c. mkdir ~/backup d. tar -C /etc/pki/ovirt-engine -czpf ~/backup/ovirt-engine-pki.tar.gz . e. tar -C /etc/ovirt-engine -czpf ~/backup/ovirt-engine-conf.tar.gz . f. cd /usr/share/ovirt-engine/dbscripts g. ./backup.sh h. mv engine_*.sql ~/backup/engine.sql 3. You may also want to backup dwh & reports: a. cd /usr/share/ovirt-engine/bin/ b. ./engine-backup.sh --mode=backup --scope=db --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/engine-backup --log=/tmp/engine-backup.log c. ./engine-backup.sh --mode=backup --scope=dwhdb --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/dwh-backup --log=/tmp/engine-backup.log d. ./engine-backup.sh --mode=backup --scope=reportsdb --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/reports-backup --log=/tmp/engine-backup.log 4. Download these backup files, and copy them to the destination host. Restore configuration 1. On the destination host, do:
Again, steps a-h, basically:
engine-setup
engine-cleanup
engine-backup --mode=restore --file=~/ovirt-engine-source --log=backup.log
Also, I would run a second engine-setup. After that, you should be good to go.
Of course, depending on your previous engine setup this could be a little more complicated. Still, quite straightforward. [1] http://www.ovirt.org/Ovirt-engine-backup
a. service ovirt-engine stop b. service ovirt-engine-dwhd stop c. cd backup d. tar -C /etc/pki/ovirt-engine -xzpf ovirt-engine-pki.tar.gz e. tar -C /etc/ovirt-engine -xzpf ovirt-engine-conf.tar.gz f. tar -xvjf engine-backup g. tar -xvjf dwh-backup h. tar -xvjf reports-backup
Restore Database 1. On the destination host do: a. su - postgres -c "psql -d template1 -c 'drop database engine;'" b. su - postgres -c "psql -d template1 -c 'create database engine owner engine;'" c. su - postgres d. psql e. \c engine f. \i /path/to/backup/engine.sql NOTE: in case you have issues logging in to the database, add the following line to the pg_hba.conf file:
host all engine 127.0.0.1/32 trust
2. Fix engine password: a. su - postgres b. psql c. alter user engine with password 'XXXXXXX'; Change ovirt hostname On the destination host, run:
/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename
NB: Restoring the dwh/reports database is similar to steps 5-7, but omitted from this document due to problems starting the reporting service.

Sorry, no idea. Does not seem very related to hosted-engine. Perhaps better to change the subject (add 'gluster'?) to attract other people. Also please post all relevant logs - hosted-engine, vdsm, all engine logs.

-- Didi

----- Original Message -----
From: "Ml Ml" <mliebherr99@googlemail.com> To: "Matt ." <yamakasi.014@gmail.com> Cc: users@ovirt.org Sent: Wednesday, November 12, 2014 3:06:04 PM Subject: Re: [ovirt-users] replace ovirt engine host
Anyone? :-(
On Tue, Nov 11, 2014 at 6:39 PM, Ml Ml <mliebherr99@googlemail.com> wrote:
I dunno why this is all so simple for you.
I just replaced the ovirt-engine as described in the docs.
I ejected the CD ISOs on every VM so I was able to delete the ISO_DOMAIN.
But I still have problems with my storage. It's a replicated GlusterFS. It looks healthy on the nodes themselves, but somehow my ovirt-engine gets confused. Can someone explain to me what the actual error is?
Note: I only replaced the ovirt-engine host and deleted the ISO_DOMAIN:
2014-11-11 18:32:37,832 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:37,833 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] spmStart polling ended: taskId = 8c5fae2c-0ddb-41cd-ac54-c404c943e00f task status = finished 2014-11-11 18:32:37,834 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:37,888 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] spmStart polling ended, spm status: Free 2014-11-11 18:32:37,889 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] START, HSMClearTaskVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c, taskId=8c5fae2c-0ddb-41cd-ac54-c404c943e00f), log id: 547e26fd 2014-11-11 18:32:37,937 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] FINISH, HSMClearTaskVDSCommand, log id: 547e26fd 2014-11-11 18:32:37,938 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@5027ed97, log id: 461eb5b5 2014-11-11 18:32:37,941 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:37,948 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] IrsBroker::Failed::ActivateStorageDomainVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:38,006 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Irs placed on server 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c failed. 
Thanks, Mario
Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

On 12/11/2014 14:06, Ml Ml wrote:
Anyone? :-(
Dan, Nir, can you take a look?
On Fri, Nov 7, 2014 at 11:49 PM, Matt . <yamakasi.014@gmail.com> wrote:
Hi,
Actually it's very simple as described in the docs.
Just stop the engine, make a backup, copy it over, place it back and start it. You can do this in several ways.
The ISO domain is what I would remove and recreate again. ISO domains are actually dumb domains, so nothing can go wrong.
Did it some time ago because I needed more performance.
VDSM can run without the engine; it doesn't need it, as the engine only monitors and issues the commands, so when it's not there... VMs just run (until you make them die yourself :))
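(That part is easy to check on a node even while the engine is down. libvirt on a vdsm host normally wants SASL credentials, but a read-only connection works without them, so for example:

virsh -r list

will still show the running VMs. The -r flag is the read-only connection; this is just an illustration, not part of the original procedure.)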
I would give it 15-30 min.
Cheers,
Matt
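For reference, the stop/backup/copy/restore flow described above boils down to something like the following. This is only a rough sketch built around the engine-backup tool mentioned later in the thread; the hostname and file names are placeholders, and the setup/cleanup/restore/setup order follows Daniel's suggestion further down:

# on the old engine host
service ovirt-engine stop
engine-backup --mode=backup --file=/root/engine-backup.tar.bz2 --log=/root/engine-backup.log
scp /root/engine-backup.tar.bz2 root@new-engine.example.com:/root/

# on the new engine host (same oVirt package versions installed)
engine-setup        # initial setup
engine-cleanup      # clear it out again so the restore has a clean target
engine-backup --mode=restore --file=/root/engine-backup.tar.bz2 --log=/root/engine-restore.log
engine-setup        # second run, reconfigures against the restored data
service ovirt-engine start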
2014-11-07 18:36 GMT+01:00 Daniel Helgenberger <daniel.helgenberger@m-box.de>:
Daniel Helgenberger m box bewegtbild GmbH
ACKERSTR. 19 P: +49/30/2408781-22 D-10115 BERLIN F: +49/30/2408781-10
www.m-box.de www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner Handelsregister: Amtsgericht Charlottenburg / HRB 112767
On 07.11.2014, at 15:24, Koen Vanoppen <vanoppen.koen@gmail.com> wrote:
Hi,
We had a consulting partner who did the same for our company. This is his procedure and it worked great:
How to migrate ovirt management engine

Packages
Ensure you have the same packages & versions installed on the destination host as on the source, using 'rpm -qa | grep ovirt'. Make sure versions are 100% identical.

Default setup
Run 'engine-setup' on the destination host after installing the packages. Use the following configuration:

1. Backup existing configuration
2. On the source host, do:
You might want your consultant to take a look at [1]... Steps a-3d are basically:
engine-backup --mode=backup --file=~/ovirt-engine-source --log=backup.log
a. service ovirt-engine stop
b. service ovirt-engine-dwhd stop
c. mkdir ~/backup
d. tar -C /etc/pki/ovirt-engine -czpf ~/backup/ovirt-engine-pki.tar.gz .
e. tar -C /etc/ovirt-engine -czpf ~/backup/ovirt-engine-conf.tar.gz .
f. cd /usr/share/ovirt-engine/dbscripts
g. ./backup.sh
h. mv engine_*.sql ~/backup/engine.sql
3. You may also want to backup dwh & reports:
a. cd /usr/share/ovirt-engine/bin/
b. ./engine-backup.sh --mode=backup --scope=db --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/engine-backup --log=/tmp/engine-backup.log
c. ./engine-backup.sh --mode=backup --scope=dwhdb --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/dwh-backup --log=/tmp/engine-backup.log
d. ./engine-backup.sh --mode=backup --scope=reportsdb --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/reports-backup --log=/tmp/engine-backup.log
4. Download these backup files, and copy them to the destination host.

Restore configuration
1. On the destination host, do:
Again, steps a-h are basically:
engine-setup
engine-cleanup
engine-backup --mode=restore --file=~/ovirt-engine-source --log=backup.log
Also, I would run a second engine-setup. After that, you should be good to go.
Of course, depending on your previous engine setup this could be a little more complicated. Still, quite straightforward.

[1] http://www.ovirt.org/Ovirt-engine-backup
a. service ovirt-engine stop
b. service ovirt-engine-dwhd stop
c. cd backup
d. tar -C /etc/pki/ovirt-engine -xzpf ovirt-engine-pki.tar.gz
e. tar -C /etc/ovirt-engine -xzpf ovirt-engine-conf.tar.gz
f. tar -xvjf engine-backup
g. tar -xvjf dwh-backup
h. tar -xvjf reports-backup
Restore Database
1. On the destination host do:
a. su - postgres -c "psql -d template1 -c 'drop database engine;'"
b. su - postgres -c "psql -d template1 -c 'create database engine owner engine;'"
c. su - postgres
d. psql
e. \c engine
f. \i /path/to/backup/engine.sql
NOTE: in case you have issues logging in to the database, add the following line to the pg_hba.conf file:
host all engine 127.0.0.1/32 trust
2. Fix engine password:
a. su - postgres
b. psql
c. alter user engine with password 'XXXXXXX';

Change ovirt hostname
On the destination host, run:
/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename
NB: Restoring the dwh/reports database is similar to steps 5-7, but omitted from this document due to problems starting the reporting service.
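As an aside, the interactive psql part of the database restore above (steps c-f) can also be done non-interactively; a minimal sketch, reusing the dump path from the procedure:

# drop and recreate the engine database, then load the dump in one shot
su - postgres -c "psql -d template1 -c 'drop database engine;'"
su - postgres -c "psql -d template1 -c 'create database engine owner engine;'"
su - postgres -c "psql -d engine -f /path/to/backup/engine.sql"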
2014-11-07 10:28 GMT+01:00 Sven Kieske <s.kieske@mittwald.de>:
On 07/11/14 10:10, Ml Ml wrote:
anyone? :)
Or are you only doing backups, no restore? :-P
gladly I just had to test disaster recovery and not actually perform it (yet) :D
To be honest: I have never restored ovirt-engine with running vdsm hosts connected to it. Sounds like a lot of fun; I'll see if I can grab some time and try this out myself :)
By your description I guess you have the nfs/iso domain on your engine host? Why don't you just separate it, so there is no need for remounts if your engine is destroyed.
HTH
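For what it's worth, separating the ISO domain really just means exporting it from any host other than the engine. A minimal sketch of such an NFS export follows; the path and export options are illustrative, and the chown to 36:36 matches the vdsm user/group the hypervisors expect:

# on a storage box that is not the engine
mkdir -p /exports/iso
chown 36:36 /exports/iso
echo '/exports/iso *(rw)' >> /etc/exports
exportfs -ra
# then attach it in the webadmin as a new ISO domain and move the ISO images over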
-- Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator Mittwald CM Service GmbH & Co. KG Königsberger Straße 6 32339 Espelkamp T: +49-5772-293-100 F: +49-5772-293-333 https://www.mittwald.de Geschäftsführer: Robert Meyer St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com

Here is the vdsm log of my ovirt-node01: fda6e0ee-33e9-4eb2-b724-34f7a5492e83::ERROR::2014-11-12 16:13:20,071::sp::330::Storage.StoragePool::(startSpm) failed: Storage domain does not exist: ('6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1',) fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,071::sp::336::Storage.StoragePool::(_shutDownUpgrade) Shutting down upgrade process fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,071::resourceManager::198::ResourceManager.Request::(__init__) ResName=`Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d`ReqID=`7ec0dd55-0b56-4d8a-bc21-5aa6fe2ec373`::Request was made in '/usr/share/vdsm/storage/sp.py' line '338' at '_shutDownUpgrade' fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,071::resourceManager::542::ResourceManager::(registerResource) Trying to register resource 'Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d' for lock type 'exclusive' fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,072::resourceManager::601::ResourceManager::(registerResource) Resource 'Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d' is free. Now locking as 'exclusive' (1 active user) fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,072::resourceManager::238::ResourceManager.Request::(grant) ResName=`Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d`ReqID=`7ec0dd55-0b56-4d8a-bc21-5aa6fe2ec373`::Granted request fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,072::resourceManager::198::ResourceManager.Request::(__init__) ResName=`Storage.upgrade_6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1`ReqID=`a6bd57b0-5ac0-459a-a4c2-2a5a58c4b1ea`::Request was made in '/usr/share/vdsm/storage/sp.py' line '358' at '_shutDownUpgrade' fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,073::resourceManager::542::ResourceManager::(registerResource) Trying to register resource 'Storage.upgrade_6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1' for lock type 'exclusive' fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,073::resourceManager::601::ResourceManager::(registerResource) Resource 'Storage.upgrade_6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1' is free. Now locking as 'exclusive' (1 active user) fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,073::resourceManager::238::ResourceManager.Request::(grant) ResName=`Storage.upgrade_6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1`ReqID=`a6bd57b0-5ac0-459a-a4c2-2a5a58c4b1ea`::Granted request fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,073::resourceManager::616::ResourceManager::(releaseResource) Trying to release resource 'Storage.upgrade_6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1' fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,073::resourceManager::635::ResourceManager::(releaseResource) Released resource 'Storage.upgrade_6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1' (0 active users) fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,074::resourceManager::641::ResourceManager::(releaseResource) Resource 'Storage.upgrade_6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1' is free, finding out if anyone is waiting for it. fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,074::resourceManager::649::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.upgrade_6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1', Clearing records. 
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,074::resourceManager::616::ResourceManager::(releaseResource) Trying to release resource 'Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d' fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,074::resourceManager::635::ResourceManager::(releaseResource) Released resource 'Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d' (0 active users) fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,075::resourceManager::641::ResourceManager::(releaseResource) Resource 'Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d' is free, finding out if anyone is waiting for it. fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,075::resourceManager::649::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.upgrade_b384b3da-02a6-44f3-a3f6-56751ce8c26d', Clearing records. fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,075::persistentDict::167::Storage.PersistentDict::(transaction) Starting transaction fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,075::persistentDict::173::Storage.PersistentDict::(transaction) Flushing changes fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,076::persistentDict::299::Storage.PersistentDict::(flush) about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=RaidVolBGluster', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=1', 'POOL_DESCRIPTION=HP_Proliant_DL18 0G6', 'POOL_DOMAINS=6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1:Active,abc51e26-7175-4b38-b3a8-95c6928fbc2b:Active', 'POOL_SPM_ID=-1', 'POOL_SPM_LVER=0', 'POOL_UUID=b384b3da-02a6-44f3-a3f6-56751ce8c26d', 'REMOTE_PATH=127.0.0.1:/RaidVolB', 'ROLE=Master', 'SDUUID=abc51e26-7175-4b38-b3a8-95c6928fbc2b', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=9b444340971e2506b55bfe1d4 a662fde62adbeaa'] fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,082::persistentDict::175::Storage.PersistentDict::(transaction) Finished transaction fda6e0ee-33e9-4eb2-b724-34f7a5492e83::INFO::2014-11-12 16:13:20,082::clusterlock::279::SANLock::(release) Releasing cluster lock for domain abc51e26-7175-4b38-b3a8-95c6928fbc2b Thread-28::DEBUG::2014-11-12 16:13:20,270::BindingXMLRPC::1067::vds::(wrapper) client [192.168.150.8]::call volumesList with () {} flowID [58a6ac1e] Thread-28::DEBUG::2014-11-12 16:13:20,403::BindingXMLRPC::1074::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'RaidVolB': {'transportType': ['TCP'], 'uuid': 'd46619e9-9368-4e82-bf3a-a2377b6e85e4', 'bricks': ['ovirt-node01.foobar.net:/raidvol/volb', 'ovirt-node02.foobar.net:/raidvol/volb'], 'volume Name': 'RaidVolB', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [{'name': 'ovirt-node01.foobar.net:/raidvol/volb', 'hostUuid': 'de2a515f-c728-449d-b91c-d80cabe0539f'}, {'name': 'ovirt-node02.foobar.net:/raidvol/volb', 'hostUuid': '7540f5c0-c4ba-4 520-bdf1-3115c10d0eea'}], 'options': {'user.cifs': 'disable', 'storage.owner-gid': '36', 'storage.owner-uid': '36', 'nfs.disable': 'on', 'auth.allow': '*'}}}} fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,460::clusterlock::289::SANLock::(release) Cluster lock for domain abc51e26-7175-4b38-b3a8-95c6928fbc2b successfully released fda6e0ee-33e9-4eb2-b724-34f7a5492e83::ERROR::2014-11-12 
16:13:20,460::task::866::TaskManager.Task::(_setError) Task=`fda6e0ee-33e9-4eb2-b724-34f7a5492e83`::Unexpected error Traceback (most recent call last): File "/usr/share/vdsm/storage/task.py", line 873, in _run return fn(*args, **kargs) File "/usr/share/vdsm/storage/task.py", line 334, in run return self.cmd(*self.argslist, **self.argsdict) File "/usr/share/vdsm/storage/sp.py", line 296, in startSpm self._updateDomainsRole() File "/usr/share/vdsm/storage/securable.py", line 75, in wrapper return method(self, *args, **kwargs) File "/usr/share/vdsm/storage/sp.py", line 205, in _updateDomainsRole domain = sdCache.produce(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 98, in produce domain.getRealDomain() File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain return self._cache._realProduce(self._sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce domain = self._findDomain(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain dom = findMethod(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain raise se.StorageDomainDoesNotExist(sdUUID) StorageDomainDoesNotExist: Storage domain does not exist: ('6d882c77-cdbc-48ef-ae21-1a6d45e7f8a1',) fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,461::task::885::TaskManager.Task::(_run) Task=`fda6e0ee-33e9-4eb2-b724-34f7a5492e83`::Task._run: fda6e0ee-33e9-4eb2-b724-34f7a5492e83 () {} failed - stopping task fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,461::task::1211::TaskManager.Task::(stop) Task=`fda6e0ee-33e9-4eb2-b724-34f7a5492e83`::stopping in state running (force False) fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,461::task::990::TaskManager.Task::(_decref) Task=`fda6e0ee-33e9-4eb2-b724-34f7a5492e83`::ref 1 aborting True fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,461::task::916::TaskManager.Task::(_runJobs) Task=`fda6e0ee-33e9-4eb2-b724-34f7a5492e83`::aborting: Task is aborted: 'Storage domain does not exist' - code 358 fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,462::task::990::TaskManager.Task::(_decref) Task=`fda6e0ee-33e9-4eb2-b724-34f7a5492e83`::ref 0 aborting True fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,462::task::925::TaskManager.Task::(_doAbort) Task=`fda6e0ee-33e9-4eb2-b724-34f7a5492e83`::Task._doAbort: force False fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,462::resourceManager::977::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,462::task::595::TaskManager.Task::(_updateState) Task=`fda6e0ee-33e9-4eb2-b724-34f7a5492e83`::moving from state running -> state aborting fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,462::task::550::TaskManager.Task::(__state_aborting) Task=`fda6e0ee-33e9-4eb2-b724-34f7a5492e83`::_aborting: recover policy auto fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,463::task::595::TaskManager.Task::(_updateState) Task=`fda6e0ee-33e9-4eb2-b724-34f7a5492e83`::moving from state aborting -> state racquiring fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,463::task::595::TaskManager.Task::(_updateState) Task=`fda6e0ee-33e9-4eb2-b724-34f7a5492e83`::moving from state racquiring -> state recovering fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,463::task::798::TaskManager.Task::(_recover) Task=`fda6e0ee-33e9-4eb2-b724-34f7a5492e83`::_recover 
fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,463::task::805::TaskManager.Task::(_recover) Task=`fda6e0ee-33e9-4eb2-b724-34f7a5492e83`::running recovery None fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,463::task::786::TaskManager.Task::(_recoverDone) Task=`fda6e0ee-33e9-4eb2-b724-34f7a5492e83`::Recover Done: state recovering fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,463::task::595::TaskManager.Task::(_updateState) Task=`fda6e0ee-33e9-4eb2-b724-34f7a5492e83`::moving from state recovering -> state recovered fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,464::resourceManager::940::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.b384b3da-02a6-44f3-a3f6-56751ce8c26d': < ResourceRef 'Storage.b384b3da-02a6-44f3-a3f6-56751ce8c26d', isValid: 'True' obj: 'None'>} fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,464::resourceManager::977::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,464::resourceManager::616::ResourceManager::(releaseResource) Trying to release resource 'Storage.b384b3da-02a6-44f3-a3f6-56751ce8c26d' fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,464::resourceManager::635::ResourceManager::(releaseResource) Released resource 'Storage.b384b3da-02a6-44f3-a3f6-56751ce8c26d' (0 active users) fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,465::resourceManager::641::ResourceManager::(releaseResource) Resource 'Storage.b384b3da-02a6-44f3-a3f6-56751ce8c26d' is free, finding out if anyone is waiting for it. fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,465::resourceManager::649::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.b384b3da-02a6-44f3-a3f6-56751ce8c26d', Clearing records. fda6e0ee-33e9-4eb2-b724-34f7a5492e83::DEBUG::2014-11-12 16:13:20,465::threadPool::57::Misc.ThreadPool::(setRunningTask) Number of running tasks: 0 Thread-28::DEBUG::2014-11-12 16:13:20,940::BindingXMLRPC::251::vds::(wrapper) client [192.168.150.8] flowID [4c2997b8] Thread-28::DEBUG::2014-11-12 16:13:20,941::task::595::TaskManager.Task::(_updateState) Task=`60c56406-16d3-4dcd-986f-41f2bc1f78cb`::moving from state init -> state preparing Thread-28::INFO::2014-11-12 16:13:20,941::logUtils::44::dispatcher::(wrapper) Run and protect: getTaskStatus(taskID='fda6e0ee-33e9-4eb2-b724-34f7a5492e83', spUUID=None, options=None) Thread-28::DEBUG::2014-11-12 16:13:20,941::taskManager::93::TaskManager::(getTaskStatus) Entry. taskID: fda6e0ee-33e9-4eb2-b724-34f7a5492e83 Thread-28::DEBUG::2014-11-12 16:13:20,941::taskManager::96::TaskManager::(getTaskStatus) Return. 
Response: {'code': 358, 'message': 'Storage domain does not exist', 'taskState': 'finished', 'taskResult': 'cleanSuccess', 'taskID': 'fda6e0ee-33e9-4eb2-b724-34f7a5492e83'} Thread-28::INFO::2014-11-12 16:13:20,941::logUtils::47::dispatcher::(wrapper) Run and protect: getTaskStatus, Return response: {'taskStatus': {'code': 358, 'message': 'Storage domain does not exist', 'taskState': 'finished', 'taskResult': 'cleanSuccess', 'taskID': 'fda6e0ee-33e9-4eb2-b724-34f7a5492e83'}} Thread-28::DEBUG::2014-11-12 16:13:20,942::task::1185::TaskManager.Task::(prepare) Task=`60c56406-16d3-4dcd-986f-41f2bc1f78cb`::finished: {'taskStatus': {'code': 358, 'message': 'Storage domain does not exist', 'taskState': 'finished', 'taskResult': 'cleanSuccess', 'taskID': 'fda6e0ee-33e9-4eb2-b724-34f7a5492e83'}} Thread-28::DEBUG::2014-11-12 16:13:20,942::task::595::TaskManager.Task::(_updateState) Task=`60c56406-16d3-4dcd-986f-41f2bc1f78cb`::moving from state preparing -> state finished Thread-28::DEBUG::2014-11-12 16:13:20,942::resourceManager::940::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-28::DEBUG::2014-11-12 16:13:20,942::resourceManager::977::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-28::DEBUG::2014-11-12 16:13:20,942::task::990::TaskManager.Task::(_decref) Task=`60c56406-16d3-4dcd-986f-41f2bc1f78cb`::ref 0 aborting False Thread-28::DEBUG::2014-11-12 16:13:20,951::BindingXMLRPC::251::vds::(wrapper) client [192.168.150.8] flowID [4c2997b8] Thread-28::DEBUG::2014-11-12 16:13:20,952::task::595::TaskManager.Task::(_updateState) Task=`a421f847-c259-4bdf-929a-b2df3568e881`::moving from state init -> state preparing Thread-28::INFO::2014-11-12 16:13:20,952::logUtils::44::dispatcher::(wrapper) Run and protect: getSpmStatus(spUUID='b384b3da-02a6-44f3-a3f6-56751ce8c26d', options=None) Thread-28::INFO::2014-11-12 16:13:20,956::logUtils::47::dispatcher::(wrapper) Run and protect: getSpmStatus, Return response: {'spm_st': {'spmId': -1, 'spmStatus': 'Free', 'spmLver': -1}} Thread-28::DEBUG::2014-11-12 16:13:20,957::task::1185::TaskManager.Task::(prepare) Task=`a421f847-c259-4bdf-929a-b2df3568e881`::finished: {'spm_st': {'spmId': -1, 'spmStatus': 'Free', 'spmLver': -1}} Thread-28::DEBUG::2014-11-12 16:13:20,957::task::595::TaskManager.Task::(_updateState) Task=`a421f847-c259-4bdf-929a-b2df3568e881`::moving from state preparing -> state finished Thread-28::DEBUG::2014-11-12 16:13:20,957::resourceManager::940::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-28::DEBUG::2014-11-12 16:13:20,957::resourceManager::977::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-28::DEBUG::2014-11-12 16:13:20,957::task::990::TaskManager.Task::(_decref) Task=`a421f847-c259-4bdf-929a-b2df3568e881`::ref 0 aborting False Thread-28::DEBUG::2014-11-12 16:13:21,006::BindingXMLRPC::251::vds::(wrapper) client [192.168.150.8] flowID [4c2997b8] Thread-28::DEBUG::2014-11-12 16:13:21,006::task::595::TaskManager.Task::(_updateState) Task=`8c502838-deb0-41a6-a981-8b34acdb71c9`::moving from state init -> state preparing Thread-28::INFO::2014-11-12 16:13:21,006::logUtils::44::dispatcher::(wrapper) Run and protect: clearTask(taskID='fda6e0ee-33e9-4eb2-b724-34f7a5492e83', spUUID=None, options=None) Thread-28::DEBUG::2014-11-12 16:13:21,007::taskManager::161::TaskManager::(clearTask) Entry. taskID: fda6e0ee-33e9-4eb2-b724-34f7a5492e83 Thread-28::DEBUG::2014-11-12 16:13:21,007::taskManager::166::TaskManager::(clearTask) Return. 
Thread-28::INFO::2014-11-12 16:13:21,007::logUtils::47::dispatcher::(wrapper) Run and protect: clearTask, Return response: None Thread-28::DEBUG::2014-11-12 16:13:21,007::task::1185::TaskManager.Task::(prepare) Task=`8c502838-deb0-41a6-a981-8b34acdb71c9`::finished: None Thread-28::DEBUG::2014-11-12 16:13:21,007::task::595::TaskManager.Task::(_updateState) Task=`8c502838-deb0-41a6-a981-8b34acdb71c9`::moving from state preparing -> state finished Thread-28::DEBUG::2014-11-12 16:13:21,007::resourceManager::940::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-28::DEBUG::2014-11-12 16:13:21,008::resourceManager::977::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-28::DEBUG::2014-11-12 16:13:21,008::task::990::TaskManager.Task::(_decref) Task=`8c502838-deb0-41a6-a981-8b34acdb71c9`::ref 0 aborting False Again: i only replaced my ovirt-engine host by a backup restore. What could cause this problem? Thanks, Mario On Wed, Nov 12, 2014 at 2:16 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
On 12/11/2014 14:06, Ml Ml wrote:
Anyone? :-(
Dan, Nir, can you take a look?
On Tue, Nov 11, 2014 at 6:39 PM, Ml Ml <mliebherr99@googlemail.com> wrote:
I dunno why this is all so simple for you.
I just replaced the ovirt-engine like described in the docs.
I ejected the CD ISOs on every VM so I was able to delete the ISO_DOMAIN.
But I still have problems with my storage. It's a replicated GlusterFS volume. It looks healthy on the nodes themselves, but somehow my ovirt-engine gets confused. Can someone explain to me what the actual error is?
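To see what the nodes themselves think, independent of the engine, something like the following can be run on a node. This is only a sketch and assumes the vdsClient CLI that ships with vdsm; the pool UUID is the one that appears in the logs, and RaidVolB is the gluster volume from this setup:

# what storage pools and domains does vdsm on this node see?
vdsClient -s 0 getConnectedStoragePoolsList
vdsClient -s 0 getStorageDomainsList
# SPM status for the pool the engine keeps failing to start SPM on
vdsClient -s 0 getSpmStatus b384b3da-02a6-44f3-a3f6-56751ce8c26d
# and the gluster view of the data volume
gluster volume info RaidVolB
gluster volume status RaidVolB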
Note: I only replaced the ovirt-engine host and deleted the ISO_DOMAIN:
2014-11-11 18:32:37,832 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:37,833 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] spmStart polling ended: taskId = 8c5fae2c-0ddb-41cd-ac54-c404c943e00f task status = finished 2014-11-11 18:32:37,834 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:37,888 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] spmStart polling ended, spm status: Free 2014-11-11 18:32:37,889 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] START, HSMClearTaskVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c, taskId=8c5fae2c-0ddb-41cd-ac54-c404c943e00f), log id: 547e26fd 2014-11-11 18:32:37,937 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] FINISH, HSMClearTaskVDSCommand, log id: 547e26fd 2014-11-11 18:32:37,938 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@5027ed97, log id: 461eb5b5 2014-11-11 18:32:37,941 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:37,948 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] IrsBroker::Failed::ActivateStorageDomainVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:38,006 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Irs placed on server 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c failed. 
Proceed Failover 2014-11-11 18:32:38,044 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-29) START, GlusterVolumesListVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c), log id: 7a110756 2014-11-11 18:32:38,045 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] hostFromVds::selectedVds - ovirt-node02.foobar.net, spmStatus Free, storage pool HP_Proliant_DL180G6 2014-11-11 18:32:38,048 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] starting spm on vds ovirt-node02.foobar.net, storage pool HP_Proliant_DL180G6, prevId -1, LVER -1 2014-11-11 18:32:38,050 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] START, SpmStartVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3, storagePoolId = b384b3da-02a6-44f3-a3f6-56751ce8c26d, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 1a6ccb9c 2014-11-11 18:32:38,108 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] spmStart polling started: taskId = 78d31638-70a5-46aa-89e7-1d1e8126bdba 2014-11-11 18:32:38,193 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-29) FINISH, GlusterVolumesListVDSCommand, return: {d46619e9-9368-4e82-bf3a-a2377b6e85e4=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@9746ef53}, log id: 7a110756 2014-11-11 18:32:38,352 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-29) START, GlusterVolumesListVDSCommand(HostName = ovirt-node04.foobar.net, HostId = 073c24e1-003f-412a-be56-0c41a435829a), log id: 2f25d56e 2014-11-11 18:32:38,433 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-29) FINISH, GlusterVolumesListVDSCommand, return: {660ca9ef-46fc-47b0-9b6b-61ccfd74016c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@cd3b51c4}, log id: 2f25d56e 2014-11-11 18:32:39,117 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:39,118 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] spmStart polling ended: taskId = 78d31638-70a5-46aa-89e7-1d1e8126bdba task status = finished 2014-11-11 18:32:39,119 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:39,171 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] spmStart polling ended, spm status: Free 2014-11-11 18:32:39,173 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] START, HSMClearTaskVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3, taskId=78d31638-70a5-46aa-89e7-1d1e8126bdba), log id: 46abf4a0 2014-11-11 18:32:39,220 
INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] FINISH, HSMClearTaskVDSCommand, log id: 46abf4a0 2014-11-11 18:32:39,221 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@7d3782f7, log id: 1a6ccb9c 2014-11-11 18:32:39,224 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:39,232 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] IrsBroker::Failed::ActivateStorageDomainVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:39,235 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] FINISH, ActivateStorageDomainVDSCommand, log id: 75877740 2014-11-11 18:32:39,236 ERROR [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] Command org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.irsbroker.IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed (Failed with error ENGINE and code 5001) 2014-11-11 18:32:39,239 INFO [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] Command [id=c5315de2-0817-4da2-a13e-50c8cfa93a6a]: Compensating CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: EntityStatusSnapshot [id=storagePoolId = b384b3da-02a6-44f3-a3f6-56751ce8c26d, storageId = abc51e26-7175-4b38-b3a8-95c6928fbc2b, status=Unknown]. 2014-11-11 18:32:39,243 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-39) [4777665a] Correlation ID: 71891fe3, Job ID: 239d4ac0-aa7d-486a-a70f-55a9d1b910f4, Call Stack: null, Custom Event ID: -1, Message: Failed to activate Storage Domain RaidVolBGluster (Data Center HP_Proliant_DL180G6) by admin 2014-11-11 18:32:40,566 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] Command org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand return value
TaskStatusListReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=654, mMessage=Not SPM]]
2014-11-11 18:32:40,569 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] HostName = ovirt-node02.foobar.net 2014-11-11 18:32:40,570 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] Command HSMGetAllTasksStatusesVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3) execution failed. Exception: IRSNonOperationalException: IRSGenericException: IRSErrorException: IRSNonOperationalException: Not SPM 2014-11-11 18:32:40,625 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [47871083] hostFromVds::selectedVds - ovirt-node02.foobar.net, spmStatus Free, storage pool HP_Proliant_DL180G6 2014-11-11 18:32:40,628 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [47871083] starting spm on vds ovirt-node02.foobar.net, storage pool HP_Proliant_DL180G6, prevId -1, LVER -1 2014-11-11 18:32:40,630 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] START, SpmStartVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3, storagePoolId = b384b3da-02a6-44f3-a3f6-56751ce8c26d, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 1f3ac280 2014-11-11 18:32:40,687 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] spmStart polling started: taskId = 50ab033e-76cd-44d5-b661-a1c2b8c312ef 2014-11-11 18:32:41,735 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:41,736 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] spmStart polling ended: taskId = 50ab033e-76cd-44d5-b661-a1c2b8c312ef task status = finished 2014-11-11 18:32:41,737 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:41,790 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] spmStart polling ended, spm status: Free 2014-11-11 18:32:41,791 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] START, HSMClearTaskVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3, taskId=50ab033e-76cd-44d5-b661-a1c2b8c312ef), log id: 852d287 2014-11-11 18:32:41,839 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] FINISH, HSMClearTaskVDSCommand, log id: 852d287 2014-11-11 18:32:41,840 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@32b92b73, log id: 1f3ac280 2014-11-11 18:32:41,843 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] Running command: SetStoragePoolStatusCommand 
internal: true. Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:41,851 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] IrsBroker::Failed::GetStoragePoolInfoVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:41,909 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] Irs placed on server 6948da12-0b8a-4b6d-a9af-162e6c25dad3 failed. Proceed Failover 2014-11-11 18:32:41,928 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] hostFromVds::selectedVds - ovirt-node01.foobar.net, spmStatus Free, storage pool HP_Proliant_DL180G6 2014-11-11 18:32:41,930 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] starting spm on vds ovirt-node01.foobar.net, storage pool HP_Proliant_DL180G6, prevId -1, LVER -1 2014-11-11 18:32:41,932 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] START, SpmStartVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c, storagePoolId = b384b3da-02a6-44f3-a3f6-56751ce8c26d, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 56dfcc3c 2014-11-11 18:32:41,984 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] spmStart polling started: taskId = 84ac9f17-d5ec-4e43-8fcc-8ca9065a8492 2014-11-11 18:32:42,993 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:42,994 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] spmStart polling ended: taskId = 84ac9f17-d5ec-4e43-8fcc-8ca9065a8492 task status = finished 2014-11-11 18:32:42,995 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:43,048 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] spmStart polling ended, spm status: Free 2014-11-11 18:32:43,049 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] START, HSMClearTaskVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c, taskId=84ac9f17-d5ec-4e43-8fcc-8ca9065a8492), log id: 5abaa4ce 2014-11-11 18:32:43,098 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] FINISH, HSMClearTaskVDSCommand, log id: 5abaa4ce 2014-11-11 18:32:43,098 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@7d9b9905, log id: 56dfcc3c 2014-11-11 18:32:43,101 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-28) [725b57af] Running command: SetStoragePoolStatusCommand internal: true. 
Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:43,108 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [725b57af] IrsBroker::Failed::GetStoragePoolInfoVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:43,444 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] START, GlusterVolumesListVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c), log id: 12ae9c47 2014-11-11 18:32:43,585 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] FINISH, GlusterVolumesListVDSCommand, return: {d46619e9-9368-4e82-bf3a-a2377b6e85e4=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a5d949dc}, log id: 12ae9c47 2014-11-11 18:32:43,745 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] START, GlusterVolumesListVDSCommand(HostName = ovirt-node04.foobar.net, HostId = 073c24e1-003f-412a-be56-0c41a435829a), log id: 4b994fd9 2014-11-11 18:32:43,826 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] FINISH, GlusterVolumesListVDSCommand, return: {660ca9ef-46fc-47b0-9b6b-61ccfd74016c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@10521f1b}, log id: 4b994fd9 2014-11-11 18:32:48,838 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-71) START, GlusterVolumesListVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c), log id: 3b036a37

Hi Mario,

Please open a bug for this. Include these logs in the bug, for the ovirt engine host, one hypervisor node that had no trouble, and one hypervisor node that had trouble (ovirt-node01?):

/var/log/messages
/var/log/sanlock.log
/var/log/vdsm.log

And of course engine.log for the engine node.

Thanks,
Nir

----- Original Message -----
From: "Ml Ml" <mliebherr99@googlemail.com> To: "Sandro Bonazzola" <sbonazzo@redhat.com> Cc: "Matt ." <yamakasi.014@gmail.com>, users@ovirt.org, "Dan Kenigsberg" <danken@redhat.com>, "Nir Soffer" <nsoffer@redhat.com> Sent: Wednesday, November 12, 2014 5:18:56 PM Subject: Re: [ovirt-users] replace ovirt engine host
Thread-28::INFO::2014-11-12 16:13:21,007::logUtils::47::dispatcher::(wrapper) Run and protect: clearTask, Return response: None Thread-28::DEBUG::2014-11-12 16:13:21,007::task::1185::TaskManager.Task::(prepare) Task=`8c502838-deb0-41a6-a981-8b34acdb71c9`::finished: None Thread-28::DEBUG::2014-11-12 16:13:21,007::task::595::TaskManager.Task::(_updateState) Task=`8c502838-deb0-41a6-a981-8b34acdb71c9`::moving from state preparing -> state finished Thread-28::DEBUG::2014-11-12 16:13:21,007::resourceManager::940::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-28::DEBUG::2014-11-12 16:13:21,008::resourceManager::977::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-28::DEBUG::2014-11-12 16:13:21,008::task::990::TaskManager.Task::(_decref) Task=`8c502838-deb0-41a6-a981-8b34acdb71c9`::ref 0 aborting False
Again: I only replaced my ovirt-engine host via a backup and restore.
What could cause this problem?
Thanks, Mario
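Judging from the traceback above, the SPM start dies on a domain UUID (6d882c77-...) that the pool metadata still lists as Active but that no hypervisor can find anymore (most likely the removed ISO_DOMAIN). A minimal way to cross-check this, assuming shell access to the engine database and to one node; the view and vdsClient verb names are assumptions for this oVirt generation:

# on the restored engine: which storage domains does the engine still reference?
su - postgres -c "psql engine -c 'SELECT id, storage_name FROM storage_domains;'"

# on a hypervisor node: which domains can vdsm actually see?
vdsClient -s 0 getStorageDomainsList

# any UUID present in the first list but missing from the second will keep
# producing "Storage domain does not exist" (code 358) during SpmStart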
On Wed, Nov 12, 2014 at 2:16 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
Il 12/11/2014 14:06, Ml Ml ha scritto:
Anyone? :-(
Dan, Nir, can you take a look?
On Tue, Nov 11, 2014 at 6:39 PM, Ml Ml <mliebherr99@googlemail.com> wrote:
I dunno why this is all so simple for you.
I just replaced the ovirt-engine as described in the docs.
I ejected the CD ISOs on every VM so I was able to delete the ISO_DOMAIN.
But I still have problems with my storage. It's a replicated GlusterFS. It looks healthy on the nodes themselves, but somehow my ovirt-engine gets confused. Can someone explain to me what the actual error is?
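For reference, the node-side checks that "looks healthy" usually means, as a quick sketch (volume name taken from the logs, standard gluster CLI assumed):

gluster peer status                 # both nodes connected?
gluster volume status RaidVolB      # all bricks and self-heal daemons online?
gluster volume heal RaidVolB info   # any entries still pending heal?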
Note: I only replaced the ovirt-engine host and deleted the ISO_DOMAIN:
2014-11-11 18:32:37,832 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:37,833 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] spmStart polling ended: taskId = 8c5fae2c-0ddb-41cd-ac54-c404c943e00f task status = finished 2014-11-11 18:32:37,834 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:37,888 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] spmStart polling ended, spm status: Free 2014-11-11 18:32:37,889 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] START, HSMClearTaskVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c, taskId=8c5fae2c-0ddb-41cd-ac54-c404c943e00f), log id: 547e26fd 2014-11-11 18:32:37,937 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] FINISH, HSMClearTaskVDSCommand, log id: 547e26fd 2014-11-11 18:32:37,938 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [71891fe3] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@5027ed97, log id: 461eb5b5 2014-11-11 18:32:37,941 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:37,948 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] IrsBroker::Failed::ActivateStorageDomainVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:38,006 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Irs placed on server 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c failed. 
Proceed Failover 2014-11-11 18:32:38,044 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-29) START, GlusterVolumesListVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c), log id: 7a110756 2014-11-11 18:32:38,045 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] hostFromVds::selectedVds - ovirt-node02.foobar.net, spmStatus Free, storage pool HP_Proliant_DL180G6 2014-11-11 18:32:38,048 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] starting spm on vds ovirt-node02.foobar.net, storage pool HP_Proliant_DL180G6, prevId -1, LVER -1 2014-11-11 18:32:38,050 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] START, SpmStartVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3, storagePoolId = b384b3da-02a6-44f3-a3f6-56751ce8c26d, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 1a6ccb9c 2014-11-11 18:32:38,108 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] spmStart polling started: taskId = 78d31638-70a5-46aa-89e7-1d1e8126bdba 2014-11-11 18:32:38,193 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-29) FINISH, GlusterVolumesListVDSCommand, return: {d46619e9-9368-4e82-bf3a-a2377b6e85e4=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@9746ef53}, log id: 7a110756 2014-11-11 18:32:38,352 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-29) START, GlusterVolumesListVDSCommand(HostName = ovirt-node04.foobar.net, HostId = 073c24e1-003f-412a-be56-0c41a435829a), log id: 2f25d56e 2014-11-11 18:32:38,433 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-29) FINISH, GlusterVolumesListVDSCommand, return: {660ca9ef-46fc-47b0-9b6b-61ccfd74016c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@cd3b51c4}, log id: 2f25d56e 2014-11-11 18:32:39,117 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:39,118 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] spmStart polling ended: taskId = 78d31638-70a5-46aa-89e7-1d1e8126bdba task status = finished 2014-11-11 18:32:39,119 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:39,171 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] spmStart polling ended, spm status: Free 2014-11-11 18:32:39,173 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] START, HSMClearTaskVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3, taskId=78d31638-70a5-46aa-89e7-1d1e8126bdba), log id: 46abf4a0 2014-11-11 18:32:39,220 
INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] FINISH, HSMClearTaskVDSCommand, log id: 46abf4a0 2014-11-11 18:32:39,221 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@7d3782f7, log id: 1a6ccb9c 2014-11-11 18:32:39,224 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:39,232 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] IrsBroker::Failed::ActivateStorageDomainVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:39,235 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] FINISH, ActivateStorageDomainVDSCommand, log id: 75877740 2014-11-11 18:32:39,236 ERROR [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] Command org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.irsbroker.IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed (Failed with error ENGINE and code 5001) 2014-11-11 18:32:39,239 INFO [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-39) [4777665a] Command [id=c5315de2-0817-4da2-a13e-50c8cfa93a6a]: Compensating CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: EntityStatusSnapshot [id=storagePoolId = b384b3da-02a6-44f3-a3f6-56751ce8c26d, storageId = abc51e26-7175-4b38-b3a8-95c6928fbc2b, status=Unknown]. 2014-11-11 18:32:39,243 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-39) [4777665a] Correlation ID: 71891fe3, Job ID: 239d4ac0-aa7d-486a-a70f-55a9d1b910f4, Call Stack: null, Custom Event ID: -1, Message: Failed to activate Storage Domain RaidVolBGluster (Data Center HP_Proliant_DL180G6) by admin 2014-11-11 18:32:40,566 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] Command org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand return value
TaskStatusListReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=654, mMessage=Not SPM]]
2014-11-11 18:32:40,569 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] HostName = ovirt-node02.foobar.net 2014-11-11 18:32:40,570 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] Command HSMGetAllTasksStatusesVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3) execution failed. Exception: IRSNonOperationalException: IRSGenericException: IRSErrorException: IRSNonOperationalException: Not SPM 2014-11-11 18:32:40,625 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [47871083] hostFromVds::selectedVds - ovirt-node02.foobar.net, spmStatus Free, storage pool HP_Proliant_DL180G6 2014-11-11 18:32:40,628 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [47871083] starting spm on vds ovirt-node02.foobar.net, storage pool HP_Proliant_DL180G6, prevId -1, LVER -1 2014-11-11 18:32:40,630 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] START, SpmStartVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3, storagePoolId = b384b3da-02a6-44f3-a3f6-56751ce8c26d, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 1f3ac280 2014-11-11 18:32:40,687 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] spmStart polling started: taskId = 50ab033e-76cd-44d5-b661-a1c2b8c312ef 2014-11-11 18:32:41,735 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:41,736 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] spmStart polling ended: taskId = 50ab033e-76cd-44d5-b661-a1c2b8c312ef task status = finished 2014-11-11 18:32:41,737 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:41,790 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] spmStart polling ended, spm status: Free 2014-11-11 18:32:41,791 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] START, HSMClearTaskVDSCommand(HostName = ovirt-node02.foobar.net, HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3, taskId=50ab033e-76cd-44d5-b661-a1c2b8c312ef), log id: 852d287 2014-11-11 18:32:41,839 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] FINISH, HSMClearTaskVDSCommand, log id: 852d287 2014-11-11 18:32:41,840 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [47871083] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@32b92b73, log id: 1f3ac280 2014-11-11 18:32:41,843 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] Running command: SetStoragePoolStatusCommand 
internal: true. Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:41,851 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] IrsBroker::Failed::GetStoragePoolInfoVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:41,909 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] Irs placed on server 6948da12-0b8a-4b6d-a9af-162e6c25dad3 failed. Proceed Failover 2014-11-11 18:32:41,928 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] hostFromVds::selectedVds - ovirt-node01.foobar.net, spmStatus Free, storage pool HP_Proliant_DL180G6 2014-11-11 18:32:41,930 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] starting spm on vds ovirt-node01.foobar.net, storage pool HP_Proliant_DL180G6, prevId -1, LVER -1 2014-11-11 18:32:41,932 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] START, SpmStartVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c, storagePoolId = b384b3da-02a6-44f3-a3f6-56751ce8c26d, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 56dfcc3c 2014-11-11 18:32:41,984 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] spmStart polling started: taskId = 84ac9f17-d5ec-4e43-8fcc-8ca9065a8492 2014-11-11 18:32:42,993 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] Failed in HSMGetTaskStatusVDS method 2014-11-11 18:32:42,994 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] spmStart polling ended: taskId = 84ac9f17-d5ec-4e43-8fcc-8ca9065a8492 task status = finished 2014-11-11 18:32:42,995 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-11-11 18:32:43,048 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] spmStart polling ended, spm status: Free 2014-11-11 18:32:43,049 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] START, HSMClearTaskVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c, taskId=84ac9f17-d5ec-4e43-8fcc-8ca9065a8492), log id: 5abaa4ce 2014-11-11 18:32:43,098 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] FINISH, HSMClearTaskVDSCommand, log id: 5abaa4ce 2014-11-11 18:32:43,098 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-28) [1ad3a509] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@7d9b9905, log id: 56dfcc3c 2014-11-11 18:32:43,101 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-28) [725b57af] Running command: SetStoragePoolStatusCommand internal: true. 
Entities affected : ID: b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool 2014-11-11 18:32:43,108 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-28) [725b57af] IrsBroker::Failed::GetStoragePoolInfoVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-11-11 18:32:43,444 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] START, GlusterVolumesListVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c), log id: 12ae9c47 2014-11-11 18:32:43,585 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] FINISH, GlusterVolumesListVDSCommand, return: {d46619e9-9368-4e82-bf3a-a2377b6e85e4=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a5d949dc}, log id: 12ae9c47 2014-11-11 18:32:43,745 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] START, GlusterVolumesListVDSCommand(HostName = ovirt-node04.foobar.net, HostId = 073c24e1-003f-412a-be56-0c41a435829a), log id: 4b994fd9 2014-11-11 18:32:43,826 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] FINISH, GlusterVolumesListVDSCommand, return: {660ca9ef-46fc-47b0-9b6b-61ccfd74016c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@10521f1b}, log id: 4b994fd9 2014-11-11 18:32:48,838 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-71) START, GlusterVolumesListVDSCommand(HostName = ovirt-node01.foobar.net, HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c), log id: 3b036a37
Thanks, Mario
On Fri, Nov 7, 2014 at 11:49 PM, Matt . <yamakasi.014@gmail.com> wrote:
Hi,
Actually it's very simple as described in the docs.
Just stop the engine, make a backup, copy it over, put it back in place and start it. You can do this in several ways.
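A rough sketch of that flow with the engine-backup tool, in the same style of options quoted elsewhere in this thread (--mode/--scope/--file/--log); the host name, file paths and --scope=all are assumptions, and depending on the version you may need engine-setup plus engine-cleanup on the target before the restore, as described further down:

# on the old engine host
service ovirt-engine stop
engine-backup --mode=backup --scope=all --file=/root/engine-full.backup --log=/root/engine-backup.log
scp /root/engine-full.backup new-engine.example.com:/root/

# on the new engine host, with identical ovirt-engine packages installed
engine-backup --mode=restore --file=/root/engine-full.backup --log=/root/engine-restore.log
service ovirt-engine start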
The ISO domain is the one I would remove and recreate. ISO domains are actually dumb domains, so nothing can go wrong.
Did it some time ago because I needed more performance.
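Recreating an NFS ISO domain is little more than an NFS export plus attaching it in the web UI; a hedged sketch, with the export path, export options and upload tool as assumptions:

mkdir -p /exports/iso
chown 36:36 /exports/iso          # vdsm:kvm
echo "/exports/iso *(rw)" >> /etc/exports
exportfs -r
showmount -e localhost            # confirm the export is visible
# then create a new ISO/NFS domain in the engine UI and re-upload the images,
# e.g. with engine-iso-uploader if the ovirt-iso-uploader package is installed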
VDSM can run without the engine; it doesn't need it, since the engine only monitors and issues commands. So when it's not there... the VMs just keep running (until you make them die yourself :))
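You can see that on a node while the engine is unreachable; a small sketch, assuming the stock vdsm CLI:

vdsClient -s 0 list table     # locally running VMs, reported by vdsm itself
service vdsmd status          # vdsmd keeps running with no engine connected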
I would give it 15-30 min.
Cheers,
Matt
2014-11-07 18:36 GMT+01:00 Daniel Helgenberger <daniel.helgenberger@m-box.de>:
Daniel Helgenberger m box bewegtbild GmbH
ACKERSTR. 19 P: +49/30/2408781-22 D-10115 BERLIN F: +49/30/2408781-10
www.m-box.de www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767

On 07.11.2014, at 15:24, Koen Vanoppen <vanoppen.koen@gmail.com> wrote:
Hi,
We had a consulting partner who did the same for our company. This is his procedure, and it worked great:
How to migrate ovirt management engine
Packages
Ensure you have the same packages & versions installed on the destination host as on the source, using 'rpm -qa | grep ovirt'. Make sure versions are 100% identical.
Default setup
Run 'engine-setup' on the destination host after installing the packages. Use the following configuration:
1. Backup existing configuration
2. On the source host, do:
You might want your consultant to take a look at [1]... Steps 2a-3d are basically:

engine-backup --mode=backup --file=~/ovirt-engine-source --log=backup.log
a. service ovirt-engine stop
b. service ovirt-engine-dwhd stop
c. mkdir ~/backup
d. tar -C /etc/pki/ovirt-engine -czpf ~/backup/ovirt-engine-pki.tar.gz .
e. tar -C /etc/ovirt-engine -czpf ~/backup/ovirt-engine-conf.tar.gz .
f. cd /usr/share/ovirt-engine/dbscripts
g. ./backup.sh
h. mv engine_*.sql ~/backup/engine.sql
3. You may also want to backup dwh & reports:
a. cd /usr/share/ovirt-engine/bin/
b. ./engine-backup.sh --mode=backup --scope=db --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/engine-backup --log=/tmp/engine-backup.log
c. ./engine-backup.sh --mode=backup --scope=dwhdb --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/dwh-backup --log=/tmp/engine-backup.log
d. ./engine-backup.sh --mode=backup --scope=reportsdb --db-user=engine --db-password=XXX --file=/usr/tmp/rhevm-backups/reports-backup --log=/tmp/engine-backup.log
4. Download these backup files, and copy them to the destination host.
Restore configuration
1. On the destination host, do:
Again, steps a-h, basically:

engine-setup
engine-cleanup
engine-backup --mode=restore --file=~/ovirt-engine-source --log=backup.log
Also, I would run a second engine-setup. After that, you should be good to go.
Of course, depending on your previous engine setup this could be a little more complicated. Still, quite straightforward.

[1] http://www.ovirt.org/Ovirt-engine-backup
a. service ovirt-engine stop
b. service ovirt-engine-dwhd stop
c. cd backup
d. tar -C /etc/pki/ovirt-engine -xzpf ovirt-engine-pki.tar.gz
e. tar -C /etc/ovirt-engine -xzpf ovirt-engine-conf.tar.gz
f. tar -xvjf engine-backup
g. tar -xvjf dwh-backup
h. tar -xvjf reports-backup
Restore Database
1. On the destination host do:
a. su - postgres -c "psql -d template1 -c 'drop database engine;'"
b. su - postgres -c "psql -d template1 -c 'create database engine owner engine;'"
c. su - postgres
d. psql
e. \c engine
f. \i /path/to/backup/engine.sql
NOTE: in case you have issues logging in to the database, add the following line to the pg_hba.conf file:
host all engine 127.0.0.1/32 trust
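On a default EL PostgreSQL install that file usually lives under the data directory, and the change only takes effect after a reload; a minimal sketch, assuming that layout:

vi /var/lib/pgsql/data/pg_hba.conf     # add the line above
service postgresql reload              # or: su - postgres -c "pg_ctl reload -D /var/lib/pgsql/data"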
2. Fix engine password:
a. su - postgres
b. psql
c. alter user engine with password 'XXXXXXX';
Change ovirt hostname
On the destination host, run:
/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename
NB: Restoring the dwh/reports database is similar to steps 5-7, but omitted from this document due to problems starting the reporting service.
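Before pointing the hosts back at the new engine it is worth a quick sanity check; a hedged sketch, where the health servlet path and the config file name are assumptions based on a standard 3.x setup:

service ovirt-engine restart
tail -n 50 /var/log/ovirt-engine/engine.log               # no DB connection errors?
curl -s http://localhost/ovirt-engine/services/health     # health servlet should report the DB as up
grep ENGINE_DB /etc/ovirt-engine/engine.conf.d/10-setup-database.conf   # must match the password set above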
2014-11-07 10:28 GMT+01:00 Sven Kieske <s.kieske@mittwald.de>:
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com

On 11/12/2014 09:16 PM, Nir Soffer wrote:
Hi Mario,
Please open a bug for this.
Include these logs in the bug for the ovirt engine host, one hypervisor node that had no trouble, and one hypervisor node that had trouble (ovirt-node01?).
/var/log/messages
/var/log/sanlock.log
/var/log/vdsm.log
And of course engine.log for the engine node.
Please also include glusterfs logs from: /var/log/glusterfs

Thanks,
Vijay
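To bundle those per host, something along these lines should do (paths as listed above; on newer vdsm the log may live at /var/log/vdsm/vdsm.log instead):

tar czf $(hostname -s)-ovirt-logs.tar.gz \
    /var/log/messages /var/log/sanlock.log \
    /var/log/vdsm.log /var/log/glusterfs
# on the engine host, include /var/log/ovirt-engine/engine.log as well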
participants (10): Daniel Helgenberger, Koen Vanoppen, Matt ., Ml Ml, ml ml, Nir Soffer, Sandro Bonazzola, Sven Kieske, Vijay Bellur, Yedidyah Bar David