Help creating virtual machines in oVirt
by Grant Tailor
I have installed and configured oVirt, but I am having issues creating a
virtual machine.
Here is the error I am getting when I try to "Run Once":
*Error while executing action Run VM once: Network error during
communication with the Host.*
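A first check worth running for this kind of error (a rough sketch, assuming
the default VDSM port 54321; host.example.com is only a placeholder for the
hypervisor's FQDN):
on the host:
# service vdsmd status
# netstat -tlnp | grep 54321
from the engine machine:
# echo | openssl s_client -connect host.example.com:54321
If vdsmd is not running, or the port is blocked by a firewall, the engine
reports exactly this kind of network error when it tries to start the VM.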
10 years, 6 months
Hosted Engine - Waiting for cluster 'Default' to become operational...
by Tobias Honacker
Hi all,
I hit this "bug" yesterday.
Packages:
ovirt-host-deploy-1.2.0-1.el6.noarch
ovirt-engine-sdk-python-3.4.0.7-1.el6.noarch
ovirt-hosted-engine-setup-1.1.2-1.el6.noarch
ovirt-release-11.2.0-1.noarch
ovirt-hosted-engine-ha-1.1.2-1.el6.noarch
After setting up the hosted engine (which is running great), the setup aborted
with this message:
[ INFO ] The VDSM Host is now operational
[ ERROR ] Waiting for cluster 'Default' to become operational...
[ ERROR ] Failed to execute stage 'Closing up': 'NoneType' object has
no attribute '__dict__'
[ INFO ] Stage: Clean up
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
What is the next step I have to take so that the HA features of the
hosted engine will take care of keeping the VM alive?
best regards
tobias
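As a starting point for the HA part of the question (a hedged sketch, not a
confirmed fix for the 'Closing up' failure above): the engine VM is kept alive
by the ovirt-ha-agent and ovirt-ha-broker services on each hosted-engine host,
and their view of the VM can be inspected with:
# service ovirt-ha-broker status
# service ovirt-ha-agent status
# hosted-engine --vm-status
If the agent is running and reports the engine as up, it will restart the
engine VM on failure; if the setup really did abort before the host was
registered in the 'Default' cluster, the host may still have to be added to
the cluster from the webadmin before HA handling of the VM works as expected.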
10 years, 6 months
vnic profile custom properties menu not visible in 3.4.1
by Gianluca Cecchi
Hello,
I have an all-in-one environment based on f19 and 3.4.1.
My host's main interface (and thus the ovirtmgmt bridge) is on 192.168.1.x.
I'm trying to set up a NATted network for my VMs (it will be 192.168.125.x).
I already completed the vdsm and libvirt part following Dan's blog post here:
http://developerblog.redhat.com/2014/02/25/extending-rhev-vdsm-hooks/
I created a new vnic profile named "natted" for my ovirtmgmt network.
But I'm not able to see in webadmin how to add a custom property "extnet"
to this vnic profile and set it to the "natted" value.
I already restarted vdsmd and then ovirt-engine (not a full restart of the
server yet) after installing the vdsm-hook-extnet-4.14.8.1-0.fc19 package.
See my screenshots below; I see no way to add it.
https://drive.google.com/file/d/0BwoPbcrMv8mvZF93ekI2a1V2clk/edit?usp=sha...
https://drive.google.com/file/d/0BwoPbcrMv8mvbkFVZEF3X2VhNVE/edit?usp=sha...
Is it perhaps a command-line-only option?
Based on this page I wouldn't expect so:
http://www.ovirt.org/Features/Vnic_Profiles
I see instead a "Please select a key" dropdown menu where I can only select
"Security Groups"...
NOTE: I need this because I have to set up an OpenVPN tunnel between the
server where I have the all-in-one setup and a remote network.
Unfortunately both networks, despite being on different internet providers, use
192.168.1.x for their internal networks (argh! providers, please be more
imaginative... there are so many private ranges available, don't stop at the
first one ;-), so I can't establish routing between the two networks after the
tunnel comes up. I'm trying to solve this by using a VM on a separate NATted
network, setting up the OpenVPN tunnel from that VM, and hoping it will then be
able to route to the 192.168.1.x internal network at the destination.
Thanks
Gianluca
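For reference, vNIC custom properties normally have to be registered on the
engine side before they show up in that dropdown. A hedged sketch of how this
is usually done for the extnet hook (the regular expression is only an
example, and --cver must match the cluster compatibility level in use):
# engine-config -s "CustomDeviceProperties={type=interface;prop={extnet=^[a-zA-Z0-9_-]+$}}" --cver=3.4
# engine-config -g CustomDeviceProperties
# systemctl restart ovirt-engine.service
After the engine restart, "extnet" should be selectable in the "Please select
a key" dropdown of the vNIC profile dialog.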
10 years, 6 months
Removing Snapshot sub tasks
by Mohyedeen Nazzal
Greetings,
When removing a snapshot, why is the following sub-task executed:
- Merging snapshot of disk DiskName?
I've attached a screenshot of the tasks executed.
I'm just wondering why there is a need to perform a merge?
Thanks,
Mohyedeen
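Short answer on the "why" (with a rough standalone illustration, not the exact
commands vdsm runs; the file names are hypothetical): each oVirt snapshot is a
qcow2 layer on top of the previous one, so deleting a snapshot means its data
has to be merged into an adjacent layer before the file can be dropped,
otherwise the layers above it would lose part of their backing data.
# qemu-img create -f qcow2 base.qcow2 10G
# qemu-img create -f qcow2 -b base.qcow2 snap1.qcow2
(guest writes now land in snap1.qcow2)
# qemu-img commit snap1.qcow2
The commit folds snap1's changes back into base.qcow2; the "Merging snapshot
of disk ..." sub-task is the engine/vdsm equivalent of that step.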
10 years, 6 months
Re: [ovirt-users] engine upgrade 3.2.2 --> 3.2.3 Database rename failed (Solved)
by Neil
Hi guys,
I've managed to resolve this problem. Firstly, after the fresh re-install that
followed my original rollback to the Dreyou 3.2.2 ovirt-engine I hadn't run
engine-cleanup, so this time I did; and secondly, when I restored my DB I used
"restore.sh -u postgres -f /root/ovirt.sql" instead of doing a manual DB
restore. Between the two of them that got rid of the issue. I'm assuming it was
the engine-cleanup that sorted out the DB renaming problem, though.
Once that was done I then managed to upgrade to 3.3 and I'll now do
the 3.4 upgrade.
Thanks very much to those who assisted.
Regards.
Neil Wilson.
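For anyone hitting the same "must be owner of database engine" error, the
ownership can also be checked and fixed directly in PostgreSQL before
re-running the upgrade (a hedged sketch, assuming the engine database should
be owned by the "engine" role):
# su - postgres -c "psql -c '\l engine'"
# su - postgres -c "psql -c 'ALTER DATABASE engine OWNER TO engine;'"
With the owner corrected, the ALTER DATABASE ... RENAME TO ... statement that
engine-upgrade runs as the engine user should no longer be rejected.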
On Thu, May 22, 2014 at 6:12 AM, Neil <nwilson123(a)gmail.com> wrote:
> Hi guys, sorry to repost but getting a bit desperate. Is anyone able to
> assist?
>
> Thanks.
>
> Regards.
>
> Neil Wilson
>
> On 21 May 2014 12:06 PM, "Neil" <nwilson123(a)gmail.com> wrote:
>>
>> Hi guys,
>>
>> Just a little more info on the problem. I've upgraded another oVirt
>> system from Dreyou before and it worked perfectly; however, on this
>> particular system we had to restore from backups (DB, PKI and
>> /etc/ovirt-engine) because the physical machine that was hosting the
>> engine died, so perhaps that is why we are encountering this problem
>> this time around...
>>
>> Any help is greatly appreciated.
>>
>> Thank you.
>>
>> Regards.
>>
>> Neil Wilson.
>>
>>
>>
>> On Wed, May 21, 2014 at 11:46 AM, Sven Kieske <S.Kieske(a)mittwald.de>
>> wrote:
>> > Hi,
>> >
>> > I don't know the exact resolution for this, but I'll add some people
>> > who managed to make it work, following this tutorial:
>> > http://wiki.dreyou.org/dokuwiki/doku.php?id=ovirt_rpm_start33
>> >
>> > See this thread on the users ML:
>> >
>> > http://lists.ovirt.org/pipermail/users/2013-December/018341.html
>> >
>> > HTH
>> >
>> >
>> > Am 20.05.2014 17:00, schrieb Neil:
>> >> Hi guys,
>> >>
>> >> I'm trying to upgrade from Dreyou to the official repo; I've installed
>> >> the official 3.2 repo (I'll do the 3.3 update once this works). I've
>> >> updated to ovirt-engine-setup.noarch 0:3.2.3-1.el6, and when I run
>> >> engine-upgrade it bombs out while trying to rename my database, with the
>> >> following error...
>> >>
>> >> [root@engine01 /]# cat
>> >> /var/log/ovirt-engine/ovirt-engine-upgrade_2014_05_20_16_34_21.log
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
>> >> pgpass file /etc/ovirt-engine/.pgpass, fetching DB host value
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
>> >> pgpass file /etc/ovirt-engine/.pgpass, fetching DB port value
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
>> >> pgpass file /etc/ovirt-engine/.pgpass, fetching DB user value
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Loaded plugins: refresh-packagekit, versionlock
>> >> 2014-05-20 16:34:21::INFO::engine-upgrade::969::root:: Info:
>> >> /etc/ovirt-engine/.pgpass file found. Continue.
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
>> >> pgpass file /etc/ovirt-engine/.pgpass, fetching DB admin value
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
>> >> pgpass file /etc/ovirt-engine/.pgpass, fetching DB host value
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::804::root:: found existing
>> >> pgpass file /etc/ovirt-engine/.pgpass, fetching DB port value
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::481::root:: running sql
>> >> query 'SELECT pg_database_size('engine')' on db server: 'localhost'.
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::434::root:: Executing
>> >> command --> '/usr/bin/psql -h localhost -p 5432 -U postgres -d
>> >> postgres -c SELECT pg_database_size('engine')'
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::472::root:: output =
>> >> pg_database_size
>> >> ------------------
>> >> 11976708
>> >> (1 row)
>> >>
>> >>
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::473::root:: stderr =
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::474::root:: retcode = 0
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::1567::root:: Found mount
>> >> point of '/var/cache/yum' at '/'
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::663::root:: Checking
>> >> available space on /var/cache/yum
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::668::root:: Available space
>> >> on /var/cache/yum is 172329
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::1567::root:: Found mount
>> >> point of '/var/lib/ovirt-engine/backups' at '/'
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::663::root:: Checking
>> >> available space on /var/lib/ovirt-engine/backups
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::668::root:: Available space
>> >> on /var/lib/ovirt-engine/backups is 172329
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::1567::root:: Found mount
>> >> point of '/usr/share' at '/'
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::663::root:: Checking
>> >> available space on /usr/share
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::668::root:: Available space
>> >> on /usr/share is 172329
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::1590::root:: Mount points
>> >> are: {'/': {'required': 1511, 'free': 172329}}
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::1599::root:: Comparing free
>> >> space 172329 MB with required 1511 MB
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::481::root:: running sql
>> >> query 'SELECT compatibility_version FROM storage_pool;' on db server:
>> >> 'localhost'.
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::434::root:: Executing
>> >> command --> '/usr/bin/psql -h localhost -p 5432 -U engine -d engine -c
>> >> SELECT compatibility_version FROM storage_pool;'
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::472::root:: output =
>> >> compatibility_version
>> >> -----------------------
>> >> 3.2
>> >> (1 row)
>> >>
>> >>
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::473::root:: stderr =
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::474::root:: retcode = 0
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::481::root:: running sql
>> >> query 'SELECT compatibility_version FROM vds_groups;' on db server:
>> >> 'localhost'.
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::434::root:: Executing
>> >> command --> '/usr/bin/psql -h localhost -p 5432 -U engine -d engine -c
>> >> SELECT compatibility_version FROM vds_groups;'
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::472::root:: output =
>> >> compatibility_version
>> >> -----------------------
>> >> 3.2
>> >> (1 row)
>> >>
>> >>
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::473::root:: stderr =
>> >> 2014-05-20 16:34:21::DEBUG::common_utils::474::root:: retcode = 0
>> >> 2014-05-20 16:34:21::DEBUG::engine-upgrade::280::root:: Yum unlock
>> >> started
>> >> 2014-05-20 16:34:21::DEBUG::engine-upgrade::292::root:: Yum unlock
>> >> completed successfully
>> >> 2014-05-20 16:34:22::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Downloading: repomdu5SB03tmp.xml (0%)
>> >> 2014-05-20 16:34:22::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Downloading: repomdu5SB03tmp.xml 3.7 k(100%)
>> >> 2014-05-20 16:34:30::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Downloading: repomdf3Wi70tmp.xml (0%)
>> >> 2014-05-20 16:34:30::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Downloading: repomdf3Wi70tmp.xml 2.9 k(100%)
>> >> 2014-05-20 16:34:30::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Downloading: repomdf3Wi70tmp.xml 2.9 k(100%)
>> >> 2014-05-20 16:34:31::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Downloading: repomdHz6Ctstmp.xml (0%)
>> >> 2014-05-20 16:34:31::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Downloading: repomdHz6Ctstmp.xml 3.4 k(100%)
>> >> 2014-05-20 16:34:37::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Downloading: repomdyHOcNQtmp.xml (0%)
>> >> 2014-05-20 16:34:37::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Downloading: repomdyHOcNQtmp.xml 2.9 k(100%)
>> >> 2014-05-20 16:34:38::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Downloading: repomdTvp5RWtmp.xml (0%)
>> >> 2014-05-20 16:34:39::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Downloading: repomdTvp5RWtmp.xml 2.9 k(100%)
>> >> 2014-05-20 16:34:40::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Downloading: repomdpoFiQgtmp.xml (0%)
>> >> 2014-05-20 16:34:40::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Downloading: repomdpoFiQgtmp.xml 3.4 k(100%)
>> >> 2014-05-20 16:34:41::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Downloading: repomddmRA9ttmp.xml (0%)
>> >> 2014-05-20 16:34:41::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Downloading: repomddmRA9ttmp.xml 951 (100%)
>> >> 2014-05-20 16:34:41::DEBUG::common_utils::332::root:: YUM: VERB: queue
>> >> package ovirt-engine for update
>> >> 2014-05-20 16:34:42::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> package ovirt-engine queued
>> >> 2014-05-20 16:34:42::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Building transaction
>> >> 2014-05-20 16:34:44::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Transaction built
>> >> 2014-05-20 16:34:44::DEBUG::engine-upgrade::314::root:: Transaction
>> >> Summary:
>> >> 2014-05-20 16:34:44::DEBUG::engine-upgrade::318::root:: update -
>> >> ovirt-engine-3.2.3-1.el6.noarch
>> >> 2014-05-20 16:34:44::DEBUG::engine-upgrade::318::root:: update -
>> >> ovirt-engine-backend-3.2.3-1.el6.noarch
>> >> 2014-05-20 16:34:44::DEBUG::engine-upgrade::318::root:: update -
>> >> ovirt-engine-dbscripts-3.2.3-1.el6.noarch
>> >> 2014-05-20 16:34:44::DEBUG::engine-upgrade::318::root:: update -
>> >> ovirt-engine-genericapi-3.2.3-1.el6.noarch
>> >> 2014-05-20 16:34:44::DEBUG::engine-upgrade::318::root:: update -
>> >> ovirt-engine-restapi-3.2.3-1.el6.noarch
>> >> 2014-05-20 16:34:44::DEBUG::engine-upgrade::318::root:: update -
>> >> ovirt-engine-tools-3.2.3-1.el6.noarch
>> >> 2014-05-20 16:34:44::DEBUG::engine-upgrade::318::root:: update -
>> >> ovirt-engine-userportal-3.2.3-1.el6.noarch
>> >> 2014-05-20 16:34:44::DEBUG::engine-upgrade::318::root:: update -
>> >> ovirt-engine-webadmin-portal-3.2.3-1.el6.noarch
>> >> 2014-05-20 16:34:44::DEBUG::engine-upgrade::329::root:: Yum
>> >> rollback-avail started
>> >> 2014-05-20 16:34:44::DEBUG::engine-upgrade::335::root:: Checking
>> >> package ovirt-engine-3.2.2-1.1.43.el6.noarch
>> >> 2014-05-20 16:34:44::DEBUG::engine-upgrade::335::root:: Checking
>> >> package ovirt-engine-backend-3.2.2-1.1.43.el6.noarch
>> >> 2014-05-20 16:34:45::DEBUG::engine-upgrade::335::root:: Checking
>> >> package ovirt-engine-dbscripts-3.2.2-1.1.43.el6.noarch
>> >> 2014-05-20 16:34:45::DEBUG::engine-upgrade::335::root:: Checking
>> >> package ovirt-engine-genericapi-3.2.2-1.1.43.el6.noarch
>> >> 2014-05-20 16:34:46::DEBUG::engine-upgrade::335::root:: Checking
>> >> package ovirt-engine-restapi-3.2.2-1.1.43.el6.noarch
>> >> 2014-05-20 16:34:46::DEBUG::engine-upgrade::335::root:: Checking
>> >> package ovirt-engine-tools-3.2.2-1.1.43.el6.noarch
>> >> 2014-05-20 16:34:47::DEBUG::engine-upgrade::335::root:: Checking
>> >> package ovirt-engine-userportal-3.2.2-1.1.43.el6.noarch
>> >> 2014-05-20 16:34:47::DEBUG::engine-upgrade::335::root:: Checking
>> >> package ovirt-engine-webadmin-portal-3.2.2-1.1.43.el6.noarch
>> >> 2014-05-20 16:34:48::DEBUG::engine-upgrade::340::root:: Yum
>> >> rollback-avail completed successfully
>> >> 2014-05-20 16:34:48::DEBUG::engine-upgrade::1045::root:: related to
>> >> database package ovirt-engine-backend
>> >> 2014-05-20 16:34:48::DEBUG::engine-upgrade::1045::root:: related to
>> >> database package ovirt-engine-dbscripts
>> >> 2014-05-20 16:34:48::DEBUG::engine-upgrade::200::root:: checking the
>> >> status of ovirt-engine service
>> >> 2014-05-20 16:34:48::DEBUG::common_utils::434::root:: Executing
>> >> command --> '/sbin/service ovirt-engine status'
>> >> 2014-05-20 16:34:48::DEBUG::common_utils::472::root:: output = The
>> >> engine is not running.
>> >>
>> >> 2014-05-20 16:34:48::DEBUG::common_utils::473::root:: stderr =
>> >> 2014-05-20 16:34:48::DEBUG::common_utils::474::root:: retcode = 3
>> >> 2014-05-20 16:34:48::DEBUG::engine-upgrade::595::root:: stopping
>> >> ovirt-engine service.
>> >> 2014-05-20 16:34:48::DEBUG::common_utils::434::root:: Executing
>> >> command --> '/sbin/service ovirt-engine stop'
>> >> 2014-05-20 16:34:48::DEBUG::common_utils::472::root:: output =
>> >> Stopping engine-service: [ OK ]
>> >>
>> >> 2014-05-20 16:34:48::DEBUG::common_utils::473::root:: stderr =
>> >> 2014-05-20 16:34:48::DEBUG::common_utils::474::root:: retcode = 0
>> >> 2014-05-20 16:34:48::DEBUG::common_utils::1289::root:: getting status
>> >> for engine-notifierd
>> >> 2014-05-20 16:34:48::DEBUG::common_utils::1298::root:: executing
>> >> action engine-notifierd on service status
>> >> 2014-05-20 16:34:48::DEBUG::common_utils::434::root:: Executing
>> >> command --> '/sbin/service engine-notifierd status'
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::472::root:: output =
>> >> /etc/init.d/engine-notifierd is stopped
>> >>
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::473::root:: stderr =
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::474::root:: retcode = 1
>> >> 2014-05-20 16:34:49::DEBUG::engine-upgrade::840::root:: Checking
>> >> active system tasks
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::434::root:: Executing
>> >> command --> '/usr/bin/psql -U engine -f
>> >> /usr/share/ovirt-engine/scripts/add_fn_db_get_async_tasks_function.sql
>> >> -d engine'
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::472::root:: output = DROP
>> >> TYPE
>> >> CREATE TYPE
>> >> CREATE FUNCTION
>> >>
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::473::root:: stderr =
>> >>
>> >> psql:/usr/share/ovirt-engine/scripts/add_fn_db_get_async_tasks_function.sql:18:
>> >> NOTICE: drop cascades to function fn_db_get_async_tasks()
>> >>
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::474::root:: retcode = 0
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::481::root:: running sql
>> >> query 'select * from fn_db_get_async_tasks();' on db server:
>> >> 'localhost'.
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::434::root:: Executing
>> >> command --> '/usr/bin/psql -h localhost -p 5432 -U engine -d engine -c
>> >> select * from fn_db_get_async_tasks();'
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::472::root:: output = dc_id
>> >> | dc_name | spm_host_id | spm_host_name | task_count
>> >> -------+---------+-------------+---------------+------------
>> >> (0 rows)
>> >>
>> >>
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::473::root:: stderr =
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::474::root:: retcode = 0
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::481::root:: running sql
>> >> query 'select command_type, entity_type from
>> >> business_entity_snapshot;' on db server: 'localhost'.
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::434::root:: Executing
>> >> command --> '/usr/bin/psql -h localhost -p 5432 -U engine -d engine -c
>> >> select command_type, entity_type from business_entity_snapshot;'
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::472::root:: output =
>> >> command_type | entity_type
>> >> --------------+-------------
>> >> (0 rows)
>> >>
>> >>
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::473::root:: stderr =
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::474::root:: retcode = 0
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::481::root:: running sql
>> >> query 'copy (select vds_id, vds_name, host_name, vds_unique_id, status
>> >> from vds) to stdout with csv header;' on db server: 'localhost'.
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::434::root:: Executing
>> >> command --> '/usr/bin/psql -h localhost -p 5432 -U engine -d engine -c
>> >> copy (select vds_id, vds_name, host_name, vds_unique_id, status from
>> >> vds) to stdout with csv header;'
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::472::root:: output =
>> >> vds_id,vds_name,host_name,vds_unique_id,status
>> >>
>> >> b108549c-1700-11e2-b936-9f5243b8ce13,node01.ukdm.gov.za,10.251.193.8,4C4C4544-0056-5910-8048-B7C04F43354A,3
>> >>
>> >> 322cbee8-16e6-11e2-9d38-6388c61dd004,node02.ukdm.gov.za,10.251.193.9,4C4C4544-0056-5910-8048-C4C04F43354A,3
>> >>
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::473::root:: stderr =
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::474::root:: retcode = 0
>> >> 2014-05-20 16:34:49::DEBUG::engine-upgrade::358::root:: DB Backup
>> >> started
>> >> 2014-05-20 16:34:49::DEBUG::common_utils::434::root:: Executing
>> >> command --> '/usr/bin/pg_dump -C -E UTF8 --disable-dollar-quoting
>> >> --disable-triggers -U engine -h localhost -p 5432 --format=p -f
>> >>
>> >> /var/lib/ovirt-engine/backups/ovirt-engine_db_backup_2014_05_20_16_34_21.sql
>> >> engine'
>> >> 2014-05-20 16:34:51::DEBUG::common_utils::472::root:: output =
>> >> 2014-05-20 16:34:51::DEBUG::common_utils::473::root:: stderr =
>> >> 2014-05-20 16:34:51::DEBUG::common_utils::474::root:: retcode = 0
>> >> 2014-05-20 16:34:51::DEBUG::engine-upgrade::374::root:: DB Backup
>> >> completed successfully
>> >> 2014-05-20 16:34:51::DEBUG::common_utils::481::root:: running sql
>> >> query 'ALTER DATABASE engine RENAME TO engine_2014_05_20_16_34_21' on
>> >> db server: 'localhost'.
>> >> 2014-05-20 16:34:51::DEBUG::common_utils::434::root:: Executing
>> >> command --> '/usr/bin/psql -h localhost -p 5432 -U engine -d template1
>> >> -c ALTER DATABASE engine RENAME TO engine_2014_05_20_16_34_21'
>> >> 2014-05-20 16:34:51::DEBUG::common_utils::472::root:: output =
>> >> 2014-05-20 16:34:51::DEBUG::common_utils::473::root:: stderr = ERROR:
>> >> must be owner of database engine
>> >>
>> >> 2014-05-20 16:34:51::DEBUG::common_utils::474::root:: retcode = 1
>> >> 2014-05-20 16:34:51::DEBUG::common_utils::332::root:: YUM: VERB:
>> >> Performing rollback
>> >> 2014-05-20 16:34:51::DEBUG::common_utils::1377::root:: Locking rpms in
>> >> yum-version-lock
>> >> 2014-05-20 16:34:51::ERROR::engine-upgrade::1159::root:: Traceback
>> >> (most recent call last):
>> >> File "/usr/bin/engine-upgrade", line 1152, in <module>
>> >> main(options)
>> >> File "/usr/bin/engine-upgrade", line 1079, in main
>> >> runFunc([[db.rename, DB_NAME_TEMP]], MSG_INFO_RENAME_DB)
>> >> File "/usr/bin/engine-upgrade", line 621, in runFunc
>> >> func[0](*func[1:])
>> >> File "/usr/bin/engine-upgrade", line 447, in rename
>> >> utils.execRemoteSqlCommand(SERVER_ADMIN, SERVER_NAME, SERVER_PORT,
>> >> basedefs.DB_TEMPLATE, query, True, MSG_ERROR_RENAME_DB)
>> >> File "/usr/share/ovirt-engine/scripts/common_utils.py", line 490, in
>> >> execRemoteSqlCommand
>> >> return execCmd(cmdList=cmd, failOnError=failOnError, msg=errMsg,
>> >> envDict=getPgPassEnv())
>> >> File "/usr/share/ovirt-engine/scripts/common_utils.py", line 477, in
>> >> execCmd
>> >> raise Exception(msg)
>> >> Exception: Error: Database rename failed. Check that there are no
>> >> active connections to the DB and try again.
>> >>
>> >> I'm guessing it's probably something simple, but I'm not much of a
>> >> postgres user, so it's unfortunately a bit beyond me to resolve.
>> >>
>> >> Please could someone point me in the right direction.
>> >>
>> >> Thank you.
>> >>
>> >> Regards
>> >>
>> >> Neil Wilson.
>> >>
>> >>
>> >>
>> >
>> > --
>> > Mit freundlichen Grüßen / Regards
>> >
>> > Sven Kieske
>> >
>> > Systemadministrator
>> > Mittwald CM Service GmbH & Co. KG
>> > Königsberger Straße 6
>> > 32339 Espelkamp
>> > T: +49-5772-293-100
>> > F: +49-5772-293-333
>> > https://www.mittwald.de
>> > Geschäftsführer: Robert Meyer
>> > St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad
>> > Oeynhausen
>> > Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad
>> > Oeynhausen
10 years, 6 months
[Users] Migrate cluster 3.3 -> 3.4 hosted on existing hosts
by Ted Miller
Current setup:
* 3 identical hosts running on HP GL180 g5 servers
o gluster running 5 volumes in replica 3
* engine running on VMware Server on another computer (that computer is NOT
available to convert to a host)
Where I want to end up:
* 3 identical hosted-engine hosts running on HP GL180 g5 servers
o gluster running 6 volumes in replica 3
+ new volume will be nfs storage for engine VM
* hosted engine in oVirt VM
* as few changes to current setup as possible
The two pages I found on the wiki are: Hosted Engine Howto
<http://www.ovirt.org/Hosted_Engine_Howto> and Migrate to Hosted Engine
<http://www.ovirt.org/Migrate_to_Hosted_Engine>. Both were written during
the testing process, and have not been updated to reflect production status.
I don't know if anything in the process has changed since they were written.
Process outlined in above two pages (as I understand it):
have nfs file store ready to hold the VM (see the export sketch after this outline)
Do a minimal install (not clear if oVirt Node, CentOS, or Fedora was
used--I am CentOS-based)
# yum install ovirt-hosted-engine-setup
# hosted-engine --deploy
Install OS on VM
return to host console
at "Please install the engine in the VM" prompt on host
on VM console
# yum install ovirt-engine
on old engine:
service ovirt-engine stop
chkconfig ovirt-engine off
set up dns for new engine
# engine-backup --mode=backup --file=backup1 --log=backup1.log
scp backup file to new engine VM
on new VM:
# engine-backup --mode=restore --file=backup1 --log=backup1-restore.log
--change-db-credentials --db-host=didi-lap --db-user=engine --db-password
--db-name=engine
# engine-setup
on host:
run script until: "The system will wait until the VM is down."
on new VM:
# reboot
on Host: finish script
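A minimal sketch of the "have nfs file store ready" step referenced above
(assuming a plain NFS export, either on an external server or on one of the
hosts; the path and options are only an example, and vdsm expects the export
to be owned by uid/gid 36):
# mkdir -p /exports/hosted-engine
# chown 36:36 /exports/hosted-engine
# echo '/exports/hosted-engine *(rw,anonuid=36,anongid=36,all_squash)' >> /etc/exports
# exportfs -ra
hosted-engine --deploy then asks for this storage path interactively.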
My questions:
1. Is the above still the recommended way to do a hosted-engine install?
2. Will it blow up at me if I use my existing host (with glusterfs all set
up, etc) as the starting point, instead of a clean install?
Thank you for letting me benefit from your experience,
Ted Miller
Elkhart, IN, USA
10 years, 6 months
emulated machine error
by Nathanaël Blanchet
Hello
I used to run oVirt 3.2.2 installed from the Dreyou repo, and it worked like a charm until now. I managed to upgrade to the official 3.3.5 repository, but I didn't pay attention to the host vdsm upgrade and installed 4.14, so the 3.3.5 web engine complained that it wasn't the appropriate vdsm. I then decided to upgrade the engine to 3.4.0 (el6), but this time none of my 6 hosts gets activated successfully, and the error message is that the host's emulated machine is not the right one. I updated all of them to 6.5, with or without the official qemu, and it is the same. I remember I changed the cluster compatibility from 3.3 to 3.4, but I can't roll back to 3.3 compatibility (it tells me it can't "decrease"). What can I do now? Fortunately I saved the initial 3.2 webadmin image, so I can revert to my functional 3.2 webadmin.
Has anyone else reported an issue like this?
Thank you for your help.
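One way to compare what a host actually provides with what the 3.4 cluster
expects (a rough sketch for an el6 host, not a confirmed fix):
# /usr/libexec/qemu-kvm -M ?
# vdsClient -s 0 getVdsCaps | grep -i emulatedMachines
The cluster's emulated machine (e.g. rhel6.5.0) has to appear in the host's
emulatedMachines list; if it does not, updating qemu-kvm on the hosts or
adjusting the emulated machine expected by the cluster on the engine side are
the usual directions to look in.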
10 years, 6 months
SLA : RAM scheduling
by Nathanaël Blanchet
Hello,
On oVirt 3.4, is it possible to schedule VM distribution depending on
host RAM availability?
Concretely, I had to manually move all the VMs to the second host of
the cluster, which led to 90% memory occupation on the destination host.
When my first host came back after its reboot, none of the VMs on the
second host automatically migrated to the first one, which had all of its
RAM free. How can I make this happen?
--
Nathanaël Blanchet
Supervision réseau
Pôle exploitation et maintenance
Département des systèmes d'information
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
10 years, 6 months
glusterfs tips/questions
by Gabi C
Hello!
I have an oVirt 3.4.1 setup, up to date, with the gluster package 3.5.0-3.fc19
on all 3 nodes. The GlusterFS setup is replicated across 3 bricks. On 2 nodes,
'gluster peer status' shows 2 peers connected, each with its UUID. On the third
node, 'gluster peer status' shows 3 peers, of which two refer to the same
node/IP but have different UUIDs.
What I have tried:
- stopped the gluster volumes, put the 3rd node in maintenance, rebooted -> no effect;
- stopped the volumes, removed the bricks belonging to the 3rd node, re-added
them, started the volumes, but still no effect.
Any ideas, hints?
TIA
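A hedged sketch of how this duplicate-peer state is usually inspected and
cleaned up (the stale file name is a placeholder; back up /var/lib/glusterd
first and do this while the volumes are stopped), on the node that shows the
duplicates:
# systemctl stop glusterd
# ls /var/lib/glusterd/peers/
# grep hostname1 /var/lib/glusterd/peers/*
# mv /var/lib/glusterd/peers/<stale-uuid-file> /root/
# systemctl start glusterd
# gluster peer status
Each file in /var/lib/glusterd/peers/ is named after a peer's UUID; the UUID a
node reports about itself is in its own /var/lib/glusterd/glusterd.info, so the
entry whose UUID does not match any real node is the one to move away.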
10 years, 6 months