Upgrade from 4.3.2 to 4.3.3 fails on database schema update
by eshwayri@gmail.com
Tried to upgrade to 4.3.3, and during engine-setup I get:
2019-04-20 14:24:47,041-0400 Running upgrade sql script '/usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql'...
2019-04-20 14:24:47,043-0400 dbfunc_psql_die --file=/usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql
********* QUERY **********
SELECT fn_db_create_constraint('image_transfers',
'fk_image_transfers_command_enitites',
'FOREIGN KEY (command_id) REFERENCES command_entities(command_id) ON DELETE CASCADE');
**************************
2019-04-20 14:24:47,060-0400 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.executeRaw:863 execute-result: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', '/var/log/ovirt-engine/setup/ovirt-engine-setup-20190420142153-s5heq0.log', '-c', 'apply'], rc=1
2019-04-20 14:24:47,060-0400 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:921 execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', '/var/log/ovirt-engine/setup/ovirt-engine-setup-20190420142153-s5heq0.log', '-c', 'apply'] stdout:
2019-04-20 14:24:47,061-0400 DEBUG otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema plugin.execute:926 execute-output: ['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost', '-p', '5432', '-u', 'engine', '-d', 'engine', '-l', '/var/log/ovirt-engine/setup/ovirt-engine-setup-20190420142153-s5heq0.log', '-c', 'apply'] stderr:
psql:/usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql:3: ERROR: insert or update on table "image_transfers" violates foreign key constraint "fk_image_transfers_command_enitites"
DETAIL: Key (command_id)=(7d68555a-71ee-41f5-9f12-0b26b6b9d449) is not present in table "command_entities".
CONTEXT: SQL statement "ALTER TABLE image_transfers ADD CONSTRAINT fk_image_transfers_command_enitites FOREIGN KEY (command_id) REFERENCES command_entities(command_id) ON DELETE CASCADE"
PL/pgSQL function fn_db_create_constraint(character varying,character varying,text) line 9 at EXECUTE
FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql
2019-04-20 14:24:47,061-0400 ERROR otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema schema._misc:432 schema.sh: FATAL: Cannot execute sql command: --file=/usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql
2019-04-20 14:24:47,061-0400 DEBUG otopi.context context._executeMethod:145 method exception
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/otopi/context.py", line 132, in _executeMethod
method['method']()
File "/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py", line 434, in _misc
raise RuntimeError(_('Engine schema refresh failed'))
RuntimeError: Engine schema refresh failed
2019-04-20 14:24:47,064-0400 ERROR otopi.context context._executeMethod:154 Failed to execute stage 'Misc configuration': Engine schema refresh failed
2019-04-20 14:24:47,065-0400 DEBUG otopi.transaction transaction.abort:119 aborting 'Yum Transaction'
2019-04-20 14:24:47,065-0400 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum Performing yum transaction rollback
5 years, 7 months
Re: Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS
by Strahil
Fix those disconnectes node and run find against a node that has successfully mounted the volume.
Best Regards,
Strahil NikolovOn Apr 24, 2019 15:31, Andreas Elvers <andreas.elvers+ovirtforum(a)solutions.work> wrote:
>
> The file handle is stale so find will display:
>
> "find: '/rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore': Transport endpoint is not connected"
>
> "stat /rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore" will output
> stat: cannot stat '/rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore': Transport endpoint is not connected
>
> All Nodes are peering with the other nodes:
> -----
> Saiph:~ andreas$ ssh node01 gluster peer status
> Number of Peers: 2
>
> Hostname: node02.infra.solutions.work
> Uuid: 87fab40a-2395-41ce-857d-0b846e078cdb
> State: Peer in Cluster (Connected)
>
> Hostname: node03.infra.solutions.work
> Uuid: 49025f81-e7c1-4760-be03-f36e0f403d26
> State: Peer in Cluster (Connected)
> ----
> Saiph:~ andreas$ ssh node02 gluster peer status
> Number of Peers: 2
>
> Hostname: node03.infra.solutions.work
> Uuid: 49025f81-e7c1-4760-be03-f36e0f403d26
> State: Peer in Cluster (Disconnected)
>
> Hostname: node01.infra.solutions.work
> Uuid: f25e6bff-e5e2-465f-a33e-9148bef94633
> State: Peer in Cluster (Connected)
> ----
> ssh node03 gluster peer status
> Number of Peers: 2
>
> Hostname: node02.infra.solutions.work
> Uuid: 87fab40a-2395-41ce-857d-0b846e078cdb
> State: Peer in Cluster (Connected)
>
> Hostname: node01.infra.solutions.work
> Uuid: f25e6bff-e5e2-465f-a33e-9148bef94633
> State: Peer in Cluster (Connected)
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/DI6AWTLIQIP...
5 years, 7 months
error while setup
by W3SERVICES
PLAY [gluster_servers]
*********************************************************
TASK [Run a shell script]
******************************************************
changed: [localhost.localdomain] =>
(item=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
localhost.localdomain)
PLAY RECAP
*********************************************************************
localhost.localdomain : ok=1 changed=1 unreachable=0
failed=0
PLAY [gluster_servers]
*********************************************************
TASK [Enable or disable services]
**********************************************
ok: [localhost.localdomain] => (item=chronyd)
PLAY RECAP
*********************************************************************
localhost.localdomain : ok=1 changed=0 unreachable=0
failed=0
PLAY [gluster_servers]
*********************************************************
TASK [start/stop/restart/reload services]
**************************************
changed: [localhost.localdomain] => (item=chronyd)
PLAY RECAP
*********************************************************************
localhost.localdomain : ok=1 changed=1 unreachable=0
failed=0
PLAY [gluster_servers]
*********************************************************
TASK [Run a command in the shell]
**********************************************
changed: [localhost.localdomain] => (item=vdsm-tool configure --force)
PLAY RECAP
*********************************************************************
localhost.localdomain : ok=1 changed=1 unreachable=0
failed=0
PLAY [gluster_servers]
*********************************************************
TASK [Run a shell script]
******************************************************
changed: [localhost.localdomain] =>
(item=/usr/share/gdeploy/scripts/blacklist_all_disks.sh)
PLAY RECAP
*********************************************************************
localhost.localdomain : ok=1 changed=1 unreachable=0
failed=0
PLAY [gluster_servers]
*********************************************************
TASK [Clean up filesystem signature]
*******************************************
skipping: [localhost.localdomain] => (item=/dev/sdb)
TASK [Create Physical Volume]
**************************************************
failed: [localhost.localdomain] (item=/dev/sdb) => {"changed": false,
"failed_when_result": true, "item": "/dev/sdb", "msg": " Device /dev/sdb
not found.\n", "rc": 5}
to retry, use: --limit @/tmp/tmpQtig6r/pvcreate.retry
PLAY RECAP
*********************************************************************
localhost.localdomain : ok=0 changed=0 unreachable=0
failed=1
From
*Sunil Kumar | 09967222267Tech Head and Cloud Solution Architect*
*-----------------------------------------------------------------------*
*Website : *
*https://w3services.net <https://www.w3services.net/>*
https://w3services.com (Tech Consultancy)
Office Tel. +91 09300670068 | Write A Google Review ( Help Us )
<https://goo.gl/tKGa79>
*-----------------------------------------------------------------------*
5 years, 7 months
Arbiter brick disk performance
by Leo David
Hello Everyone,
I need to look into adding some enterprise grade sas disks ( both ssd
and spinning ), and since the prices are not too low, I would like to
benefit of replica 3 arbitrated.
Therefore, I intend to buy some smaller disks for use them as arbiter
brick.
My question is, what performance ( regarding iops, througput ) the arbiter
disks need to be. Should they be at least the same as the real data disks ?
Knowing that they only keep metadata, I am thinking that will not be so
much pressure on the arbiters.
Any thoughts?
Thank you !
--
Best regards, Leo David
5 years, 7 months
Upgrading from 4.2.8 to 4.3.3 broke Node NG GlusterFS
by Andreas Elvers
Hi,
I am currently upgrading my oVirt setup from 4.2.8 to 4.3.3.1.
The setup consists of:
Datacenter/Cluster Default: [fully upgraded to 4.3.3.1]
2 nodes (node04,node05)- NFS storage domain with self hosted engine
Datacenter Luise:
Cluster1: 3 nodes (node01,node02,node03) - Node NG with GlusterFS - Ceph Cinder storage domain
[Node1 and Node3 are upgraded to 4.3.3.1, Node2 is on 4.2.8]
Cluster2: 1 node (node06) - only Ceph Cinder storage domain [fully upgraded to 4.3.3.1]
Problems started when upgrading Luise/Cluster1 with GlusterFS:
(I always waited for GlusterFS to be fully synced before proceeding to the next step)
- Upgrade node01 to 4.3.3 -> OK
- Upgrade node03 to 4.3.3.1 -> OK
- Upgrade node01 to 4.3.3.1 -> GlusterFS became unstable.
I now get the error message:
VDSM node03.infra.solutions.work command ConnectStoragePoolVDS failed: Cannot find master domain: u'spUUID=f3218bf7-6158-4b2b-b272-51cdc3280376, msdUUID=02a32017-cbe6-4407-b825-4e558b784157'
And on node03 there is a problem with Gluster:
node03#: ls -l /rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore
ls: cannot access /rhev/data-center/mnt/glusterSD/node01.infra.solutions.work:_vmstore: Transport endpoint is not connected
The directory is available on node01 and node02.
The engine is reporting the brick on node03 as down. Node03 and Node06 are shown as NonOperational, because they are not able to access the gluster storage domain.
A “gluster peer status” on node1, node2, and node3 shows all peers connected.
“gluster volume heal vmstore info” shows for all nodes:
gluster volume heal vmstore info
Brick node01.infra.solutions.work:/gluster_bricks/vmstore/vmstore
Status: Transport endpoint is not connected
Number of entries: -
Brick node02.infra.solutions.work:/gluster_bricks/vmstore/vmstore
<gfid:0bcb7825-e649-4178-a899-c5cc04c95286>
<gfid:71ec8035-f5a5-4e61-bb34-5ad9db28c0eb>
<gfid:16d5961e-c3bb-4493-a51d-bf83074c4cc7>
/02a32017-cbe6-4407-b825-4e558b784157/dom_md/ids
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.66
<gfid:5fe350e4-1eb5-4b6f-a3fb-42c98b7b2f8d>
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.60
/02a32017-cbe6-4407-b825-4e558b784157/images/a3a10398-9698-4b73-84d9-9735448e3534/6161e310-4ad6-42d9-8117-5a89c5b2b4b6
<gfid:8eb9fd30-fdb9-442b-9c54-8ba256d7981b>
<gfid:c72001be-e7d3-4b34-bac5-9ab50b609eea>
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.96
<gfid:447ec09b-336e-4d2b-8338-f31329ee7a55>
<gfid:9d7db516-d6fb-43d8-a069-dcbc1d72e62a>
/.shard/d66880de-3fa1-4362-8c43-574a173c5f7d.133
<gfid:2412d449-d3ed-40ef-b7eb-d81bdf7c5c05>
<gfid:0fae358b-2cdd-4064-b63c-7f31a35bc35a>
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.38
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.67
/__DIRECT_IO_TEST__
<gfid:a7945526-9ff3-40fe-b1e2-3921117ef738>
<gfid:e78e9c1f-ce6b-4871-b5bf-9bde34685b99>
/02a32017-cbe6-4407-b825-4e558b784157/images/493188b2-c137-4440-99ee-43a753842a7d/9aa2d139-e3bd-406b-8fe0-b189123eaa73
<gfid:3aed3fb6-044a-4371-9302-e0bd54cbd794>
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.64
/.shard/d66880de-3fa1-4362-8c43-574a173c5f7d.132
<gfid:f7631be7-2ab5-4985-904d-69174c0e1267>
<gfid:43001625-1aad-4032-a76e-4cc2a51de2b3>
<gfid:6ae3fe7f-15c9-4103-960c-faba0ba59cb3>
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.44
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.9
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.69
<gfid:f20cc6f7-9391-4260-9238-3e1d0cabbfa3>
/02a32017-cbe6-4407-b825-4e558b784157/images/12e647fb-20aa-4957-b659-05fa75a9215e/f7e4b2a3-ab84-4eb5-a4e7-7208ddad8156
<gfid:c540368a-4431-4405-9a59-e11a217d0ea6>
<gfid:4e698a74-39dc-40a3-ac9c-14456420ab66>
<gfid:afd48e71-ff23-42d7-aef4-b2e2167b75e8>
<gfid:194589b3-0760-4150-80ef-d87376813835>
<gfid:6e17ead1-88cc-4e3e-84fa-7495c4fc3a0e>
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.35
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.32
<gfid:982d071f-2081-4371-b007-bc48d8167e7c>
<gfid:e2285905-a8da-44f5-8c56-1f8a4d6326a8>
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.39
<gfid:956c7d9c-2f96-42d1-bf6e-57bc9e534f84>
<gfid:16162bc7-201a-4842-a41a-af2cc4fb8a9e>
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.34
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.68
Status: Connected
Number of entries: 47
Brick node03.infra.solutions.work:/gluster_bricks/vmstore/vmstore
/02a32017-cbe6-4407-b825-4e558b784157/images/12e647fb-20aa-4957-b659-05fa75a9215e/f7e4b2a3-ab84-4eb5-a4e7-7208ddad8156
<gfid:c540368a-4431-4405-9a59-e11a217d0ea6>
<gfid:4e698a74-39dc-40a3-ac9c-14456420ab66>
<gfid:afd48e71-ff23-42d7-aef4-b2e2167b75e8>
<gfid:194589b3-0760-4150-80ef-d87376813835>
<gfid:6e17ead1-88cc-4e3e-84fa-7495c4fc3a0e>
<gfid:099284a6-9538-4f9a-928a-d9b704fe0735>
<gfid:75d3c8f7-d67a-49a4-9cd4-7fff202df40d>
<gfid:982d071f-2081-4371-b007-bc48d8167e7c>
<gfid:e2285905-a8da-44f5-8c56-1f8a4d6326a8>
<gfid:447ec09b-336e-4d2b-8338-f31329ee7a55>
<gfid:956c7d9c-2f96-42d1-bf6e-57bc9e534f84>
/.shard/d66880de-3fa1-4362-8c43-574a173c5f7d.133
<gfid:43001625-1aad-4032-a76e-4cc2a51de2b3>
<gfid:6ae3fe7f-15c9-4103-960c-faba0ba59cb3>
<gfid:1a0b2737-9172-4c51-aa77-e93e9671840c>
<gfid:eb471e13-6749-4f62-b1f5-15a44f8990c2>
<gfid:a7945526-9ff3-40fe-b1e2-3921117ef738>
<gfid:e78e9c1f-ce6b-4871-b5bf-9bde34685b99>
/02a32017-cbe6-4407-b825-4e558b784157/images/493188b2-c137-4440-99ee-43a753842a7d/9aa2d139-e3bd-406b-8fe0-b189123eaa73
<gfid:6b418e80-9f61-4d6e-ba77-8a1969d9a99b>
<gfid:914c72d2-e45e-48f2-b7ef-5846b13f7a91>
<gfid:2bd28bdb-1dc6-41d5-96be-c696f452e3f2>
<gfid:9d7db516-d6fb-43d8-a069-dcbc1d72e62a>
<gfid:f7631be7-2ab5-4985-904d-69174c0e1267>
<gfid:16162bc7-201a-4842-a41a-af2cc4fb8a9e>
/.shard/40948f85-2212-47f9-bd5e-102a8dd632b8.44
<gfid:afc6d611-528d-441b-b74e-c5fae6672088>
<gfid:707308e9-e8e5-487a-b0f1-a816720c4243>
<gfid:5f1a81a7-7c42-4226-9142-1b5b35c2b1e9>
<gfid:f20cc6f7-9391-4260-9238-3e1d0cabbfa3>
<gfid:0bcb7825-e649-4178-a899-c5cc04c95286>
<gfid:71ec8035-f5a5-4e61-bb34-5ad9db28c0eb>
<gfid:16d5961e-c3bb-4493-a51d-bf83074c4cc7>
/02a32017-cbe6-4407-b825-4e558b784157/dom_md/ids
<gfid:7661180b-1917-4a7b-9749-5dfb826c4449>
<gfid:5fe350e4-1eb5-4b6f-a3fb-42c98b7b2f8d>
<gfid:a6197593-7e09-4d3f-b538-9cd1ebadd6c9>
/02a32017-cbe6-4407-b825-4e558b784157/images/a3a10398-9698-4b73-84d9-9735448e3534/6161e310-4ad6-42d9-8117-5a89c5b2b4b6
<gfid:8eb9fd30-fdb9-442b-9c54-8ba256d7981b>
<gfid:c72001be-e7d3-4b34-bac5-9ab50b609eea>
<gfid:3aed3fb6-044a-4371-9302-e0bd54cbd794>
/.shard/d66880de-3fa1-4362-8c43-574a173c5f7d.132
<gfid:2412d449-d3ed-40ef-b7eb-d81bdf7c5c05>
<gfid:0fae358b-2cdd-4064-b63c-7f31a35bc35a>
<gfid:c0ca2784-a8af-44b3-9091-a1eaf4c8676f>
/__DIRECT_IO_TEST__
Status: Connected
Number of entries: 47
On Node03 there are several self healing processes, that seem to be doing nothing.
Oh well.. What now?
Best regards,
- Andreas
5 years, 7 months
Unable to use MAC address starting with reserved value 0xFE
by Ricardo Alonso
Is there a way to use a mac starting with FE? The machine has a license requirement attached to the mac address, and when I try to start it, it fails with the message:
VM is down with error. Exit message: unsupported configuration: Unable to use MAC address starting with reserved value 0xFE - 'fe:XX:XX:XX:XX:XX'
5 years, 7 months
Upgrade Initiation fails without obvious reason
by Vrgotic, Marko
Dear oVIrt,
Short platform summary:
* 5 Hosts
* 3/5 in Shared DC/Cluster with NetApp NFS storage
* 1/5 with Local Storage
* 1/5 with Local Storage
* oVirt version 4.3.2 , recently upgraded from 4.2.8
Host related: ovirt-staging-hv-04
Host is moved to Maintenance.
Selected Upgrade, following log lines are written:
2019-04-23 10:58:32,639Z INFO [org.ovirt.engine.core.bll.hostdeploy.UpgradeHostCommand] (default task-247) [11381573-01c0-4e48-8634-c577988a8f79] Running command: UpgradeHostCommand internal: false. Entities affected : ID: 9462386e-74d1-4000-96b0-0e081ec7d069 Type: VDSAction group EDIT_HOST_CONFIGURATION with role type ADMIN
2019-04-23 10:58:32,648Z INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-247) [11381573-01c0-4e48-8634-c577988a8f79] EVENT_ID: HOST_UPGRADE_STARTED(840), Host ovirt-staging-hv-04 upgrade was started (User: mvrgotic@ictv.com(a)ictv.com-authz<mailto:mvrgotic@ictv.com@ictv.com-authz>).
However, the host is still in Maintenance mode, expected is Installing:
[cid:image001.png@01D4F9D5.35D3A4C0]
Tail YUM logs also shows the procedure did not start and packages are not being updated.
Kindly awaiting you reply.
Marko Vrgotic
5 years, 7 months
Prevent 2 different VMs from running on the same host
by Paulo Silva
Hi,
I have a cluster of 6 hosts using ovirt 4.3 and I want to make sure that 2
VMs are always started on different hosts.
Is it possible to prevent 2 different VMs from running on the same physical
host without specifying manually a different set of hosts where each VM can
start running?
Thanks
--
Paulo Silva <paulojjs(a)gmail.com>
5 years, 7 months
Scale out ovirt 4.3 (from 3 to 6 or 9 nodes) with hyperconverged setup and Gluster
by adrianquintero@gmail.com
Hello,
I have a 3 node Hyperconverged setup with gluster and added 3 new nodes to the cluster for a total of 6 servers.
I am now taking advantage of more compute power but cant scale out my storage volumes.
Current Hyperconverged setup:
- host1.mydomain.com ---> Bricks: engine data1 vmstore1
- host2.mydomain.com ---> Bricks: engine data1 vmstore1
- host3.mydomain.com ---> Bricks: engine data1 vmstore1
From these 3 servers we get the following Volumes:
- engine(host1:engine, host2:engine, host3:engine)
- data1 (host1:data1, host2:data1, host3:data1)
- vmstore1 (host1:vmstore1, host2:vmstore1, host3:vmstore11)
The following are the newly added servers to the cluster, however as you can see there are no gluster bricks
- host4.mydomain.com
- host5.mydomain.com
- host6.mydomain.com
I know that the bricks must be added in sets of 3 and per the first 3 hosts that is how it was deployed thru the web UI.
Questions:
-how can i extend the gluster volumes engine, data1 and vmstore using host4, host5 and host6?
-Do I need to configure gluster volumes manually through the OS CLI in order for them to span amongst all 6 servers?
-If I configure the fail storage scenario manually will oVirt know about it?, will it still be hyperconverged?
I have only seen 3 host hyperconverged setup examples with gluster, but have not found examples for 6, 9 or 12 host cluster with gluster.
I know it might be a lack of understanding from my end on how ovirt and gluster integrate with one another.
If you can point me in the right direction would be great.
thanks,
5 years, 7 months