Re: Ovirt 4.2.7 won't start and drops to emergency console
by Strahil
Hm...
It's strange that it doesn't detect the VG, but that could be related to the issue.
According to this:
lvthinpool_tmeta {
id = "WBut10-rAOP-FzA7-bJvr-ZdxL-lB70-jzz1Tv"
status = ["READ", "WRITE"]
flags = []
creation_time = 1545495487 # 2018-12-22 10:18:07 -0600
creation_host = "vmh.cyber-range.lan"
segment_count = 1
segment1 {
start_extent = 0
extent_count = 16192 # 15.8125 Gigabytes
You've got ~15.8 GiB of metadata, so create your new metadata LV with at least 30 GiB.
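Something along these lines (an untested sketch; the LV name is illustrative, and the temporary metadata LV has to live in gluster_vg1 itself):
# lvcreate -L 30G -n tmpLV_meta gluster_vg1        # new, larger temporary metadata LV, in the same VG as the pool
# lvchange -ay gluster_vg1/tmpLV_meta              # activate it
# lvconvert --thinpool gluster_vg1/lvthinpool --poolmetadata gluster_vg1/tmpLV_meta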
Best Regards,
Strahil Nikolov
On Oct 2, 2019 04:49, jeremy_tourville(a)hotmail.com wrote:
>
> I don't know why I didn't think to get some more info regarding my storage environment and post it here earlier. My gluster_vg1 volume is on /dev/sda1. I can access the engine storage directory, but I think that is because it is not thin provisioned. I guess I was too bogged down in solving the problem while stuck in emergency mode. I had to sneaker-net my USB drive to my system so I could capture some info. Anyhow, here it is:
>
> # lsblk
> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
> sda 8:0 0 5.5T 0 disk
> └─sda1 8:1 0 5.5T 0 part
> sdb 8:16 0 223.6G 0 disk
> ├─sdb1 8:17 0 1G 0 part /boot
> └─sdb2 8:18 0 222.6G 0 part
> └─md127 9:127 0 222.5G 0 raid1
> ├─onn_vmh-swap 253:0 0 4G 0 lvm [SWAP]
> ├─onn_vmh-pool00_tmeta 253:1 0 1G 0 lvm
> │ └─onn_vmh-pool00-tpool 253:3 0 173.6G 0 lvm
> │ ├─onn_vmh-ovirt--node--ng--4.2.7.1--0.20181216.0+1 253:4 0 146.6G 0 lvm /
> │ ├─onn_vmh-pool00 253:5 0 173.6G 0 lvm
> │ ├─onn_vmh-root 253:6 0 146.6G 0 lvm
> │ ├─onn_vmh-home 253:7 0 1G 0 lvm /home
> │ ├─onn_vmh-tmp 253:8 0 1G 0 lvm /tmp
> │ ├─onn_vmh-var 253:9 0 15G 0 lvm /var
> │ ├─onn_vmh-var_log 253:10 0 8G 0 lvm /var/log
> │ ├─onn_vmh-var_log_audit 253:11 0 2G 0 lvm /var/log/audit
> │ └─onn_vmh-var_crash 253:12 0 10G 0 lvm /var/crash
> └─onn_vmh-pool00_tdata 253:2 0 173.6G 0 lvm
> └─onn_vmh-pool00-tpool 253:3 0 173.6G 0 lvm
> ├─onn_vmh-ovirt--node--ng--4.2.7.1--0.20181216.0+1 253:4 0 146.6G 0 lvm /
> ├─onn_vmh-pool00 253:5 0 173.6G 0 lvm
> ├─onn_vmh-root 253:6 0 146.6G 0 lvm
> ├─onn_vmh-home 253:7 0 1G 0 lvm /home
> ├─onn_vmh-tmp 253:8 0 1G 0 lvm /tmp
> ├─onn_vmh-var 253:9 0 15G 0 lvm /var
> ├─onn_vmh-var_log 253:10 0 8G 0 lvm /var/log
> ├─onn_vmh-var_log_audit 253:11 0 2G 0 lvm /var/log/audit
> └─onn_vmh-var_crash 253:12 0 10G 0 lvm /var/crash
> sdc 8:32 0 223.6G 0 disk
> └─sdc1 8:33 0 222.6G 0 part
> └─md127 9:127 0 222.5G 0 raid1
> ├─onn_vmh-swap 253:0 0 4G 0 lvm [SWAP]
> ├─onn_vmh-pool00_tmeta 253:1 0 1G 0 lvm
> │ └─onn_vmh-pool00-tpool 253:3 0 173.6G 0 lvm
> │ ├─onn_vmh-ovirt--node--ng--4.2.7.1--0.20181216.0+1 253:4 0 146.6G 0 lvm /
> │ ├─onn_vmh-pool00 253:5 0 173.6G 0 lvm
> │ ├─onn_vmh-root 253:6 0 146.6G 0 lvm
> │ ├─onn_vmh-home 253:7 0 1G 0 lvm /home
> │ ├─onn_vmh-tmp 253:8 0 1G 0 lvm /tmp
> │ ├─onn_vmh-var 253:9 0 15G 0 lvm /var
> │ ├─onn_vmh-var_log 253:10 0 8G 0 lvm /var/log
> │ ├─onn_vmh-var_log_audit 253:11 0 2G 0 lvm /var/log/audit
> │ └─onn_vmh-var_crash 253:12 0 10G 0 lvm /var/crash
> └─onn_vmh-pool00_tdata 253:2 0 173.6G 0 lvm
> └─onn_vmh-pool00-tpool 253:3 0 173.6G 0 lvm
> ├─onn_vmh-ovirt--node--ng--4.2.7.1--0.20181216.0+1 253:4 0 146.6G 0 lvm /
> ├─onn_vmh-pool00 253:5 0 173.6G 0 lvm
> ├─onn_vmh-root 253:6 0 146.6G 0 lvm
> ├─onn_vmh-home 253:7 0 1G 0 lvm /home
> ├─onn_vmh-tmp 253:8 0 1G 0 lvm /tmp
> ├─onn_vmh-var 253:9 0 15G 0 lvm /var
> ├─onn_vmh-var_log 253:10 0 8G 0 lvm /var/log
> ├─onn_vmh-var_log_audit 253:11 0 2G 0 lvm /var/log/audit
> └─onn_vmh-var_crash 253:12 0 10G 0 lvm /var/crash
> sdd 8:48 0 596.2G 0 disk
> └─sdd1 8:49 0 4G 0 part
> └─gluster_vg3-tmpLV 253:13 0 2G 0 lvm
> sde 8:64 1 7.5G 0 disk
> └─sde1 8:65 1 7.5G 0 part /mnt
>
> # blkid
> /dev/sda1: UUID="f026a2dc-201a-4b43-974e-2419a8783bce" TYPE="xfs" PARTLABEL="Linux filesystem" PARTUUID="4bca8a3a-42f0-4877-aa60-f544bf1fdce7"
> /dev/sdc1: UUID="e5f4acf5-a4bc-6470-7b6f-415e3f4077ff" UUID_SUB="a895900e-5585-8f31-7515-1ff7534e39d7" LABEL="vmh.cyber-range.lan:pv00" TYPE="linux_raid_member"
> /dev/sdb1: UUID="9b9546f9-25d2-42a6-835b-303f32aee4b1" TYPE="ext4"
> /dev/sdb2: UUID="e5f4acf5-a4bc-6470-7b6f-415e3f4077ff" UUID_SUB="6e20b5dd-0152-7f42-22a7-c17133fbce45" LABEL="vmh.cyber-range.lan:pv00" TYPE="linux_raid_member"
> /dev/sdd1: UUID="2nLjVF-sh3N-0qkm-aUQ1-jnls-3e8W-tUkBw5" TYPE="LVM2_member"
> /dev/md127: UUID="Mq1chn-6XhF-WCwF-LYhl-tZEz-Y8lq-8R2Ifq" TYPE="LVM2_member"
> /dev/mapper/onn_vmh-swap: UUID="1b0b9c91-22ed-41d1-aebf-e22fd9aa05d9" TYPE="swap"
> /dev/mapper/onn_vmh-ovirt--node--ng--4.2.7.1--0.20181216.0+1: UUID="b0e1c479-9696-4e19-b799-7f81236026b7" TYPE="ext4"
> /dev/mapper/onn_vmh-root: UUID="60905f5d-ed91-4ca9-9729-9a72a4678ddd" TYPE="ext4"
> /dev/mapper/onn_vmh-home: UUID="82a1d567-f8af-4b96-bfbf-5f79dff7384f" TYPE="ext4"
> /dev/mapper/onn_vmh-tmp: UUID="7dd9d3ae-3af7-4763-9683-19f583d8d15b" TYPE="ext4"
> /dev/mapper/onn_vmh-var: UUID="f206e030-876b-45a9-8a90-a0e54005b85c" TYPE="ext4"
> /dev/mapper/onn_vmh-var_log: UUID="b8a12f56-0818-416c-9fb7-33b48ef29eed" TYPE="ext4"
> /dev/mapper/onn_vmh-var_log_audit: UUID="bc78ad0c-9ab6-4f57-a69f-5b1ddf898552" TYPE="ext4"
> /dev/mapper/onn_vmh-var_crash: UUID="a941d416-4d7d-41ae-bcd4-8c1ec9d0f744" TYPE="ext4"
> /dev/sde1: UUID="44aa40d0-6c82-4e8e-8218-177e5c8474f4" TYPE="ext4"
>
> # pvscan
> PV /dev/md127 VG onn_vmh lvm2 [222.44 GiB / 43.66 GiB free]
> PV /dev/sdd1 VG gluster_vg3 lvm2 [<4.00 GiB / <2.00 GiB free]
> Total: 2 [<226.44 GiB] / in use: 2 [<226.44 GiB] / in no VG: 0 [0 ]
> Reading all physical volumes. This may take a while...
> Found volume group "onn_vmh" using metadata type lvm2
> Found volume group "gluster_vg3" using metadata type lvm2
>
> # lvscan
> ACTIVE '/dev/onn_vmh/pool00' [173.60 GiB] inherit
> ACTIVE '/dev/onn_vmh/root' [146.60 GiB] inherit
> ACTIVE '/dev/onn_vmh/home' [1.00 GiB] inherit
> ACTIVE '/dev/onn_vmh/tmp' [1.00 GiB] inherit
> ACTIVE '/dev/onn_vmh/var' [15.00 GiB] inherit
> ACTIVE '/dev/onn_vmh/var_log' [8.00 GiB] inherit
> ACTIVE '/dev/onn_vmh/var_log_audit' [2.00 GiB] inherit
> ACTIVE '/dev/onn_vmh/swap' [4.00 GiB] inherit
> inactive '/dev/onn_vmh/ovirt-node-ng-4.2.7.1-0.20181216.0' [146.60 GiB] inherit
> ACTIVE '/dev/onn_vmh/ovirt-node-ng-4.2.7.1-0.20181216.0+1' [146.60 GiB] inherit
> ACTIVE '/dev/onn_vmh/var_crash' [10.00 GiB] inherit
> ACTIVE '/dev/gluster_vg3/tmpLV' [2.00 GiB] inherit
>
> [/etc/lvm/backup/gluster_vg1]
> # Generated by LVM2 version 2.02.180(2)-RHEL7 (2018-07-20): Sat Dec 22 10:18:46 2018
>
> contents = "Text Format Volume Group"
> version = 1
>
> description = "Created *after* executing '/usr/sbin/lvcreate --virtualsize 500GB --name lv_datadisks -T gluster_vg1/lvthinpool'"
>
> creation_host = "vmh.cyber-range.lan" # Linux vmh.cyber-range.lan 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 29 14:49:43 UTC 2018 x86_64
> creation_time = 1545495526 # Sat Dec 22 10:18:46 2018
>
> gluster_vg1 {
> id = "TfNqtn-2eX6-i5gC-w4ye-h29n-5Zfy-UFHSvU"
> seqno = 9
> format = "lvm2" # informational
> status = ["RESIZEABLE", "READ", "WRITE"]
> flags = []
> extent_size = 2048 # 1024 Kilobytes
> max_lv = 0
> max_pv = 0
> metadata_copies = 0
>
> physical_volumes {
>
> pv0 {
> id = "DRfoKl-TUhb-cirx-Oz9P-mZEY-XoiB-mdLy6v"
> device = "/dev/sda" # Hint only
>
> status = ["ALLOCATABLE"]
> flags = []
> dev_size = 11718885376 # 5.45703 Terabytes
> pe_start = 2048
> pe_count = 5722111 # 5.45703 Terabytes
> }
> }
>
> logical_volumes {
>
> engine_lv {
> id = "P2ScEB-ws3V-iqVv-XTWh-Y2Jh-pg0c-6KnJXC"
> status = ["READ", "WRITE", "VISIBLE"]
> flags = []
> creation_time = 1545495460 # 2018-12-22 10:17:40 -0600
> creation_host = "vmh.cyber-range.lan"
> segment_count = 1
>
> segment1 {
> start_extent = 0
> extent_count = 102400 # 100 Gigabytes
>
> type = "striped"
> stripe_count = 1 # linear
>
> stripes = [
> "pv0", 0
> ]
> }
> }
>
> lvthinpool {
> id = "c0yaNn-DcaB-cYjj-9ZRv-M2Em-rzLW-qY9WpI"
> status = ["READ", "WRITE", "VISIBLE"]
> flags = []
> creation_time = 1545495487 # 2018-12-22 10:18:07 -0600
> creation_host = "vmh.cyber-range.lan"
> segment_count = 1
>
> segment1 {
> start_extent = 0
> extent_count = 512000 # 500 Gigabytes
>
> type = "thin-pool"
> metadata = "lvthinpool_tmeta"
> pool = "lvthinpool_tdata"
> transaction_id = 2
> chunk_size = 2048 # 1024 Kilobytes
> discards = "passdown"
> zero_new_blocks = 1
> }
> }
>
> lv_vmdisks {
> id = "erpXRi-nPUq-mCf2-ga2J-3a0l-OWiC-i0xr8M"
> status = ["READ", "WRITE", "VISIBLE"]
> flags = []
> creation_time = 1545495493 # 2018-12-22 10:18:13 -0600
> creation_host = "vmh.cyber-range.lan"
> segment_count = 1
>
> segment1 {
> start_extent = 0
> extent_count = 4613735 # 4.4 Terabytes
>
> type = "thin"
> thin_pool = "lvthinpool"
> transaction_id = 0
> device_id = 1
> }
> }
>
> lv_datadisks {
> id = "hKim3z-1QCh-dwhU-st2O-t4tG-wIss-UpLMZw"
> status = ["READ", "WRITE", "VISIBLE"]
> flags = []
> creation_time = 1545495526 # 2018-12-22 10:18:46 -0600
> creation_host = "vmh.cyber-range.lan"
> segment_count = 1
>
> segment1 {
> start_extent = 0
> extent_count = 512000 # 500 Gigabytes
>
> type = "thin"
> thin_pool = "lvthinpool"
> transaction_id = 1
> device_id = 2
> }
> }
>
> lvol0_pmspare {
> id = "bHc0eC-Z4Ed-mV47-QTU4-SCvo-FWbE-L8NV7Q"
> status = ["READ", "WRITE"]
> flags = []
> creation_time = 1545495487 # 2018-12-22 10:18:07 -0600
> creation_host = "vmh.cyber-range.lan"
> segment_count = 1
>
> segment1 {
> start_extent = 0
> extent_count = 16192 # 15.8125 Gigabytes
>
> type = "striped"
> stripe_count = 1 # linear
>
> stripes = [
> "pv0", 102400
> ]
> }
> }
>
> lvthinpool_tmeta {
> id = "WBut10-rAOP-FzA7-bJvr-ZdxL-lB70-jzz1Tv"
> status = ["READ", "WRITE"]
> flags = []
> creation_time = 1545495487 # 2018-12-22 10:18:07 -0600
> creation_host = "vmh.cyber-range.lan"
> segment_count = 1
>
> segment1 {
> start_extent = 0
> extent_count = 16192 # 15.8125 Gigabytes
>
> type = "striped"
> stripe_count = 1 # linear
>
> stripes = [
> "pv0", 630592
> ]
> }
> }
>
> lvthinpool_tdata {
> id = "rwNZux-1fz1-dv8J-yN2j-LcES-f6ml-231td5"
> status = ["READ", "WRITE"]
> flags = []
> creation_time = 1545495487 # 2018-12-22 10:18:07 -0600
> creation_host = "vmh.cyber-range.lan"
> segment_count = 1
>
> segment1 {
> start_extent = 0
> extent_count = 512000 # 500 Gigabytes
>
> type = "striped"
> stripe_count = 1 # linear
>
> stripes = [
> "pv0", 118592
> ]
> }
> }
> }
>
> }
>
>
> # cd /var/log
> # grep -ri gluster_vg1-lvthinpool-tpool
> messages-20190922:Sep 15 03:58:15 vmh lvm[14072]: Failed command for gluster_vg1-lvthinpool-tpool.
> messages:Sep 22 03:44:05 vmh lvm[14072]: Failed command for gluster_vg1-lvthinpool-tpool.
> messages-20190908:Sep 1 21:27:14 vmh lvm[14062]: Monitoring thin pool gluster_vg1-lvthinpool-tpool.
> messages-20190908:Sep 1 21:27:24 vmh lvm[14062]: WARNING: Thin pool gluster_vg1-lvthinpool-tpool data is now 100.00% full.
> messages-20190908:Sep 2 00:19:05 vmh lvm[14072]: Monitoring thin pool gluster_vg1-lvthinpool-tpool.
> messages-20190908:Sep 2 00:19:15 vmh lvm[14072]: WARNING: Thin pool gluster_vg1-lvthinpool-tpool data is now 100.00% full.
> messages-20190908:Sep 2 20:16:34 vmh lvm[14072]: Failed command for gluster_vg1-lvthinpool-tpool.
>
Re: Ovirt 4.2.7 won't start and drops to emergency console
by Strahil
Try to get all the data in advance (before deactivating the VG).
I still can't imagine why the VG would disappear. Try 'pvscan --cache' to re-detect the PV.
After all, all the VG info is in the PVs' headers and should be visible whether the VG is deactivated or not.
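For example (read-only checks, so they are safe to run even from the emergency shell):
# pvscan --cache                        # rebuild the PV/lvmetad cache
# pvs -o pv_name,vg_name,pv_uuid        # the PV for gluster_vg1 should show up again
# vgcfgrestore --list gluster_vg1       # list the metadata backups/archives available for the VG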
Best Regards,
Strahil Nikolov
On Oct 2, 2019 02:08, jeremy_tourville(a)hotmail.com wrote:
>
> "lvs -a" does not list the logical volume I am missing.
> "lvdisplay -m /dev/gluster_vg1-lvthinpool-tpool_tmeta" does not work either. Error message is: Volume Group xxx not found. Cannot process volume group xxx."
>
> I am trying to follow the procedure from https://access.redhat.com/solutions/3251681
> I am on step #2. Steps 2a and 2b work without issue. Step 2c gives me an error. Here are the values I am using:
> # lvcreate -L 2G gluster_vg3 --name tmpLV
> # lvchange -ay gluster_vg3/tmpLV
> # lvconvert --thinpool gluster_vg1/lvthinpool --poolmetadata gluster_vg3/tmpLV
>
> VG name mismatch from position arg (gluster_vg1) and option arg (gluster_vg3)
>
> Do I need to create the LV on the same disk that failed? (gluster_vg1)
> Is creating a new LV on (gluster_vg3) ok for this situation?
how to put or conditions in filters in web admin gui
by Gianluca Cecchi
Hello,
environment is oVirt 4.3.6.
Suppose I'm in the Web Admin GUI under Storage --> Disks and I want to display
only the disks matching "pattern1" together with the disks matching "string2"
(an "or" condition), limiting the output to these two conditions. How can I do
it?
I tried some combinations without success.
BTW, the "and" condition also seems not to work.
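For reference, the kind of combination I tried looks like this (illustrative only; I'm not even sure "alias" is the right field to use in the Disks search bar):
Disks: alias = *pattern1* or alias = *string2*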
Thanks,
Gianluca
oVirt Hyperconverged disaster recovery questions
by wodel youchi
Hi,
oVirt Hyperconverged disaster recovery uses geo-replication to replicate the
volume containing the VMs.
What I know about geo-replication is that it is asynchronous.
My questions are:
- How is the replication of the VMs done? Are only the changes synchronized?
- What is the interval of this replication? Can this interval be configured,
taking into consideration the bandwidth of the replication link?
- How can the RPO be measured in the case of geo-replication?
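For context, the kind of session I would be checking and tuning looks like this (placeholders only, not a working setup):
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> status    # session health and last-synced info
# gluster volume geo-replication <mastervol> <slavehost>::<slavevol> config    # current geo-replication options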
Regards.
Re: Cannot enable maintenance mode
by Bruno Martins
Hello guys,
No ideas for this issue?
Thanks for your cooperation!
Kind regards,
-----Original Message-----
From: Bruno Martins <bruno.o.martins(a)gfi.world>
Sent: 29 de setembro de 2019 16:16
To: users(a)ovirt.org
Subject: [ovirt-users] Cannot enable maintenance mode
Hello guys,
I am unable to put a host from a two-node cluster into maintenance mode, in order to remove it from the cluster afterwards.
This is what I see in engine.log:
2019-09-27 16:20:58,364 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-45) [4cc251c9] Correlation ID: 4cc251c9, Job ID: 65731fbb-db34-49a9-ab56-9fba59bc0ee0, Call Stack: null, Custom Event ID: -1, Message: Host CentOS-H1 cannot change into maintenance mode - not all Vms have been migrated successfully. Consider manual intervention: stopping/migrating Vms: Non interactive user (User: admin).
The host has been rebooted multiple times. vdsClient shows no VMs running.
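For reference, the checks I ran on the host were of this kind (illustrative; vdsm-client is the newer equivalent of vdsClient):
# vdsClient -s 0 list table
# vdsm-client Host getVMList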
What else can I do?
Kind regards,
Bruno Martins
Incremental backup using ovirt api
by smidhunraj@gmail.com
Hi,
I tried to take an incremental backup of a VM using this script:
public function downloadDiskIncremental(){
    $data = array();
    $xmlStr = "<disk_attachment>
                 <disk id='1371afdd-91d4-4bc7-9792-0efbf6bbd1c9'>
                   <backup>incremental</backup>
                 </disk>
               </disk_attachment>";
    $curlParam = array(
        "url"    => "vms/4044e014-7e20-4dbc-abe5-64690ec45f63/diskattachments",
        "method" => "POST",
        "data"   => $xmlStr,
    );
    // $curlParam is presumably handed to an internal HTTP helper (not shown in this snippet)
}
But it is throwing me an error:
Array ( [status] => error [message] => For correct usage, see: https://ovirt.bobcares.com/ovirt-engine/api/v4/model#services/disk-attach...
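For what it's worth, my assumption from that error is that the request should be a PUT against the existing disk attachment rather than a POST to the collection. The raw equivalent would be something like this (the engine FQDN and credentials are placeholders):
curl -k -u "admin@internal:PASSWORD" -X PUT \
  -H "Content-Type: application/xml" \
  -d "<disk_attachment><disk><backup>incremental</backup></disk></disk_attachment>" \
  "https://ENGINE_FQDN/ovirt-engine/api/vms/4044e014-7e20-4dbc-abe5-64690ec45f63/diskattachments/1371afdd-91d4-4bc7-9792-0efbf6bbd1c9"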
Please help me with this issue....
Re: Super Low VM disk IO via Shared Storage
by Amit Bawer
On Tue, Oct 1, 2019 at 12:49 PM Vrgotic, Marko <M.Vrgotic(a)activevideo.com>
wrote:
> Thank you very much Amit,
>
>
>
> I hope the result of suggested tests allows us improve the speed for
> specific IO test case as well.
>
>
>
> Apologies for not being more clear, but I was referring to changing mount
> options for storage where SHE also runs. It cannot be put in Maintenance
> mode since the engine is running on it.
> What to do in this case? It's clear that I need to power it down, but where
> can I then change the settings?
>
You can see a similar question about changing the mnt_options of the hosted
engine, and its answer, here [1].
[1] https://lists.ovirt.org/pipermail/users/2018-January/086265.html
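In short, the approach described there is roughly the following (a sketch only; double-check the exact keys with 'hosted-engine --help' before running, and the rsize/wsize values here are placeholders):
# hosted-engine --set-maintenance --mode=global
# hosted-engine --set-shared-config mnt_options "rsize=1048576,wsize=1048576" --type=he_shared
# hosted-engine --set-shared-config mnt_options "rsize=1048576,wsize=1048576" --type=he_local   # repeat on every HE host
# systemctl restart ovirt-ha-agent ovirt-ha-broker                                              # on every HE host
# hosted-engine --set-maintenance --mode=none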
>
>
> Kindly awaiting your reply.
>
>
>
> — — —
> Met vriendelijke groet / Kind regards,
>
> *Marko Vrgotic*
>
>
>
>
>
>
>
> *From: *Amit Bawer <abawer(a)redhat.com>
> *Date: *Saturday, 28 September 2019 at 20:25
> *To: *"Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>
> *Cc: *Tony Brian Albers <tba(a)kb.dk>, "hunter86_bg(a)yahoo.com" <
> hunter86_bg(a)yahoo.com>, "users(a)ovirt.org" <users(a)ovirt.org>
> *Subject: *Re: [ovirt-users] Re: Super Low VM disk IO via Shared Storage
>
>
>
>
>
>
>
> On Fri, Sep 27, 2019 at 4:02 PM Vrgotic, Marko <M.Vrgotic(a)activevideo.com>
> wrote:
>
> Hi oVirt gurus,
>
>
>
> Thanks to Tony, who pointed me toward the discovery process; the performance
> of the IO seems to depend greatly on the flags.
>
>
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=100000
>
> 100000+0 records in
>
> 100000+0 records out
>
> 51200000 bytes (51 MB) copied, 0.108962 s, *470 MB/s*
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=100000
> *oflag=dsync*
>
> 100000+0 records in
>
> 100000+0 records out
>
> 51200000 bytes (51 MB) copied, 322.314 s, *159 kB/s*
>
>
>
> The dsync flag tells dd to bypass all buffers and caches (except certain
> kernel buffers) and to write the data physically to the disk before writing
> further. According to a number of sites I looked at, this is the way to test
> server latency with regard to IO operations. The difference in performance is
> huge, as you can see (below I have added results from tests with 4k and 8k
> blocks)
>
>
>
> Still, a certain software component we run tests with writes data in this or
> a similar way, which is why I got this complaint in the first place.
>
>
>
> Here is my current NFS mount settings:
>
>
> rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.17.28.11,local_lock=none,addr=172.17.28.5
>
>
>
> *If you have any suggestions on possible NFS tuning options, to try to
> increase performance, I would highly appreciate it.*
>
> *Can someone tell me how to change NFS mount options in oVirt for already
> existing/used storage?*
>
>
>
> Taking into account your network configured MTU [1] and Linux version [2],
> you can tune wsize, rsize mount options.
>
> Editing mount options can be done from Storage->Domains->Manage Domain
> menu.
>
>
>
> [1] https://access.redhat.com/solutions/2440411
>
> [2] https://access.redhat.com/solutions/753853
>
>
>
>
>
> Test results with 4096 and 8192 byte size.
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096 count=100000
>
> 100000+0 records in
>
> 100000+0 records out
>
> 409600000 bytes (410 MB) copied, 1.49831 s, *273 MB/s*
>
> [root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096
> count=100000 *oflag=dsync*
>
> 100000+0 records in
>
> 100000+0 records out
>
> 409600000 bytes (410 MB) copied, 349.041 s, *1.2 MB/s*
>
>
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192 count=100000
>
> 100000+0 records in
>
> 100000+0 records out
>
> 819200000 bytes (819 MB) copied, 11.6553 s, *70.3 MB/s*
>
> [root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192
> count=100000 *oflag=dsync*
>
> 100000+0 records in
>
> 100000+0 records out
>
> 819200000 bytes (819 MB) copied, 393.035 s, *2.1 MB/s*
>
>
>
>
>
> *From: *"Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>
> *Date: *Thursday, 26 September 2019 at 09:51
> *To: *Amit Bawer <abawer(a)redhat.com>
> *Cc: *Tony Brian Albers <tba(a)kb.dk>, "hunter86_bg(a)yahoo.com" <
> hunter86_bg(a)yahoo.com>, "users(a)ovirt.org" <users(a)ovirt.org>
> *Subject: *Re: [ovirt-users] Re: Super Low VM disk IO via Shared Storage
>
>
>
> Dear all,
>
>
>
> I very much appreciate all help and suggestions so far.
>
>
>
> Today I will send the test results and current mount settings for NFS4.
> Our production setup is using Netapp based NFS server.
>
>
>
> I am surprised with results from Tony’s test.
>
> We also have one setup with Gluster based NFS, and I will run tests on
> those as well.
>
> Sent from my iPhone
>
>
>
> On 25 Sep 2019, at 14:18, Amit Bawer <abawer(a)redhat.com> wrote:
>
>
>
>
>
> On Wed, Sep 25, 2019 at 2:44 PM Tony Brian Albers <tba(a)kb.dk> wrote:
>
> Guys,
>
> Just for info, this is what I'm getting on a VM that is on shared
> storage via NFSv3:
>
> --------------------------snip----------------------
> [root@proj-000 ~]# time dd if=/dev/zero of=testfile bs=4096
> count=1000000
> 1000000+0 records in
> 1000000+0 records out
> 4096000000 bytes (4.1 GB) copied, 18.0984 s, 226 MB/s
>
> real 0m18.171s
> user 0m1.077s
> sys 0m4.303s
> [root@proj-000 ~]#
> --------------------------snip----------------------
>
> my /etc/exports:
> /data/ovirt
> *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
>
> and output from 'mount' on one of the hosts:
>
> sto-001.kac.lokalnet:/data/ovirt on /rhev/data-center/mnt/sto-
> 001.kac.lokalnet:_data_ovirt type nfs
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,
> nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.16.216
> .41,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=172.
> 16.216.41)
>
>
>
> Worth to compare mount options with the slow shared NFSv4 mount.
>
>
>
> Window size tuning can be found at bottom of [1], although its relating to
> NFSv3, it could be relevant to v4 as well.
>
> [1] https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html
>
>
>
>
> connected via single 10gbit ethernet. Storage on NFS server is 8 x 4TB
> SATA disks in RAID10. NFS server is running CentOS 7.6.
>
> Maybe you can get some inspiration from this.
>
> /tony
>
>
>
> On Wed, 2019-09-25 at 09:59 +0000, Vrgotic, Marko wrote:
> > Dear Strahil, Amit,
> >
> > Thank you for the suggestion.
> > Test result with block size 4096:
> > Network storage:
> > avshared:
> > [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> > count=100000 oflag=dsync
> > 100000+0 records in
> > 100000+0 records out
> > 409600000 bytes (410 MB) copied, 275.522 s, 1.5 MB/s
> >
> > Local storage:
> >
> > avlocal2:
> > [root@mpollocalcheck22 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> > count=100000 oflag=dsync
> > 100000+0 records in
> > 100000+0 records out
> > 409600000 bytes (410 MB) copied, 53.093 s, 7.7 MB/s
> > 10:38
> > avlocal3:
> > [root@mpollocalcheck3 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> > count=100000 oflag=dsync
> > 100000+0 records in
> > 100000+0 records out
> > 409600000 bytes (410 MB) copied, 46.0392 s, 8.9 MB/s
> >
> > As Amit suggested, I am also going to execute same tests on the
> > BareMetals and between BareMetal and NFS to compare results.
> >
> >
> > — — —
> > Met vriendelijke groet / Kind regards,
> >
> > Marko Vrgotic
> >
> >
> >
> >
> > From: Strahil <hunter86_bg(a)yahoo.com>
> > Date: Tuesday, 24 September 2019 at 19:10
> > To: "Vrgotic, Marko" <M.Vrgotic(a)activevideo.com>, Amit <abawer@redhat
> > .com>
> > Cc: users <users(a)ovirt.org>
> > Subject: Re: [ovirt-users] Re: Super Low VM disk IO via Shared
> > Storage
> >
> > Why don't you try with 4096 ?
> > Most block devices have a blcok size of 4096 and anything bellow is
> > slowing them down.
> > Best Regards,
> > Strahil Nikolov
> > On Sep 24, 2019 17:40, Amit Bawer <abawer(a)redhat.com> wrote:
> > have you reproduced performance issue when checking this directly
> > with the shared storage mount, outside the VMs?
> >
> > On Tue, Sep 24, 2019 at 4:53 PM Vrgotic, Marko <M.Vrgotic@activevideo
> > .com> wrote:
> > Dear oVirt,
> >
> > I have executed some tests regarding IO disk speed on the VMs,
> > running on shared storage and local storage in oVirt.
> >
> > Results of the tests on local storage domains:
> > avlocal2:
> > [root@mpollocalcheck22 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512
> > count=100000 oflag=dsync
> > 100000+0 records in
> > 100000+0 records out
> > 51200000 bytes (51 MB) copied, 45.9756 s, 1.1 MB/s
> >
> > avlocal3:
> > [root@mpollocalcheck3 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512
> > count=100000 oflag=dsync
> > 100000+0 records in
> > 100000+0 records out
> > 51200000 bytes (51 MB) copied, 43.6179 s, 1.2 MB/s
> >
> > Results of the test on shared storage domain:
> > avshared:
> > [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512
> > count=100000 oflag=dsync
> > 100000+0 records in
> > 100000+0 records out
> > 51200000 bytes (51 MB) copied, 283.499 s, 181 kB/s
> >
> > Why is it so low? Is there anything I can do to tune, configure VDSM
> > or other service to speed this up?
> > Any advice is appreciated.
> >
> > Shared storage is based on Netapp with 20Gbps LACP path from
> > Hypervisor to Netapp volume, and set to MTU 9000. Used protocol is
> > NFS4.0.
> > oVirt is 4.3.4.3 SHE.
> >
> >
> --
> Tony Albers - Systems Architect - IT Development
> Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark
> Tel: +45 2566 2383 - CVR/SE: 2898 8842 - EAN: 5798000792142
>
>
Re: [ANN] oVirt 4.3.6 is now generally available
by Strahil
You can go with 512 emulation, and later you can recreate the brick without that emulation (if there are benefits to doing so).
After all, your gluster volume is either a replica 2 arbiter 1 or a replica 3 volume.
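For example, when creating the VDO device by hand it's just a flag (the name and device here are illustrative):
# vdo create --name=vdo_gluster --device=/dev/sdX --emulate512=enabled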
Best Regards,
Strahil Nikolov
On Oct 1, 2019 09:26, Satheesaran Sundaramoorthi <sasundar(a)redhat.com> wrote:
>
> On Tue, Oct 1, 2019 at 11:27 AM Guillaume Pavese <guillaume.pavese(a)interactiv-group.com> wrote:
>>
>> Hi all,
>>
>> Sorry for asking again :/
>>
>> Is there any consensus on not using --emulate512 anymore while creating VDO volumes on Gluster?
>> Since this parameter can not be changed once the volume is created and we are nearing production setup. I would really like to have an official advice on this.
>>
>> Best,
>>
>> Guillaume Pavese
>> Ingénieur Système et Réseau
>> Interactiv-Group
>>
> Hello Guillaume Pavese,
> If you are not using --emulate512 for the VDO volume, then the VDO volume will be created
> as a 4K native volume (with a 4K block size).
>
> There are a couple of things that are of concern here:
> 1. 4K Native device support requires fixes at QEMU that will be part of
> CentOS 7.7.2 ( not yet available )
> 2. 4K Native support with VDO volumes on Gluster is not yet validated
> thoroughly.
>
> Based on the above items, it would be better you have emulate512=on or delay
> your production setup ( if possible, till above both items are addressed )
> to make use of 4K VDO volume.
>
> @Sahina Bose Do you have any other suggestions ?
>
> -- Satheesaran S (sas)
>>
>>
>> On Fri, Sep 27, 2019 at 3:19 PM Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
>>>
>>>
>>>