[ovirt-users] Storage domain / new harddisk overwrites existing harddisk

Yaniv Dary ydary at redhat.com
Sun Aug 30 11:34:11 UTC 2015


Can you please add logs?

Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
        8272306
Email: ydary at redhat.com
IRC : ydary


On Tue, Aug 25, 2015 at 6:43 PM, Bernhard Krieger <bk at noremorze.at> wrote:

> Hello,
>
> after adding a new vm/hard disk, the existing disk of a running vm was
> overwritten.
> I do not know why this happened, so I need your help.
>
>
>
> * ovirt-engine
> - OS: CentOS 7
> - packages:
> ovirt-engine-sdk-python-3.5.2.1-1.el7.centos.noarch
> ovirt-engine-cli-3.5.0.6-1.el7.noarch
> ovirt-engine-lib-3.5.3.1-1.el7.centos.noarch
> ovirt-engine-jboss-as-7.1.1-1.el7.x86_64
> ovirt-engine-setup-plugin-websocket-proxy-3.5.3.1-1.el7.centos.noarch
> ovirt-engine-tools-3.5.3.1-1.el7.centos.noarch
> ovirt-engine-restapi-3.5.3.1-1.el7.centos.noarch
> ovirt-image-uploader-3.5.1-1.el7.centos.noarch
> ovirt-host-deploy-java-1.3.1-1.el7.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-3.5.3.1-1.el7.centos.noarch
> ovirt-engine-webadmin-portal-3.5.3.1-1.el7.centos.noarch
> ovirt-engine-userportal-3.5.3.1-1.el7.centos.noarch
> ovirt-iso-uploader-3.5.2-1.el7.centos.noarch
> ovirt-engine-extensions-api-impl-3.5.3.1-1.el7.centos.noarch
> ovirt-engine-dbscripts-3.5.3.1-1.el7.centos.noarch
> ovirt-engine-setup-plugin-ovirt-engine-3.5.3.1-1.el7.centos.noarch
> ovirt-engine-backend-3.5.3.1-1.el7.centos.noarch
> ovirt-host-deploy-1.3.1-1.el7.noarch
> ovirt-engine-setup-base-3.5.3.1-1.el7.centos.noarch
> ovirt-engine-websocket-proxy-3.5.3.1-1.el7.centos.noarch
> ovirt-engine-setup-3.5.3.1-1.el7.centos.noarch
> ovirt-engine-3.5.3.1-1.el7.centos.noarch
> ovirt-engine-extension-aaa-ldap-1.0.2-1.el7.noarch
> ovirt-release35-005-1.noarch
>
>
> * 4 ovirt hosts,  risn-ovirt0 to risn-ovirt3
>
> - OS: CentOS 7
> - packages:
> ovirt-release35-005-1.noarch
> vdsm-xmlrpc-4.16.20-0.el7.centos.noarch
> vdsm-4.16.20-0.el7.centos.x86_64
> vdsm-jsonrpc-4.16.20-0.el7.centos.noarch
> vdsm-yajsonrpc-4.16.20-0.el7.centos.noarch
> vdsm-cli-4.16.20-0.el7.centos.noarch
> vdsm-python-zombiereaper-4.16.20-0.el7.centos.noarch
> vdsm-python-4.16.20-0.el7.centos.noarch
>
>
> * 1 storage domain called space0.dc1 (Data Fibre Channel)
> This storage domain is attached and accessible to all ovirt hosts.
>
> Some details of space0.dc1:
>
> [root@]# multipath -ll
> 360060e80101e2500058be22000000bb8 dm-2 HITACHI ,DF600F
> size=550G features='0' hwhandler='0' wp=rw
> |-+- policy='service-time 0' prio=1 status=active
> | |- 2:0:1:1 sdh 8:112  active ready  running
> | `- 3:0:1:1 sdt 65:48  active ready  running
> `-+- policy='service-time 0' prio=0 status=enabled
>   |- 2:0:0:1 sdd 8:48   active ready  running
>   `- 3:0:0:1 sdp 8:240  active ready  running
>
>   --- Volume group ---
>   VG Name               bb5b6729-c654-4ebe-ba96-8ddc96154595
>   System ID
>   Format                lvm2
>   Metadata Areas        2
>   Metadata Sequence No  82
>   VG Access             read/write
>   VG Status             resizable
>   MAX LV                0
>   Cur LV                14
>   Open LV               3
>   Max PV                0
>   Cur PV                1
>   Act PV                1
>   VG Size               549.62 GiB
>   PE Size               128.00 MiB
>   Total PE              4397
>   Alloc PE / Size       1401 / 175.12 GiB
>   Free  PE / Size       2996 / 374.50 GiB
>   VG UUID               cE1V2A-ghtI-PldK-UNKH-d83q-tKg0-BdTZdy
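>
> The VG numbers above are internally consistent; the size lines are just PE
> counts times the 128 MiB extent size. A quick cross-check (plain arithmetic,
> nothing read from the hosts):

```shell
# vgdisplay sanity check: Total PE x PE Size should give the VG Size.
# 4397 extents x 128 MiB = 562816 MiB; in GiB that is:
vg_gib=$(awk 'BEGIN { printf "%.3f", 4397 * 128 / 1024 }')
echo "$vg_gib GiB"   # 549.625 GiB, shown rounded as 549.62 GiB above
```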
>
> * vm elvira (existing server)
> OS: Linux
> Disk size: 30GB
> harddisk: space0.dc1
> Image id: c11f91e2-ebec-41ee-b3b4-ceb013a58743
>
> * vm betriebsserver (new one)
> OS: Windows
> disk size: 300GB
> harddisk: space0.dc1
> image id: e19c6c85-cfa6-4350-9a01-48d007f6f934
>
>
> I did the following steps:
>
> * extended the storage domain to 550 GB on our storage system.
>
> I then executed the following commands on every ovirt host:
> -  for letter in {a..z} ; do echo 1 >
> /sys/block/sd${letter}/device/rescan; done
> -  multipathd resize map 360060e80101e2500058be22000000bb8
> -  pvresize /dev/mapper/360060e80101e2500058be22000000bb8
>
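> The three commands above can be written as one dry-run sketch (commands are
> only echoed, never executed; the map name is the one from multipath -ll
> above):

```shell
# Dry-run sketch of the per-host grow sequence described in this post.
# run() only echoes the command, so nothing touches real devices;
# replace its body with "$@" to execute for real.
MAP=360060e80101e2500058be22000000bb8
run() { echo "+ $*"; }

# 1) have the kernel rescan each SCSI disk so it sees the grown LUN
for letter in a b c; do              # {a..z} in the original commands
  run sh -c "echo 1 > /sys/block/sd${letter}/device/rescan"
done
# 2) resize the multipath map on top of the rescanned paths
run multipathd resize map "$MAP"
# 3) grow the LVM physical volume to fill the enlarged map
run pvresize "/dev/mapper/$MAP"
```

> (One hedge worth noting: on a storage domain shared by several hosts, LVM
> metadata operations are normally driven from a single host (the SPM in
> oVirt), so whether pvresize should be run on every host at all is exactly
> the kind of thing the requested logs would help clarify.)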
> * After that I created a new server called "betriebsserver"
>
> * added a new 300 GB hard disk and attached it to betriebsserver
>
> * installed the Windows OS.
>
> * At 13:39 I rebooted another vm called "elvira", but the server would not
> come up because its hard disk was missing.
>
> Log file of risn-ovirt3, where elvira was running:
> Thread-449591::ERROR::2015-08-25
> 13:39:12,908::task::866::Storage.TaskManager.Task::(_setError)
> Task=`4b84a935-276e-441c-8c0b-3ddb809ce853`::Unexpected error
> Thread-449591::ERROR::2015-08-25
> 13:39:12,911::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': "Volume does not exist:
> ('c11f91e2-ebec-41ee-b3b4-ceb013a58743',)", 'code': 201}}
>
> The server was unable to boot due to the missing hard disk.
> I tried it on every ovirt host, but without success.
>
>
> * I checked whether the LV exists on risn-ovirt3:
> [root at risn-ovirt3 vdsm]#  find / -name
> "*c11f91e2-ebec-41ee-b3b4-ceb013a58743*"
>
> /dev/bb5b6729-c654-4ebe-ba96-8ddc96154595/c11f91e2-ebec-41ee-b3b4-ceb013a58743
>
> /run/vdsm/storage/bb5b6729-c654-4ebe-ba96-8ddc96154595/d0299b2b-5c82-45d3-b45f-f23c7d89ed5f/c11f91e2-ebec-41ee-b3b4-ceb013a58743
>
> /run/udev/links/\x2fbb5b6729-c654-4ebe-ba96-8ddc96154595\x2fc11f91e2-ebec-41ee-b3b4-ceb013a58743
>
> * For testing purposes I copied the disk
> /dev/bb5b6729-c654-4ebe-ba96-8ddc96154595/c11f91e2-ebec-41ee-b3b4-ceb013a58743
> to a new test server (dd and netcat).
> When I started this test server, it booted into the Windows OS instead
> of Linux.
> So the hard disk from betriebsserver has overwritten the hard disk
> from elvira.
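>
> The dd-and-netcat copy can be sketched as below, here on scratch files so it
> is safe to run, with the network form shown in comments (the LV path, host
> name, and port in those comments are placeholders):

```shell
# Local stand-in for the dd/netcat transfer used for the test server.
# The network form would be roughly:
#   receiver: nc -l 1234 | dd of=/dev/VG/LV bs=1M
#   sender:   dd if=/dev/VG/LV bs=1M | nc receiver-host 1234
# (VG/LV, host name, and port are placeholders, not taken from this thread.)
SRC=$(mktemp); DST=$(mktemp)
dd if=/dev/urandom of="$SRC" bs=64K count=4 status=none   # stand-in source disk
dd if="$SRC" of="$DST" bs=64K status=none                 # block-for-block copy
cmp -s "$SRC" "$DST" && echo "copies identical"
```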
>
> * Output from risn-ovirt3:
> The id c11f91e2--ebec--41ee--b3b4--ceb013a58743 is used by elvira, but the
> partitions are the same as on the betriebsserver.
>
> Disk
> /dev/mapper/bb5b6729--c654--4ebe--ba96--8ddc96154595-c11f91e2--ebec--41ee--b3b4--ceb013a58743:
> 32.2 GB, 32212254720 bytes, 62914560 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk label type: dos
> Disk identifier: 0xbb481efc
>
>
> Device Boot      Start         End      Blocks   Id  System
> /dev/mapper/bb5b6729--c654--4ebe--ba96--8ddc96154595-c11f91e2--ebec--41ee--b3b4--ceb013a58743p1
> 2048      514047      256000    7  HPFS/NTFS/exFAT
> /dev/mapper/bb5b6729--c654--4ebe--ba96--8ddc96154595-c11f91e2--ebec--41ee--b3b4--ceb013a58743p2
> 514048      718847      102400    6  FAT16
> /dev/mapper/bb5b6729--c654--4ebe--ba96--8ddc96154595-c11f91e2--ebec--41ee--b3b4--ceb013a58743p3
> 718848      980991      131072    6  FAT16
> /dev/mapper/bb5b6729--c654--4ebe--ba96--8ddc96154595-c11f91e2--ebec--41ee--b3b4--ceb013a58743p4
> *      980992   629143551   314081280    7  HPFS/NTFS/exFAT
>
> Disk
> /dev/mapper/9c015850--06ed--4e14--9b36--e5b1d371b6f6-a98c818c--03d6--44ab--93b9--76991a97f37c:
> 32.2 GB, 32212254720 bytes, 62914560 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk label type: dos
> Disk identifier: 0xbb481efc
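>
> The identical "Disk identifier" lines are telling: for a DOS label this is
> the 32-bit signature stored little-endian at bytes 440-443 of the first
> sector, so two disks reporting 0xbb481efc have byte-identical MBR
> signatures. A sketch reading it back, on a scratch file standing in for the
> mapper device:

```shell
# Read the DOS disk identifier the way fdisk reports it: 4 bytes at
# offset 440 of sector 0, printed least-significant byte first.
IMG=$(mktemp)   # scratch file standing in for /dev/mapper/...
# plant the bytes fc 1e 48 bb (octal escapes for portability) at offset 440
printf '\374\036\110\273' | dd of="$IMG" bs=1 seek=440 conv=notrunc status=none
id=$(od -An -tx1 -j440 -N4 "$IMG" | awk '{ printf "0x%s%s%s%s", $4, $3, $2, $1 }')
echo "$id"   # 0xbb481efc, the identifier both disks above report
```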
>
>
> * Output from risn-ovirt0, which is running the vm "betriebsserver":
>
> Disk
> /dev/mapper/bb5b6729--c654--4ebe--ba96--8ddc96154595-e19c6c85--cfa6--4350--9a01--48d007f6f934:
> 322.1 GB, 322122547200 bytes, 629145600 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk label type: dos
> Disk identifier: 0xbb481efc
>
>
> Device Boot      Start         End      Blocks   Id  System
> /dev/mapper/bb5b6729--c654--4ebe--ba96--8ddc96154595-e19c6c85--cfa6--4350--9a01--48d007f6f934p1
> 2048      514047      256000    7  HPFS/NTFS/exFAT
> /dev/mapper/bb5b6729--c654--4ebe--ba96--8ddc96154595-e19c6c85--cfa6--4350--9a01--48d007f6f934p2
> 514048      718847      102400    6  FAT16
> /dev/mapper/bb5b6729--c654--4ebe--ba96--8ddc96154595-e19c6c85--cfa6--4350--9a01--48d007f6f934p3
> 718848      980991      131072    6  FAT16
> /dev/mapper/bb5b6729--c654--4ebe--ba96--8ddc96154595-e19c6c85--cfa6--4350--9a01--48d007f6f934p4
> *      980992   629143551   314081280    7  HPFS/NTFS/exFAT
>
> Disk
> /dev/mapper/bb5b6729--c654--4ebe--ba96--8ddc96154595-a6def31d--9e7b--4ff4--b983--e26a495e0ce4:
> 2147 MB, 2147483648 bytes, 4194304 sectors
> Units = sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk label type: dos
> Disk identifier: 0x148e5b3c
>
>
> Does anybody have an idea why this happened?
> Did I make a mistake when I extended the storage size?
> I can also provide the log files.
>
>
> Thx in advance
>
> regards
> Bernhard
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>