restapi
by qinglong.dong@horebdata.cn
Hi, all
I have a Cloud Foundry environment. I have noticed that Cloud Foundry can
schedule virtual machines created by VMware or OpenStack using their REST
APIs. oVirt supports a REST API, too. So I want to know if it is possible
to schedule virtual machines created by oVirt using Cloud Foundry. Can
anyone help? Thanks!
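For context, a minimal sketch of the kind of REST calls an external
scheduler would issue against oVirt (the engine hostname, credentials, and
VM id below are placeholders, and -k skips TLS verification for brevity):
# List VMs, then start one, via the oVirt REST API (a hedged sketch).
curl -k -u 'admin@internal:password' \
    'https://engine.example.com/ovirt-engine/api/vms'
curl -k -u 'admin@internal:password' -H 'Content-Type: application/xml' \
    -d '<action/>' \
    'https://engine.example.com/ovirt-engine/api/vms/VM_ID/start'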
7 years, 8 months
Replicated Glusterfs on top of ZFS
by Arman Khalatyan
Hi,
I use 3 nodes with ZFS and GlusterFS.
Are there any suggestions to optimize it?
Host ZFS config (4 TB HDD + 250 GB SSD):
[root@clei22 ~]# zpool status
  pool: zclei22
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Feb 28 14:16:07 2017
config:

        NAME                                      STATE     READ WRITE CKSUM
        zclei22                                   ONLINE       0     0     0
          HGST_HUS724040ALA640_PN2334PBJ4SV6T1    ONLINE       0     0     0
        logs
          lv_slog                                 ONLINE       0     0     0
        cache
          lv_cache                                ONLINE       0     0     0

errors: No known data errors
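For context, a pool with this layout could be assembled roughly as follows
(a sketch; the log and cache devices are the LVs shown above, and the
by-id path is an assumption):
zpool create zclei22 /dev/disk/by-id/HGST_HUS724040ALA640_PN2334PBJ4SV6T1
zpool add zclei22 log /dev/mapper/lv_slog      # SSD-backed separate intent log (SLOG)
zpool add zclei22 cache /dev/mapper/lv_cache   # SSD-backed L2ARC read cache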
Name: GluReplica
Volume ID: ee686dfe-203a-4caa-a691-26353460cc48
Volume Type: Replicate (Arbiter)
Replica Count: 2 + 1
Number of Bricks: 3
Transport Types: TCP, RDMA
Maximum no. of snapshots: 256
Capacity: 3.51 TiB total, 190.56 GiB used, 3.33 TiB free
7 years, 8 months
Re: [ovirt-users] Storage balancer project
by nicolas@devels.es
Hi,
I added the LICENCE file [1] to the project.
Regards.
[1]:
https://github.com/nkovacne/ovirt-storage-balancer/blob/master/LICENCE
On 2017-03-07 06:36, Yaniv Kaul wrote:
> On Mon, Mar 6, 2017, 10:18 PM <nicolas(a)devels.es> wrote:
>
>> Hi,
>>
>> I implemented a storage domain balancer [1], i.e. a script that
>> tries to keep all storage domains under a defined threshold. We use
>> our oVirt infrastructure for teaching, so we have several hundred
>> users and ca. 800 VMs at this time; something like this is really
>> needed, considering that when students create their VMs they tend
>> to create their disks under the same storage domain (normally the
>> first available one). This results in one or two heavily used SDs
>> and the rest underused.
>>
>> I tried to keep it as simple as possible and it doesn't have many
>> features, but it seems to work (I tried it under our conditions:
>> about 10 SDs, relatively highly used). If someone wants to improve
>> something on it, pull requests are welcome.
>>
>> Might also be an inspiration for [2].
>>
>> Regards.
>>
>> [1]: https://github.com/nkovacne/ovirt-storage-balancer [1]
>
> Can you clarify the license?
> Y.
>
>> [2]: https://bugzilla.redhat.com/show_bug.cgi?id=1331544 [2]
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users [3]
>
>
> Links:
> ------
> [1] https://github.com/nkovacne/ovirt-storage-balancer
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1331544
> [3] http://lists.ovirt.org/mailman/listinfo/users
7 years, 8 months
Storage balancer project
by nicolas@devels.es
Hi,
I implemented a storage domain balancer [1], i.e. a script that tries to
keep all storage domains under a defined threshold. We use our oVirt
infrastructure for teaching, so we have several hundred users and ca.
800 VMs at this time; something like this is really needed, considering
that when students create their VMs they tend to create their disks
under the same storage domain (normally the first available one). This
results in one or two heavily used SDs and the rest underused.
I tried to keep it as simple as possible and it doesn't have many
features, but it seems to work (I tried it under our conditions: about
10 SDs, relatively highly used). If someone wants to improve something
on it, pull requests are welcome.
Might also be an inspiration for [2].
Regards.
[1]: https://github.com/nkovacne/ovirt-storage-balancer
[2]: https://bugzilla.redhat.com/show_bug.cgi?id=1331544
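For readers curious about the data such a balancer works from: the
engine's REST API reports per-domain usage. A hedged sketch (hostname and
credentials are placeholders; the grep is only for eyeballing the XML):
curl -k -u 'admin@internal:password' -H 'Accept: application/xml' \
    'https://engine.example.com/ovirt-engine/api/storagedomains' \
    | grep -E '<name>|<used>|<available>'   # bytes used/free per domain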
7 years, 8 months
[ANN] oVirt 4.1.1 First Release Candidate
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the First
Release Candidate of oVirt 4.1.1 for testing, as of March 3rd, 2017.
This is pre-release software. Please take a look at our community page[1]
to know how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.
This update is the first release candidate of the first in a series of
stabilization updates to the 4.1 series. 4.1.1 brings 26 enhancements
and more than 350 bugfixes, including more than 130 high- or
urgent-severity fixes, on top of the oVirt 4.1 series.
This release is available now for:
* Fedora 24 (tech preview)
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* Fedora 24 (tech preview)
See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Live has already been built [4]
- oVirt Node and Appliance have not been built for this release due to
build issues we're still investigating.
Additional Resources:
* Read more about the oVirt 4.1.1 release highlights:
http://www.ovirt.org/release/4.1.1/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.1.1/
[4] http://resources.ovirt.org/pub/ovirt-4.1-pre/iso/
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
7 years, 8 months
Re: [ovirt-users] [Gluster-users] How to force glusterfs to use RDMA?
by Arman Khalatyan
Dear Deepak, thank you for the hints. Which Gluster version are you using?
As you can see from my previous email, the RDMA connection was tested with
qperf and works as expected. In my case the clients are servers as well:
they are the hosts for oVirt. Disabling SELinux is not recommended by
oVirt, but I will give it a try.
On 03.03.2017 at 7:50 AM, "Deepak Naidu" <dnaidu(a)nvidia.com> wrote:
I have been testing glusterfs over RDMA and below is the command I use.
Reading the logs, it looks like your IB (InfiniBand) device is not being
initialized. I am not sure whether the issue is on the client IB or the
storage server IB. Also, have you configured your IB devices correctly? I
am using IPoIB.
Can you check your firewall and disable SELinux? I think you might have
checked these already (see the sketch after the mount command below).
mount -t glusterfs -o transport=rdma storageN1:/vol0 /mnt/vol0
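Quick ways to run those checks (a sketch, assuming firewalld and the stock
SELinux tools):
getenforce                  # current SELinux mode
setenforce 0                # switch to permissive, for testing only
firewall-cmd --list-all     # services/ports currently allowed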
- The error below appears if you have an issue starting your volume. I hit
it when my transport was set to tcp,rdma and had to force start the
volume; with the transport set only to tcp, the volume started easily.
[2017-03-02 11:49:47.829391] E [MSGID: 114022] [client.c:2530:client_init_rpc] 0-GluReplica-client-2: failed to initialize RPC
[2017-03-02 11:49:47.829413] E [MSGID: 101019] [xlator.c:433:xlator_init] 0-GluReplica-client-2: Initialization of volume 'GluReplica-client-2' failed, review your volfile again
[2017-03-02 11:49:47.829425] E [MSGID: 101066] [graph.c:324:glusterfs_graph_init] 0-GluReplica-client-2: initializing translator failed
[2017-03-02 11:49:47.829436] E [MSGID: 101176] [graph.c:673:glusterfs_graph_activate] 0-graph: init failed
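The force start described above would look roughly like this (a sketch):
gluster volume stop GluReplica
gluster volume start GluReplica force   # start even when one transport (e.g. rdma) fails to come up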
- The error below appears if there is an issue with the IB device, e.g. if
it is not configured properly.
[2017-03-02 11:49:47.828996] W [MSGID: 103071] [rdma.c:4589:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
[2017-03-02 11:49:47.829067] W [MSGID: 103055] [rdma.c:4896:init] 0-GluReplica-client-2: Failed to initialize IB Device
[2017-03-02 11:49:47.829080] W [rpc-transport.c:354:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
--
Deepak
From: gluster-users-bounces(a)gluster.org [mailto:gluster-users-bounces(a)gluster.org] On Behalf Of Sahina Bose
Sent: Thursday, March 02, 2017 10:26 PM
To: Arman Khalatyan; gluster-users(a)gluster.org; Rafi Kavungal Chundattu Parambil
Cc: users
Subject: Re: [Gluster-users] [ovirt-users] How to force glusterfs to use RDMA?
[Adding gluster users to help with error]
[2017-03-02 11:49:47.828996] W [MSGID: 103071] [rdma.c:4589:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
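That error usually means no RDMA device was visible to the process. A few
quick checks (a sketch, assuming the libibverbs-utils and infiniband-diags
packages are installed):
ibv_devices             # RDMA devices visible through libibverbs
ibstat                  # HCA port state, link layer and rate
lsmod | grep rdma_cm    # is the rdma_cm kernel module loaded?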
On Thu, Mar 2, 2017 at 5:36 PM, Arman Khalatyan <arm2arm(a)gmail.com> wrote:
BTW RDMA is working as expected:
[root@clei26 ~]# qperf clei22.vib tcp_bw tcp_lat
tcp_bw:
bw = 475 MB/sec
tcp_lat:
latency = 52.8 us
[root@clei26 ~]#
thank you beforehand.
Arman.
On Thu, Mar 2, 2017 at 12:54 PM, Arman Khalatyan <arm2arm(a)gmail.com> wrote:
just for reference:
gluster volume info
Volume Name: GluReplica
Type: Replicate
Volume ID: ee686dfe-203a-4caa-a691-26353460cc48
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp,rdma
Bricks:
Brick1: 10.10.10.44:/zclei22/01/glu
Brick2: 10.10.10.42:/zclei21/01/glu
Brick3: 10.10.10.41:/zclei26/01/glu (arbiter)
Options Reconfigured:
network.ping-timeout: 30
server.allow-insecure: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.data-self-heal-algorithm: full
features.shard: on
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
nfs.disable: on
[root@clei21 ~]# gluster volume status
Status of volume: GluReplica
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.10.10.44:/zclei22/01/glu           49158     49159      Y       15870
Brick 10.10.10.42:/zclei21/01/glu           49156     49157      Y       17473
Brick 10.10.10.41:/zclei26/01/glu           49153     49154      Y       18897
Self-heal Daemon on localhost               N/A       N/A        Y       17502
Self-heal Daemon on 10.10.10.41             N/A       N/A        Y       13353
Self-heal Daemon on 10.10.10.44             N/A       N/A        Y       32745

Task Status of Volume GluReplica
------------------------------------------------------------------------------
There are no active volume tasks
On Thu, Mar 2, 2017 at 12:52 PM, Arman Khalatyan <arm2arm(a)gmail.com> wrote:
I am not able to mount with RDMA from the CLI. Are there some volfile
parameters that need to be tuned?
/usr/bin/mount -t glusterfs -o backup-volfile-servers=10.10.10.44:10.10.10.42:10.10.10.41,transport=rdma 10.10.10.44:/GluReplica /mnt
[2017-03-02 11:49:47.795511] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.9 (args: /usr/sbin/glusterfs --volfile-server=10.10.10.44 --volfile-server=10.10.10.44 --volfile-server=10.10.10.42 --volfile-server=10.10.10.41 --volfile-server-transport=rdma --volfile-id=/GluReplica.rdma /mnt)
[2017-03-02 11:49:47.812699] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-03-02 11:49:47.825210] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2017-03-02 11:49:47.828996] W [MSGID: 103071] [rdma.c:4589:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device]
[2017-03-02 11:49:47.829067] W [MSGID: 103055] [rdma.c:4896:init] 0-GluReplica-client-2: Failed to initialize IB Device
[2017-03-02 11:49:47.829080] W [rpc-transport.c:354:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2017-03-02 11:49:47.829272] W [rpc-clnt.c:1070:rpc_clnt_connection_init] 0-GluReplica-client-2: loading of new rpc-transport failed
[2017-03-02 11:49:47.829325] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-GluReplica-client-2: size=588 max=0 total=0
[2017-03-02 11:49:47.829371] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy] 0-GluReplica-client-2: size=124 max=0 total=0
[2017-03-02 11:49:47.829391] E [MSGID: 114022] [client.c:2530:client_init_rpc] 0-GluReplica-client-2: failed to initialize RPC
[2017-03-02 11:49:47.829413] E [MSGID: 101019] [xlator.c:433:xlator_init] 0-GluReplica-client-2: Initialization of volume 'GluReplica-client-2' failed, review your volfile again
[2017-03-02 11:49:47.829425] E [MSGID: 101066] [graph.c:324:glusterfs_graph_init] 0-GluReplica-client-2: initializing translator failed
[2017-03-02 11:49:47.829436] E [MSGID: 101176] [graph.c:673:glusterfs_graph_activate] 0-graph: init failed
[2017-03-02 11:49:47.830003] W [glusterfsd.c:1327:cleanup_and_exit] (-->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x3c1) [0x7f524c9dbeb1] -->/usr/sbin/glusterfs(glusterfs_process_volfp+0x172) [0x7f524c9d65d2] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f524c9d5b4b] ) 0-: received signum (1), shutting down
[2017-03-02 11:49:47.830053] I [fuse-bridge.c:5794:fini] 0-fuse: Unmounting '/mnt'.
[2017-03-02 11:49:47.831014] W [glusterfsd.c:1327:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f524b343dc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f524c9d5cd5] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f524c9d5b4b] ) 0-: received signum (15), shutting down
[2017-03-02 11:49:47.831014] W [glusterfsd.c:1327:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f524b343dc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f524c9d5cd5] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f524c9d5b4b] ) 0-: received signum (15), shutting down
On Thu, Mar 2, 2017 at 12:11 PM, Sahina Bose <sabose(a)redhat.com> wrote:
You will need to pass additional mount options when creating the storage
domain (transport=rdma).
Please let us know if this works.
On Thu, Mar 2, 2017 at 2:42 PM, Arman Khalatyan <arm2arm(a)gmail.com> wrote:
Hi,
Is there a way to force the connections over RDMA only?
If I check the host mounts I cannot see an rdma mount option:
mount -l | grep gluster
10.10.10.44:/GluReplica on /rhev/data-center/mnt/glusterSD/10.10.10.44:_GluReplica type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
I have glusterized 3 nodes:
GluReplica
Volume ID: ee686dfe-203a-4caa-a691-26353460cc48
Volume Type: Replicate (Arbiter)
Replica Count: 2 + 1
Number of Bricks: 3
Transport Types: TCP, RDMA
Maximum no. of snapshots: 256
Capacity: 3.51 TiB total, 190.56 GiB used, 3.33 TiB free
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
7 years, 8 months
oVirt Nested Virtualization
by Luca 'remix_tj' Lorenzetto
Hello,
we're installing a small oVirt cluster for hosting some test
environments at work. We're interested in testing VMs that run
hypervisors (e.g. oVirt itself or a small OpenStack deployment).
Is there any documentation showing how to enable nested virtualization
in oVirt? I've seen that while adding a host to the cluster there is the
option to flag "Nested Virtualization", but I didn't understand whether
other packages and configs are required.
Our environment is running the latest oVirt 4.1, installed with
ovirt-node-ng.
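For reference, nested virtualization ultimately comes down to the
kvm_intel/kvm_amd module parameter on each host. A minimal sketch for an
Intel host (whether the cluster flag takes care of this on ovirt-node-ng
is exactly the open question here):
cat /sys/module/kvm_intel/parameters/nested        # 'Y' or '1' when enabled
echo 'options kvm-intel nested=1' > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel && modprobe kvm_intel        # reload; requires no running VMs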
Luca
--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca(a)gmail.com>
7 years, 8 months
VM Permissions (3.6)
by Alexis HAUSER
Hi, I'm trying to figure out how to manage VM permissions with oVirt.
From what I've understood, if you add a user to a user role in the system
preferences, this user can access every VM and resource on the cluster
with the associated permissions; right?
Now, if I want to control who has access to each VM, I mustn't add the
user to a user role from the system tab, but instead add it on each
resource (e.g. on each VM) it should access?
Is there another way to manage permissions? How do you personally manage
this? Do you automate it with scripts?
Thanks for your ideas and suggestions.
(using 3.6)
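One scriptable route is the REST API, where a per-VM role assignment is a
single POST. A hedged sketch (host, credentials, and IDs are placeholders):
# Grant UserRole to one user on one VM instead of a system-wide role.
curl -k -u 'admin@internal:password' -H 'Content-Type: application/xml' \
    -d '<permission><role><name>UserRole</name></role><user id="USER_ID"/></permission>' \
    'https://engine.example.com/ovirt-engine/api/vms/VM_ID/permissions'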
7 years, 8 months
Ovirt importing VM
by rightkicktech.gmail.com
Hi,
I am using oVirt 4.1 on top of GlusterFS.
I have defined two data domains, one for the engine and one for VMs. The
export and ISO domains are also on separate dedicated Gluster volumes.
When I try to import a VM from the export domain, the VM is imported into
the engine data domain. I would expect to be able to select which data
domain to import it into. Am I missing something?
Thanks,
Alex
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
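For reference, the REST API does allow naming the target data domain when
importing from an export domain; a hedged sketch (host, credentials, and
all IDs are placeholders):
# Import a VM from the export domain into a chosen data domain.
curl -k -u 'admin@internal:password' -H 'Content-Type: application/xml' \
    -d '<action><cluster id="CLUSTER_ID"/><storage_domain id="DATA_SD_ID"/></action>' \
    'https://engine.example.com/ovirt-engine/api/storagedomains/EXPORT_SD_ID/vms/VM_ID/import'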
7 years, 8 months
Re: [ovirt-users] lvscan on rhv host
by Nir Soffer
On Mon, Mar 6, 2017 at 1:03 AM, Marcin Kruk <askifyouneed(a)gmail.com> wrote:
Please reply to the list, other users may find this discussion valuable.
> 1. Perfect and clear explanation.
> 2. Could you explain "These are probably OVF_STORE volumes, used to keep
> vms ovs."? What is "vms ovs", and what valuable information is in the
> OVF file which you extracted?
OVF is a standard format for describing a VM:
https://en.wikipedia.org/wiki/Open_Virtualization_Format
All the VMs that have a disk on a storage domain are stored in the
OVF_STORE disks using the OVF format, so you can restore all the VMs from
storage if you lose your engine database.
Or, if you want to move an entire storage domain from one setup to
another, you can detach the storage domain from one setup, attach it to
the other setup, and import all the VMs.
The OVF is also used by hosted engine to obtain the engine VM setup from
storage and start the engine VM after a host is rebooted.
> 3. perfect and clear
> Thank you
>
> 2017-03-05 12:32 GMT+01:00 Nir Soffer <nsoffer(a)redhat.com>:
>>
>> On Sun, Mar 5, 2017 at 10:08 AM, Marcin Kruk <askifyouneed(a)gmail.com> wrote:
>> > VDSM side means RHV host side, right?
>> > I missed the LVM tags, so the command
>> > lvs -o vg_name,name,lv_tags is very helpful, thanks.
>> >
>> > 1. Is there a command-line tool to show the disk ID, alias, and
>> > 'attached to' fields?
>>
>> No, you need to extract this info manually.
>>
>> Here is an example
>>
>> # lvs -o lv_name,size,attr,tags aed577ea-d1ca-4ebe-af80-f852c7ce59bb
>>   LV                                    LSize   Attr       LV Tags
>>   91799827-85b8-450d-a521-42de12fa08e6    1.00g -wi-ao---- IU_db343be2-f709-4835-b1c5-a1bbdb650b2a,MD_8,PU_93331705-46be-4cb8-9dc2-c1559843fd4a
>>   93331705-46be-4cb8-9dc2-c1559843fd4a    3.00g -wi-ao---- IU_239d9e96-ad7d-4a7d-83af-d593877db11b,MD_7,PU_00000000-0000-0000-0000-000000000000
>>   dfbf69fc-3371-42d4-8298-415a6ad4244a  128.00m -wi------- IU_7324cafc-905f-475e-8217-1a919bcca9e9,MD_5,PU_00000000-0000-0000-0000-000000000000
>>   ea556b30-e62b-4d66-b832-459a5dd01890  128.00m -wi------- IU_aab45dbc-c016-4990-b834-ce455bbc4fef,MD_4,PU_00000000-0000-0000-0000-000000000000
>>   ids                                   128.00m -wi-ao----
>>   inbox                                 128.00m -wi-a-----
>>   leases                                  2.00g -wi-a-----
>>   master                                  1.00g -wi-ao----
>>   metadata                              512.00m -wi-a-----
>>   outbox                                128.00m -wi-a-----
>>   xleases                                 1.00g -wi-a-----
>>
>> What is dfbf69fc-3371-42d4-8298-415a6ad4244a?
>>
>> MD_5 means this volume's metadata is at offset 5 * 512 in the
>> 143dc2d0-8e1a-4b95-a306-2a26a7b4832f metadata LV.
>>
>> # dd if=/dev/aed577ea-d1ca-4ebe-af80-f852c7ce59bb/metadata bs=512 count=1 skip=5
>> DOMAIN=aed577ea-d1ca-4ebe-af80-f852c7ce59bb
>> CTIME=1487525999
>> FORMAT=RAW
>> DISKTYPE=2
>> LEGALITY=LEGAL
>> SIZE=262144
>> VOLTYPE=LEAF
>> DESCRIPTION={"Updated":true,"Size":30720,"Last Updated":"Fri Feb 24 20:49:24 EST 2017","Storage Domains":[{"uuid":"aed577ea-d1ca-4ebe-af80-f852c7ce59bb"}],"Disk Description":"OVF_STORE"}
>> IMAGE=7324cafc-905f-475e-8217-1a919bcca9e9
>> PUUID=00000000-0000-0000-0000-000000000000
>> MTIME=0
>> POOL_UUID=
>> TYPE=PREALLOCATED
>> GEN=0
>> EOF
>> 1+0 records in
>> 1+0 records out
>> 512 bytes (512 B) copied, 0.00146441 s, 350 kB/s
>>
>> So this is an OVF_STORE disk; see below.
>>
>> >
>> > 2. I found information about the metadata, outbox, etc. volumes,
>> > but I cannot find info about the LVM volumes of 128 MiB size.
>> > Are these metadata volumes? E.g., from lvscan:
>> > ACTIVE '/dev/b41214a0-4748-47a8-85f2-9ad1e573ab10/57e904c5-3267-4d94-a52c-babb06912d90' [128.00 MiB] inherit
>>
>> These are probably OVF_STORE volumes, used to keep vms ovs.
>>
>> You can extract the data from these volumes like this:
>>
>> lvchange -ay aed577ea-d1ca-4ebe-af80-f852c7ce59bb/dfbf69fc-3371-42d4-8298-415a6ad4244a
>>
>> # tar tvf /dev/aed577ea-d1ca-4ebe-af80-f852c7ce59bb/dfbf69fc-3371-42d4-8298-415a6ad4244a
>> -rw-r--r-- 0/0           138 2017-02-25 03:49 info.json
>> -rw-r--r-- 0/0         11068 2017-02-25 03:49 55e347d5-4074-4408-b782-c43011483845.ovf
>> -rw-r--r-- 0/0         10306 2017-02-25 03:49 bcb03d22-0799-4b0e-bcb3-f69954e6f5eb.ovf
>>
>> # tar xvf /dev/aed577ea-d1ca-4ebe-af80-f852c7ce59bb/dfbf69fc-3371-42d4-8298-415a6ad4244a bcb03d22-0799-4b0e-bcb3-f69954e6f5eb.ovf
>> bcb03d22-0799-4b0e-bcb3-f69954e6f5eb.ovf
>>
>> # dd if=bcb03d22-0799-4b0e-bcb3-f69954e6f5eb.ovf bs=512 count=1
>> <?xml version="1.0" encoding="UTF-8"?><ovf:Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1/" xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationS..." xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettin..." xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ovf:version="4.1.0.0"><References><File ovf:href="239d9e96-ad7d-4a7d-83af-d593877db11b/93331705-46be-4cb8-9dc2-c1559843fd4a" ovf:id="93331705-46be-4cb8-9dc2-c1559843
>> 1+0 records in
>> 1+0 records out
>> 512 bytes (512 B) copied, 8.0163e-05 s, 6.4 MB/s
>>
>> >
>> > 3. Are they removed when, for example, the virtual machine or disk
>> > is deleted?
>>
>> Yes, we create an LV when you create a disk or a snapshot, and remove
>> it when you delete the VM, disk, or snapshot.
>>
>> Nir
>
>
7 years, 8 months