help, manager hard drive died (3.3.4)
by David Smith
So the oVirt manager's hard disk died (it was running 3.3.4), and I can't
get the drive to spin up to pull any data off.
I have an old copy of the manager database, which was living on the manager
machine.
What is the best procedure for restoring my oVirt setup? In the past I've
only been able to import export and ISO domains, never the data domain.
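My best guess at a procedure, assuming the old copy is a plain pg_dump of
the engine database, would be something like the following (package and
path names are just guesses from my setup):
# Rebuild a manager at the same 3.3.4 version on new hardware,
# then swap in the saved database.
yum install ovirt-engine        # pinned to the old 3.3.4 packages
engine-setup                    # recreate a baseline engine install
service ovirt-engine stop
# Replace the freshly created database with the saved copy.
su - postgres -c "dropdb engine"
su - postgres -c "createdb -O engine engine"
su - postgres -c "psql -d engine -f /backup/engine-db-copy.sql"
service ovirt-engine start
Does that sound right, or is there a supported tool for this?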
Thanks in advance,
David
10 years, 5 months
ISO
by Moritz Mlynarek
Hello,
After installing oVirt 3.4 I tried to add an ISO file to my oVirt system,
but there was no local ISO share.
The command "engine-iso-uploader list" reported "ERROR: There are no ISO
storage domains". I tried to create a new domain (ISO / Local on Host), but
I was not able to choose a host.
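Would a plain NFS ISO domain work instead? This is what I was going to try
next (the paths are only my guess):
# On the engine machine: export a directory over NFS for ISO images
mkdir -p /var/lib/exports/iso
chown 36:36 /var/lib/exports/iso    # vdsm:kvm ownership
echo "/var/lib/exports/iso *(rw)" >> /etc/exports
exportfs -ra
# After adding it in the web UI as an NFS ISO domain:
engine-iso-uploader --iso-domain=ISO_DOMAIN upload CentOS-6.5.iso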
Sorry for my poor English. :(
Moritz Mlynarek
10 years, 5 months
ISO_DOMAIN can't be attached
by Peter Haraldson
Hi
I have one Engine, one node.
Fresh install of oVirt 3.4 on new CentOS 6.5.
Engine holds ISO_DOMAIN, created automatically during setup.
The ISO_DOMAIN exists under Default -> Storage. When I try to attach it,
it is visible in the web interface for a few seconds as "locked", then it
disappears.
Log says:
ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper]
(org.ovirt.thread.pool-6-thread-24) [5a5288b3] The connection with
details ctmgm0.certitrade.net:/ct/isos failed because of error code 477
and error message is: problem while trying to mount target (thanks for
that error message... )
And a bit down:
ERROR
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(org.ovirt.thread.pool-6-thread-27) [43f2f913] Command
org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand throw
Vdc Bll exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException:
IRSGenericException: IRSErrorException: Failed to
AttachStorageDomainVDS, error = Storage domain does not exist:
('0f6485ab-0301-4989-a59a-56efcd447ba0',), code = 358 (Failed with error
StorageDomainDoesNotExist and code 358)
I can, however, mount the ISO_DOMAIN manually; it has correct permissions
and a directory structure obviously created by oVirt (see below).
There is no problem adding NFS storage from the node.
iptables is off, there is no firewalld, and SELinux is permissive.
ISO_DOMAIN mounted on engine ("mount -t nfs 172.19.1.10:/ct/isos/
/mnt/tmp/")
ls -l /mnt/
total 4
drwxr-xr-x. 3 vdsm kvm 4096 Jun 10 19:18 tmp
ls -l /mnt/tmp/0f6485ab-0301-4989-a59a-56efcd447ba0/images/
total 4
drwxr-xr-x. 2 vdsm kvm 4096 Jun 10 19:19
11111111-1111-1111-1111-111111111111
I have read many posts about this problem (it seems to be a common one),
but I have found no solution.
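Next I plan to reproduce the mount with what I believe are the default
options vdsm passes, in case one of those is the trigger:
mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 \
    ctmgm0.certitrade.net:/ct/isos /mnt/tmp/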
Complete log from one try:
2014-06-11 15:48:23,513 INFO
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(org.ovirt.thread.pool-6-thread-28) [7b977e41] Running command:
AttachStorageDomainToPoolCommand internal: false. Entities affected :
ID: 0f6485ab-0301-4989-a59a-56efcd447ba0 Type: Storage
2014-06-11 15:48:23,525 INFO
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand]
(org.ovirt.thread.pool-6-thread-23) [795f995f] Running command:
ConnectStorageToVdsCommand internal: true. Entities affected : ID:
aaa00000-0000-0000-0000-123456789aaa Type: System
2014-06-11 15:48:23,533 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(org.ovirt.thread.pool-6-thread-23) [795f995f] START,
ConnectStorageServerVDSCommand(HostName = ExtTest, HostId =
1947b88f-b02e-4acf-bb30-1d92de626b45, storagePoolId =
00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList
= [{ id: 53c268c3-5b04-4d42-bfa3-58d31c982a5d, connection:
ctmgm0.certitrade.net:/ct/isos, iqn: null, vfsType: null, mountOptions:
null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id:
5107b953
2014-06-11 15:48:23,803 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-23) [795f995f] Correlation ID: null,
Call Stack: null, Custom Event ID: -1, Message: Failed to connect Host
ExtTest to the Storage Domains ISO_DOMAIN.
2014-06-11 15:48:23,805 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(org.ovirt.thread.pool-6-thread-23) [795f995f] FINISH,
ConnectStorageServerVDSCommand, return:
{53c268c3-5b04-4d42-bfa3-58d31c982a5d=477}, log id: 5107b953
2014-06-11 15:48:23,808 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-23) [795f995f] Correlation ID: null,
Call Stack: null, Custom Event ID: -1, Message: The error message for
connection ctmgm0.certitrade.net:/ct/isos returned by VDSM was: Problem
while trying to mount target
2014-06-11 15:48:23,810 ERROR
[org.ovirt.engine.core.bll.storage.NFSStorageHelper]
(org.ovirt.thread.pool-6-thread-23) [795f995f] The connection with
details ctmgm0.certitrade.net:/ct/isos failed because of error code 477
and error message is: problem while trying to mount target
2014-06-11 15:48:23,814 ERROR
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand]
(org.ovirt.thread.pool-6-thread-23) [795f995f] Transaction rolled-back
for command: org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand.
2014-06-11 15:48:23,816 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] (org.ovirt.thread.pool-6-thread-28)
[7b977e41] START, AttachStorageDomainVDSCommand( storagePoolId =
00000002-0002-0002-0002-00000000011b, ignoreFailoverLimit = false,
storageDomainId = 0f6485ab-0301-4989-a59a-56efcd447ba0), log id: a2cca1b
2014-06-11 15:48:24,515 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] (org.ovirt.thread.pool-6-thread-28)
[7b977e41] Failed in AttachStorageDomainVDS method
2014-06-11 15:48:24,547 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(org.ovirt.thread.pool-6-thread-28) [7b977e41]
IrsBroker::Failed::AttachStorageDomainVDS due to: IRSErrorException:
IRSGenericException: IRSErrorException: Failed to
AttachStorageDomainVDS, error = Storage domain does not exist:
('0f6485ab-0301-4989-a59a-56efcd447ba0',), code = 358
2014-06-11 15:48:24,555 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] (org.ovirt.thread.pool-6-thread-28)
[7b977e41] FINISH, AttachStorageDomainVDSCommand, log id: a2cca1b
2014-06-11 15:48:24,556 ERROR
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(org.ovirt.thread.pool-6-thread-28) [7b977e41] Command
org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand throw
Vdc Bll exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException:
IRSGenericException: IRSErrorException: Failed to
AttachStorageDomainVDS, error = Storage domain does not exist:
('0f6485ab-0301-4989-a59a-56efcd447ba0',), code = 358 (Failed with error
StorageDomainDoesNotExist and code 358)
2014-06-11 15:48:24,559 INFO
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(org.ovirt.thread.pool-6-thread-28) [7b977e41] Command
[id=bff69195-7d00-4e18-bf5b-705be8d7210f]: Compensating NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap;
snapshot: storagePoolId = 00000002-0002-0002-0002-00000000011b,
storageId = 0f6485ab-0301-4989-a59a-56efcd447ba0.
2014-06-11 15:48:24,580 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-28) [7b977e41] Correlation ID: 7b977e41,
Job ID: fafb4906-a9dd-41fb-b17f-833d831e953e, Call Stack: null, Custom
Event ID: -1, Message: Failed to attach Storage Domain ISO_DOMAIN to
Data Center Default. (User: admin)
Regards
Peter Haraldson
10 years, 5 months
Call for Papers Deadline in One Month: Linux.conf.au
by Brian Proffitt
Conference: Linux.conf.au
Information: Each year, open source geeks from across the globe gather in Australia or New Zealand to meet their fellow technologists, share the latest ideas and innovations, and spend a week discussing and collaborating on open source projects. The conference is well known for the depth of talent among its speakers and delegates, and for its focus on technical Linux content.
Possible topics: Virtualization, oVirt, KVM, libvirt, RDO, OpenStack, Foreman
Date: January 12-15, 2015
Location: Auckland, New Zealand
Website: http://lca2015.linux.org.au/
Call for Papers Deadline: July 13, 2014
Call for Papers URL: http://lca2015.linux.org.au/cfp
Contact me for more information and assistance with presentations.
--
Brian Proffitt
oVirt Community Manager
Project Atomic Community Lead
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
10 years, 5 months
Feature GlusterVolumeSnapshots not in 3.5?
by Jorick Astrego
Hi again,
After reading up on all the backup possibilities for our oVirt cluster
(we currently do in-VM backup with our traditional backup software), I came
across http://www.ovirt.org/Features/GlusterVolumeSnapshots
*Name*: Gluster Volume Snapshot
*Modules*: engine
*Target version*: 3.5
*Status*: Not Started
*Last updated*: 2014-01-21 by Shtripat
It was originally targeted for 3.5, but I haven't seen it in the oVirt
Planning & Tracking document on Google Docs. It looks like a great
feature; is it still planned for some point in the future?
Kind regards,
Jorick Astrego
Netbulae BV
10 years, 5 months
Sanlock issues after upgrading to 3.4
by Jairo Rizzo
Hello,
I have a small 2-node cluster setup running GlusterFS in replication mode:
CentOS v6.5
kernel-2.6.32-431.17.1.el6.x86_64
vdsm-4.14.6-0.el6.x86_64
ovirt-engine-3.4.0-1.el6.noarch (on 1 node)
Basically, I had been running ovirt-engine 3.3 fine for months, then
upgraded to the latest 3.3.x release two days ago and could not join the
nodes to the cluster due to a version mismatch, basically this:
https://www.mail-archive.com/users@ovirt.org/msg17241.html . While trying
to correct that problem I ended up upgrading to 3.4, which created a new
and challenging problem for me. Every couple of hours I get error messages
like this:
Jun 7 13:40:01 hv1 sanlock[2341]: 2014-06-07 13:40:01-0400 19647 [2341]:
s3 check_our_lease warning 70 last_success 19577
Jun 7 13:40:02 hv1 sanlock[2341]: 2014-06-07 13:40:02-0400 19648 [2341]:
s3 check_our_lease warning 71 last_success 19577
Jun 7 13:40:03 hv1 sanlock[2341]: 2014-06-07 13:40:03-0400 19649 [2341]:
s3 check_our_lease warning 72 last_success 19577
Jun 7 13:40:04 hv1 sanlock[2341]: 2014-06-07 13:40:04-0400 19650 [2341]:
s3 check_our_lease warning 73 last_success 19577
Jun 7 13:40:05 hv1 sanlock[2341]: 2014-06-07 13:40:05-0400 19651 [2341]:
s3 check_our_lease warning 74 last_success 19577
Jun 7 13:40:06 hv1 sanlock[2341]: 2014-06-07 13:40:06-0400 19652 [2341]:
s3 check_our_lease warning 75 last_success 19577
Jun 7 13:40:07 hv1 sanlock[2341]: 2014-06-07 13:40:07-0400 19653 [2341]:
s3 check_our_lease warning 76 last_success 19577
Jun 7 13:40:08 hv1 sanlock[2341]: 2014-06-07 13:40:08-0400 19654 [2341]:
s3 check_our_lease warning 77 last_success 19577
Jun 7 13:40:09 hv1 wdmd[2330]: test warning now 19654 ping 19644 close 0
renewal 19577 expire 19657 client 2341
sanlock_1e8615b0-7876-4a03-bdb0-352087fad0f3:1
Jun 7 13:40:09 hv1 wdmd[2330]: /dev/watchdog closed unclean
Jun 7 13:40:09 hv1 kernel: SoftDog: Unexpected close, not stopping
watchdog!
Jun 7 13:40:09 hv1 sanlock[2341]: 2014-06-07 13:40:09-0400 19655 [2341]:
s3 check_our_lease warning 78 last_success 19577
Jun 7 13:40:10 hv1 wdmd[2330]: test warning now 19655 ping 19644 close
19654 renewal 19577 expire 19657 client 2341
sanlock_1e8615b0-7876-4a03-bdb0-352087fad0f3:1
Jun 7 13:40:10 hv1 sanlock[2341]: 2014-06-07 13:40:10-0400 19656 [2341]:
s3 check_our_lease warning 79 last_success 19577
Jun 7 13:40:11 hv1 wdmd[2330]: test warning now 19656 ping 19644 close
19654 renewal 19577 expire 19657 client 2341
sanlock_1e8615b0-7876-4a03-bdb0-352087fad0f3:1
Jun 7 13:40:11 hv1 sanlock[2341]: 2014-06-07 13:40:11-0400 19657 [2341]:
s3 check_our_lease failed 80
Jun 7 13:40:11 hv1 sanlock[2341]: 2014-06-07 13:40:11-0400 19657 [2341]:
s3 all pids clear
Jun 7 13:40:11 hv1 wdmd[2330]: /dev/watchdog reopen
Jun 7 13:41:32 hv1 sanlock[2341]: 2014-06-07 13:41:32-0400 19738 [5050]:
s3 delta_renew write error -202
Jun 7 13:41:32 hv1 sanlock[2341]: 2014-06-07 13:41:32-0400 19738 [5050]:
s3 renewal error -202 delta_length 140 last_success 19577
Jun 7 13:41:42 hv1 sanlock[2341]: 2014-06-07 13:41:42-0400 19748 [5050]:
1e8615b0 close_task_aio 0 0x7fd3040008c0 busy
Jun 7 13:41:52 hv1 sanlock[2341]: 2014-06-07 13:41:52-0400 19758 [5050]:
1e8615b0 close_task_aio 0 0x7fd3040008c0 busy
When this happens, one of the nodes loses access to the storage, and all
of its VMs pause or stop. I'm wondering if you could provide some advice.
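Specifically, these are the things I was planning to check next; am I on
the right track? (The UUID is the lockspace from the log above; the mount
path is guessed from my setup.)
# Show which lockspaces sanlock holds and its recent renewal history
sanlock client status
sanlock client log_dump | tail -50
# Time a read of the domain's ids file to see whether the gluster
# mount itself is slow
time dd of=/dev/null bs=1M count=1 \
    if=/rhev/data-center/mnt/glusterSD/hv1:data/1e8615b0-7876-4a03-bdb0-352087fad0f3/dom_md/ids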
Thank you
--Rizzo
10 years, 5 months
oVirt 3.4.2 Hosted
by Bob Doolittle
Hi,
I'm taking a preview look at the 3.4.2 release notes, and I see there is
no mention of Hosted Engine.
Can we assume this means the process for upgrading a Hosted Engine setup
is the same as for traditional deployments? We know that installation is
quite different...
-Bob
10 years, 5 months
FW: Storage Domain Connection inquiry
by shawn o'connor
Hello,
We started using oVirt a year or two ago; obviously we are noobs, having
only been on board since oVirt 3.3. Anyway, I have the self-hosted version
of oVirt 3.4 running, and it is so much better than our previous
environment. I deleted a storage domain from my old 3.3 environment and
tried to import it into the new 3.4, but it says the connection is already
in use (and I have also removed the export domain I had originally set up
in oVirt 3.4).
Is there any way I could get the engine to forget that connection so I can
import the domain?
My main issue is getting my VMs from oVirt 3.3 into 3.4; I haven't had
much success (if you can point me in the right direction).
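Would something like this be a safe way to clear the stale connection on
the engine? (The table name is my guess from poking around the schema; I
would back up the database first.)
# Run as the postgres user on the engine host
psql -d engine -c "SELECT id, connection FROM storage_server_connections;"
psql -d engine -c "DELETE FROM storage_server_connections WHERE connection = 'oldserver:/old/export';"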
Thanks,
Shawn O'Connor
Zimax Networks
Chief Technology Officer
+1 818.643.9951 | Email: soconnor(a)zimax.net
650 South Grand Avenue, Ste 119
Los Angeles, CA 90017 | USA
10 years, 5 months
Hacking in Ceph rather than Gluster
by Nathan Stratton
So I understand that the news is still fresh and there may not be much
going on yet in making Ceph work with oVirt, but I thought I would reach
out and see whether it is possible to hack them together and still use
librbd rather than NFS.
I know, why not just use Gluster... the problem is that I have tried to
use Gluster for VM storage for years, and I still don't think it is ready.
Ceph still needs work in other areas, but this is one area where I think
it shines. This is a new lab cluster, and I would like to try Ceph over
Gluster if possible.
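For the hack itself, I was picturing something along these lines just to
prove the plumbing works before touching oVirt at all (the pool and image
names are invented):
# Confirm qemu on the hosts was built with rbd support and can
# reach the ceph cluster
qemu-img --help | grep rbd
rbd -p vmpool ls
qemu-img create -f raw rbd:vmpool/test-disk 1G
qemu-img info rbd:vmpool/test-disk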
Unless I am missing something, can anyone tell me they are happy with
Gluster as a backend image store? This will be a small 16-node, 10-gigabit
cluster of shared compute/storage (yes, I know people want to keep them
separate).
><>
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580 |
www.broadsoft.com
10 years, 5 months
[ANN] oVirt 3.4.2 Release is now available
by Sandro Bonazzola
The oVirt development team is pleased to announce the general
availability of oVirt 3.4.2 as of Jun 10th 2014. This release
solidifies oVirt as a leading KVM management application and open
source alternative to VMware vSphere.
oVirt is available now for Fedora 19 and Red Hat Enterprise Linux 6.5
(or similar).
This release of oVirt includes numerous bug fixes.
See the release notes [1] for a list of the new features and bugs fixed.
The existing ovirt-3.4 repository has been updated to deliver this
release without the need to enable any other repository. However, since we
introduced package signing, an additional step is required to install the
public keys on your system if you are upgrading from an older release.
Please refer to the release notes [1] for installation and upgrade
instructions.
Please note that the mirrors will need a couple of days to synchronize.
If you want to be sure of using the latest RPMs and don't want to wait for
the mirrors, you can edit /etc/yum.repos.d/ovirt-3.4.repo, commenting out
the mirrorlist line and uncommenting the baseurl line.
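After that edit the stanza would look roughly like this (the exact URLs
below are illustrative; keep whatever your repo file already contains):
[ovirt-3.4]
name=Latest oVirt 3.4.x Releases
#mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.4-el$releasever
baseurl=http://resources.ovirt.org/pub/ovirt-3.4/rpm/el$releasever/
enabled=1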
A new oVirt Node and oVirt Live ISO will be available soon[2].
[1] http://www.ovirt.org/OVirt_3.4.2_Release_Notes
[2] http://resources.ovirt.org/plain/pub/ovirt-3.4/iso/
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
10 years, 5 months