Reading the docs all day long helped me to set up a nice data center in
iSCSI mode, connected to a big LUN on a SAN.
Many, many things are working, mostly thanks to you, the people of this list.
Apart from this oVirt setup (1 manager, 3 nodes, 1 SAN), I have a
completely separate Ubuntu hypervisor running standalone local KVM
with local storage.
I don't have a clear view of how I will manage to import these VMs into oVirt.
Of course, I've read about ovirt-v2v (and its huge number of
dependencies...), but I'm not sure whether that is the way to go.
As far as I've understood, v2v seems to be dedicated to connections
between oVirt datacenters, or VMware or Xen platforms, but I see
nothing about connecting to a distant standalone KVM hypervisor.
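For the remote-KVM case, older virt-v2v builds did document pulling a guest over a qemu+ssh libvirt connection into an NFS export domain. A hedged sketch that only assembles the command for review (all host/path/guest names are placeholder assumptions; the exact options should be checked against your virt-v2v man page):

```shell
# Hedged sketch: assemble (and print, rather than blindly run) a virt-v2v
# command that would pull a guest from a standalone KVM host over ssh into
# an NFS export domain. All names below are placeholder assumptions.
KVM_HOST=kvmhost.example.com                        # the standalone Ubuntu hypervisor
EXPORT_DOMAIN=nfsserver.example.com:/export/domain  # an NFS export storage domain
GUEST=myguest                                       # libvirt domain name on the KVM host
cmd="virt-v2v -ic qemu+ssh://root@${KVM_HOST}/system -o rhev -os ${EXPORT_DOMAIN} ${GUEST}"
echo "$cmd"
```

Printing instead of executing makes it safe to sanity-check the connection URI and target before committing to a long conversion run.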
One more thing that is unclear to me, and seems related, is the notion
of an export/import domain. I read that this mechanism could allow me to
export (and then back up) my VMs, and could also help me import some VMs.
But I read that the export domain has to be the same type as my
datacenter (iSCSI), so this doesn't help me with my standalone KVM host.
I'd be glad if someone could shed some light on these points.
Hello, I have a test environment where I am short on hardware.
My engine is based on Fedora 18, and I would like to test FreeIPA on CentOS.
What are the drawbacks of creating a VM that is also an authentication
domain for oVirt itself (apart from the obvious ones)?
Can I set it so that it is the first VM to start?
Can I set a subset of VMs to start only if this one is up and providing
IPA service (for example, via a test-connect command)?
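On the test-connect idea: as far as I know there is no built-in inter-VM start dependency in oVirt, so the ordering would have to live in a wrapper script. A minimal sketch of just the probe, assuming the IPA VM's address and the LDAP port (both placeholders here):

```shell
# Hedged sketch: probe the FreeIPA VM's LDAP port before starting the VMs
# that depend on it. Host/port are placeholder assumptions; the actual VM
# start would be done separately (GUI, ovirt-shell, or the REST API).
ipa_host=127.0.0.1   # replace with the FreeIPA VM's address
ipa_port=389         # LDAP; 88 (Kerberos) or 443 (web UI) would also work as probes
if timeout 3 bash -c "exec 3<>/dev/tcp/${ipa_host}/${ipa_port}" 2>/dev/null; then
    ipa_state=up     # IPA answers: safe to start the dependent VMs
else
    ipa_state=down   # hold off and retry later
fi
echo "ipa is ${ipa_state}"
```

A port probe only shows the service is listening, not that it is healthy; for a stronger check, an `ipactl status` over ssh inside the guest would be closer to a real "is giving IPA service" test.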
Thanks for your very detailed report. Unfortunately, I can't find any
clue related to this network interface status problem, sorry for that.
Maybe others can add more comments.
At the moment, I suspect the link of em1.311, and probably its
underlying interface em1, went down occasionally and came back up a
short time later. Is that possible? You could check the system log
'/var/log/messages' for related messages.
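Kernel link-flap messages usually look like the (illustrative) sample below, so a grep along these lines against the real /var/log/messages on the host would show whether em1 or em1.311 bounced:

```shell
# Sketch: look for link up/down events on em1 / em1.311. The sample log
# lines are illustrative only; on the host, grep the real /var/log/messages.
cat > /tmp/messages.sample <<'EOF'
Apr  9 10:01:02 host kernel: e1000e: em1 NIC Link is Down
Apr  9 10:01:05 host kernel: e1000e: em1 NIC Link is Up 1000 Mbps Full Duplex
EOF
grep -E 'em1(\.311)?.*Link is (Up|Down)' /tmp/messages.sample
```

The exact wording of the messages depends on the NIC driver, so loosening the pattern to just `em1.*[Ll]ink` is a reasonable first pass.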
> Thanks for your input Mark.
> At the moment all is well again with the host; anyway I'm going to
> give a long explanation of what I did below and so possible reasons
> for that situation.
> There is also some food for developers' answers in that ...
> I cannot connect at the moment to the host to cut and paste in text
> but I have access to java console and here is a screenshot containing
> the line you requested (captured now that all is OK and the host is up
> and running with two VMs):
> Which fields are expected to be indicators of problems, as I had yesterday?
You can check the 'state' field of each interface.
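On a shell on the host, the same 'state' information can be pulled out of `ip -o link` output; the line below is a made-up sample just to show the parsing:

```shell
# Sketch: extract the 'state' token for an interface. The sample line is an
# illustrative stand-in for real `ip -o link` output on the host.
line='4: em1.311@em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT'
state=$(echo "$line" | awk '{ for (i = 1; i <= NF; i++) if ($i == "state") print $(i + 1) }')
echo "$state"
```

Piping the real `ip -o link` through the same awk filter gives one state per interface, which is what to watch while reproducing the down/up problem.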
> The story:
> oVirt is 3.2.1
> - create a datacenter of type local_on_host
> - create a cluster
> - the ovirtmgmt is intended to be vlan tagged but I forgot it... damn...
> - I add a host (f18) where I prepared network configured with vlan
> - the host deploy completes successfully --> here in my opinion a
> possible bug because ovirtmgmt is not tagged?
> - host reboots and apparently all is ok
> - I can create local_on_host SD and ISO
> - all is up and running without any VM yet
> The host is a blade inside an IMS enclosure and I discover that
> actually I can configure the storage assigned (second disk of the
> blade) as FC and that there is also multipath for this kind of servers
> (see the other thread of mine).
> The optimal thing would be to deactivate everything and redefine the DC
> as Fibre Channel type, but it seems not so easy...
> I probably make an error at this point because instead of directly
> putting host into maintenance I follow these steps in the gui:
> - expand my DC at the left
> - expand STORAGE of that DC on the left
> - select my storage domain (LSTORAGE) on the left
> On the right pane I select the line of LSTORAGE and in the bottom
> pane I select the "datacenter" label where I then select the line of
> my DC.
> I choose "maintenance"
> --> what is it supposed I have done this way?
> Put storage domain in maintenance or DC or what?
> Is it supposed to be a correct operation what I've done if the DC is
> Up and this is the only SD I have?
> Or would it be correct to receive an error from the system?
> Because I don't receive any error, and I see the SD down and the host
> stuck in the status "Preparing for maintenance"
> I don't remember correctly my steps here, possibly restart of host
> that is recognized then as unassigned.
> After that I cannot "force remove" the DC and I cannot remove the
> cluster or host because it is not in maintenance.
> After I also restart the engine service and reboot the host, it comes
> up normally, with the whole DC in the old config.
> At this point I can put host in maintenance and force remove the DC
> I create a new DC with same name but with type FC and ovirtmgmt as
> vlan tagged and I see that the existing cluster is still present, so I
> attach it to the new DC (that has the same name as before).
> I also find that there is my host that seems to automatically be in
> this cluster.
> But at this point when I activate it, it comes up but after a few
> minutes I get the message I posted at the beginning
> In the host details, its network interfaces stay down.
> Tried different restarts of host.
> So I decide to put host into maintenance and select into the gui to
> "reinstall" it.
> After this, all goes well again.
> Sorry for the long story
On Tue, Apr 9, 2013 at 9:10 PM, Joop wrote:
> For the first part I don't have an answer, but the softlockups sound
> familiar, and the only solution I have found so far that works reliably
> is to switch back to kernel 3.6.10.
> A higher kernel will sometimes work after a reboot, but more often it
> will cause softlockups as soon as multipathd is started.
Is there any Bugzilla tracking this?
For my part, I opened one,
because with some servers I also have the problem that I'm dropped into
a dracut shell with kernels newer than the stock initial F18 one.
Are there other ones?
Has anyone already tried the latest kernels, such as 3.8.5-201.fc18.x86_64?
What kernel is shipped with the ovirt-node ISO? Could we eventually use
it on a standard Fedora 18 system?
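Until the softlockups are tracked down, a quick check like this (a sketch; 3.6.10 is just the last version reported in this thread to behave with multipathd) tells whether a host is running a potentially affected kernel:

```shell
# Hedged sketch: warn if the running kernel is newer than 3.6.10, since
# newer kernels were reported in this thread to softlockup with multipathd.
good="3.6.10"
running=$(uname -r | cut -d- -f1)   # strip the release suffix, e.g. 3.8.5-201.fc18.x86_64 -> 3.8.5
if [ "$running" != "$good" ] && printf '%s\n%s\n' "$good" "$running" | sort -V -C; then
    echo "kernel ${running} is newer than ${good}: softlockups with multipathd are possible"
else
    echo "kernel ${running} is ${good} or older"
fi
```

`sort -V -C` does the version-aware comparison (it succeeds when its input is already in version order), which avoids hand-parsing the dotted version numbers.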
BTW: on my home PC, where I have
model name : AMD Athlon(tm) II X4 630 Processor
and an all-in-one 3.2.1 installation on Fedora 18, I have no
problems with the 3.8.5-201.fc18.x86_64 kernel.
Not a big deal, but could oVirt have its own logo and map it to the
favicon on the portals?
At the moment I see the well-known Red Hat logo...
Fedora has its own even though it is sponsored by Red Hat, as I imagine
oVirt is...
Just for knowledge, not debate...
PS: at the moment it seems that ovirt.org has no favicon, so this could
be the occasion to make both...
I am using oVirt 3.2 on Fedora 18:
[wil@bufferoverflow ~]$ rpm -q vdsm
(the engine is built from sources).
I seem to have hit this bug:
in the following configuration:
- Single host (no migrations)
- Created a VM, installed an OS (Fedora 18) inside it
- Stopped the VM
- Created a template from it
- Created an additional VM from the template using thin provisioning
- Started the second VM
In addition to the errors in the logs, the storage domains (both data and
ISO) crashed, i.e. they went to the "unknown" and "inactive" states,
respectively (see the attached engine.log).
I attached the VDSM and engine logs.
Is there a way to work around this problem?
It happens repeatedly.
What is the status of support for the SPICE console with IE9 (32-bit, on
32-bit Win7 in my case) and 3.2 final?
I have a Windows XP VM, but when I try to open its console from
Win7+IE9 I don't get any ActiveX install prompt....
IE exact version:
If, after a few minutes, I click the console button again, I get
Error: A Request to the Server failed with the following Status Code: 400
Tried with both the admin and user portals.
I have the chance to work on Intel blades where I have the option to
configure shared disks between them at the enclosure level.
Their controller is SCSI.
How could I configure this as shared storage in oVirt?
I presume neither FC nor iSCSI is possible... so what can I do, in your
opinion?
Thanks in advance