Out of sync, host's network config differs from DC
by femi adegoke
Property: default route, host - true, DC - false
I have 4 NICs:
bond0 = 2 x 10G
eno1 = ovirtmgmt
eno2 = VM traffic
eno2 says it is out of sync ("host's network config differs from DC",
default route: host - true, DC - false).
I have tried "Sync All Networks" but the message remains.
See attached.
Where can I look to fix the issue?
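For reference, here is a small sketch I've been using to see exactly which
properties the engine reports as out of sync, via the Python SDK (the engine
URL, credentials and host name are placeholders, and I'm not certain every
engine version populates reported_configurations on this call):

# Sketch: list out-of-sync properties of each network attachment on a host.
# URL, credentials and the host name below are placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder
    username='admin@internal',
    password='password',                                  # placeholder
    insecure=True,
)
try:
    hosts_service = connection.system_service().hosts_service()
    host = hosts_service.list(search='name=myhost')[0]    # placeholder host name
    attachments = hosts_service.host_service(host.id).network_attachments_service()
    for attachment in attachments.list():
        for rc in (attachment.reported_configurations or []):
            if not rc.in_sync:
                # actual = reported on the host, expected = as defined in the DC
                # (as I understand the API model)
                print('%s: host=%s DC=%s' % (rc.name, rc.actual_value, rc.expected_value))
finally:
    connection.close()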
5 years, 11 months
oVirt Node on CentOS 7.5 and AMD EPYC Support
by Tobias Scheinert
Hi,
I am currently building a new virtualization cluster with oVirt, using
AMD EPYC processors (AMD EPYC 7351P). At the moment I'm running oVirt
Node 4.2.3 on CentOS 7.4.1708.
The processor type is recognized as "AMD Opteron G3". With that CPU model
the VMs cannot use hardware AES (AES-NI), which results in poor performance
in our case.
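As a quick check from inside a Linux guest, I'm reading /proc/cpuinfo to see
whether the aes flag is exposed at all (just a small sketch):

# Check whether the "aes" CPU flag is visible inside a Linux guest.
flags = set()
with open('/proc/cpuinfo') as f:
    for line in f:
        if line.startswith('flags'):
            flags.update(line.split(':', 1)[1].split())
print('aes flag present' if 'aes' in flags else 'no hardware AES exposed to this guest')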
I found some information indicating that this problem should be solved with
CentOS 7.5:
--> <https://access.redhat.com/errata/RHEA-2018:1488>
My questions:
- Is there any further information about AMD EPYC support?
- Is there any information about an update of oVirt Node to CentOS 7.5?
Greetings, Tobias
5 years, 11 months
FOSDEM’19 Virtualization & IaaS Devroom CfP
by Piotr Kliczewski
We are excited to announce that the
call for proposals is now open for the Virtualization & IaaS devroom at the
upcoming FOSDEM 2019, to be hosted on February 2nd 2019.
This year will mark FOSDEM’s 19th anniversary as one of the longest-running
free and open source software developer events, attracting thousands of
developers and users from all over the world. FOSDEM will be held once
again in Brussels, Belgium, on February 2nd & 3rd, 2019.
This devroom is a collaborative effort, and is organized by dedicated folks
from projects such as OpenStack, Xen Project, oVirt, QEMU, KVM, and
Foreman. We would like to invite all those who are involved in these fields
to submit your proposals by December 1st, 2018.
About the Devroom
The Virtualization & IaaS devroom will feature sessions on open source
hypervisors and virtual machine managers such as Xen Project, KVM, bhyve,
and VirtualBox, as well as Infrastructure-as-a-Service projects such as
KubeVirt, Apache CloudStack, OpenStack, oVirt, QEMU, and OpenNebula.
This devroom will host presentations that focus on topics of shared
interest, such as KVM; libvirt; shared storage; virtualized networking;
cloud security; clustering and high availability; interfacing with multiple
hypervisors; hyperconverged deployments; and scaling across hundreds or
thousands of servers.
Presentations in this devroom will be aimed at developers working on these
platforms who are looking to collaborate and improve shared infrastructure
or solve common problems. We seek topics that encourage dialog between
projects and continued work post-FOSDEM.
Important Dates
Submission deadline: 1 December 2018
Acceptance notifications: 14 December 2018
Final schedule announcement: 21 December 2018
Devroom: 2nd February 2019
Submit Your Proposal
All submissions must be made via the Pentabarf event planning site[1]. If
you have not used Pentabarf before, you will need to create an account. If
you submitted proposals for FOSDEM in previous years, you can use your
existing account.
After creating the account, select Create Event to start the submission
process. Make sure to select Virtualization and IaaS devroom from the Track
list. Please fill out all the required fields, and provide a meaningful
abstract and description of your proposed session.
Submission Guidelines
We expect more proposals than we can possibly accept, so it is vitally
important that you submit your proposal on or before the deadline. Late
submissions are unlikely to be considered.
All presentation slots are 30 minutes, with 20 minutes planned for
presentations, and 10 minutes for Q&A.
All presentations will be recorded and made available under Creative
Commons licenses. In the Submission notes field, please indicate that you
agree that your presentation will be licensed under the CC-By-SA-4.0 or
CC-By-4.0 license and that you agree to have your presentation recorded.
For example:
"If my presentation is accepted for FOSDEM, I hereby agree to license all
recordings, slides, and other associated materials under the Creative
Commons Attribution Share-Alike 4.0 International License. Sincerely,
<NAME>."
In the Submission notes field, please also confirm that if your talk is
accepted, you will be able to attend FOSDEM and deliver your presentation.
We will not consider proposals from prospective speakers who are unsure
whether they will be able to secure funds for travel and lodging to attend
FOSDEM. (Sadly, we are not able to offer travel funding for prospective
speakers.)
Speaker Mentoring Program
As a part of the rising efforts to grow our communities and encourage a
diverse and inclusive conference ecosystem, we're happy to announce that
we'll be offering mentoring for new speakers. Our mentors can help you with
tasks such as reviewing your abstract, reviewing your presentation outline
or slides, or practicing your talk with you.
You may apply to the mentoring program as a newcomer speaker if you:
Never presented before or
Presented only lightning talks or
Presented full-length talks at small meetups (<50 people)
Submission Guidelines
Mentored presentations will have 25-minute slots, where 20 minutes will
include the presentation and 5 minutes will be reserved for questions.
The number of newcomer session slots is limited, so we will probably not be
able to accept all applications.
You must submit your talk and abstract to apply for the mentoring program;
our mentors are volunteering their time and will happily provide feedback,
but they won't write your presentation for you!
If you are experiencing problems with Pentabarf, the proposal submission
interface, or have other questions, you can email our devroom mailing
list[2] and we will try to help you.
How to Apply
In addition to agreeing to video recording and confirming that you can
attend FOSDEM in case your session is accepted, please write "speaker
mentoring program application" in the "Submission notes" field, and list
any prior speaking experience or other relevant information for your
application.
Call for Mentors
Interested in mentoring newcomer speakers? We'd love to have your help!
Please email iaas-virt-devroom at lists.fosdem.org with a short speaker
biography and any specific fields of expertise (for example, KVM,
OpenStack, storage, etc.) so that we can match you with a newcomer speaker
from a similar field. Estimated time investment can be as low as 5-10
hours in total, usually distributed weekly or bi-weekly.
Never mentored a newcomer speaker but interested in trying? Email Brian
Proffitt[3], the mentoring program coordinator, and he will be happy to
answer your questions!
Code of Conduct
Following the release of the updated code of conduct for FOSDEM, we'd like
to remind all speakers and attendees that all of the presentations and
discussions in our devroom are held under the guidelines set in the CoC and
we expect attendees, speakers, and volunteers to follow the CoC at all
times.
If you submit a proposal and it is accepted, you will be required to
confirm that you accept the FOSDEM CoC. If you have any questions about the
CoC or wish to have one of the devroom organizers review your presentation
slides or any other content for CoC compliance, please email us and we will
do our best to assist you.
Call for Volunteers
We are also looking for volunteers to help run the devroom. We need
assistance keeping time for the speakers and helping with video in the
devroom. Please contact Brian Proffitt for more information.
Questions?
If you have any questions about this devroom, please send your questions to
our devroom mailing list. You can also subscribe to the list to receive
updates about important dates, session announcements, and to connect with
other attendees.
See you all at FOSDEM!
[1] <https://penta.fosdem.org/submission/FOSDEM19>
[2] iaas-virt-devroom at lists.fosdem.org
<https://lists.fosdem.org/listinfo/fosdem>
[3] bkp at redhat.com
6 years
VM stuck in paused mode with Cluster Compatibility Version 3.6 on 4.2 cluster
by Marco Lorenzo Crociani
Hi,
we upgraded oVirt from version 4.1 to 4.2.6 and rebooted all VMs.
We missed two VMs that were still at Cluster Compatibility Version 3.6.
There was a Gluster/network I/O problem and VMs got paused. We were able
to recover all the other VMs from the paused state, but two VMs
won't run because:
"Cannot run VM. The Custom Compatibility Version of VM VM_NAME (3.6) is
not supported in Data Center compatibility version 4.1."
Can we force the Custom Compatibility Version of the paused VMs to 4.1?
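Something like the following is what I had in mind, via the Python SDK (a
sketch only; the connection details are placeholders, and I don't know whether
the engine accepts this update while the VM is paused):

# Sketch: set the Custom Compatibility Version of a VM to 4.1.
# Engine URL and credentials are placeholders; VM_NAME as in the error message.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder
    username='admin@internal',
    password='password',                                  # placeholder
    insecure=True,
)
try:
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=VM_NAME')[0]
    vms_service.vm_service(vm.id).update(
        types.Vm(custom_compatibility_version=types.Version(major=4, minor=1))
    )
finally:
    connection.close()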
Regards,
--
Marco Crociani
6 years
Metrics Store installation - ansible playbook "deploy_cluster" - docker_image_availability
by Markus Schaufler
Hi,
Trying to install the Metrics Store following the updated install guide: https://www.ovirt.org/documentation/metrics-install-guide/Setting_Up_Open...
The Prerequisites and Network Check playbooks ran through. When executing
ANSIBLE_LOG_PATH=/tmp/ansible.log ansible-playbook -vvv -e @/root/vars.yaml -i /root/ansible-inventory-origin-39-aio playbooks/deploy_cluster.yml
I get the following failure:
#################################
CHECK [memory_availability : localhost] *******************************************************************************************************************************
fatal: [localhost]: FAILED! => {
"changed": false,
"checks": {
"disk_availability": {},
"docker_image_availability": {
"failed": true,
"failures": [
[
"OpenShiftCheckException",
"One or more required container images are not available:\n cockpit/kubernetes:latest,\n openshift/origin-deployer:v3.9.0,\n openshift/origin-docker-registry:v3.9.0,\n openshift/origin-haproxy-router:v3.9.0,\n openshift/origin-pod:v3.9.0\nChecked with: skopeo inspect [--tls-verify=false] [--creds=<user>:<pass>] docker://<registry>/<image>\nDefault registries searched: docker.io\nFailed connecting to: docker.io\n"
]
],
"msg": "One or more required container images are not available:\n cockpit/kubernetes:latest,\n openshift/origin-deployer:v3.9.0,\n openshift/origin-docker-registry:v3.9.0,\n openshift/origin-haproxy-router:v3.9.0,\n openshift/origin-pod:v3.9.0\nChecked with: skopeo inspect [--tls-verify=false] [--creds=<user>:<pass>] docker://<registry>/<image>\nDefault registries searched: docker.io\nFailed connecting to: docker.io\n"
},
"docker_storage": {},
"memory_availability": {},
"package_availability": {
"changed": false,
"invocation": {
"module_args": {
"packages": [
"PyYAML",
"bash-completion",
"bind",
"ceph-common",
"cockpit-bridge",
"cockpit-docker",
"cockpit-system",
"cockpit-ws",
"dnsmasq",
"docker",
"etcd",
"firewalld",
"flannel",
"glusterfs-fuse",
"httpd-tools",
"iptables",
"iptables-services",
"iscsi-initiator-utils",
"libselinux-python",
"nfs-utils",
"ntp",
"openssl",
"origin",
"origin-clients",
"origin-master",
"origin-node",
"origin-sdn-ovs",
"pyparted",
"python-httplib2",
"yum-utils"
]
}
}
},
"package_version": {
"changed": false,
"invocation": {
"module_args": {
"package_list": [
{
"check_multi": false,
"name": "openvswitch",
"version": [
"2.6",
"2.7",
"2.8",
"2.9"
]
},
{
"check_multi": false,
"name": "origin",
"version": "3.9"
},
{
"check_multi": false,
"name": "origin-master",
"version": "3.9"
},
{
"check_multi": false,
"name": "origin-node",
"version": "3.9"
}
],
"package_mgr": "yum"
}
}
}
},
"msg": "One or more checks failed",
"playbook_context": "install"
}
NO MORE HOSTS LEFT ****************************************************************************************************************************************************
to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.retry
PLAY RECAP ************************************************************************************************************************************************************
localhost : ok=53 changed=0 unreachable=0 failed=1
INSTALLER STATUS ******************************************************************************************************************************************************
Initialization : Complete (0:00:16)
Health Check : In Progress (0:00:44)
This phase can be restarted by running: playbooks/openshift-checks/pre-install.yml
Failure summary:
1. Hosts: localhost
Play: OpenShift Health Checks
Task: Run health checks (install) - EL
Message: One or more checks failed
Details: check "docker_image_availability":
One or more required container images are not available:
cockpit/kubernetes:latest,
openshift/origin-deployer:v3.9.0,
openshift/origin-docker-registry:v3.9.0,
openshift/origin-haproxy-router:v3.9.0,
openshift/origin-pod:v3.9.0
Checked with: skopeo inspect [--tls-verify=false] [--creds=<user>:<pass>] docker://<registry>/<image>
Default registries searched: docker.io
Failed connecting to: docker.io
The execution of "playbooks/deploy_cluster.yml" includes checks designed to fail early if the requirements of the playbook are not met. One or more of these checks failed. To disregard these results,explicitly disable checks by setting an Ansible variable:
openshift_disable_check=docker_image_availability
Failing check names are shown in the failure details above. Some checks may be configurable by variables if your requirements are different from the defaults; consult check documentation.
Variables can be set in the inventory or passed on the command line using the -e flag to ansible-playbook.
############################################
I'm using a proxy server - might that cause this problem? If so, where in the different (sub)playbooks would I have to set the environment variables?
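If it helps, the openshift-ansible inventory apparently accepts proxy settings
along the lines of openshift_http_proxy / openshift_https_proxy /
openshift_no_proxy (please double-check the docs for your version). As a first
step I'm probing registry connectivity through the proxy from the installer
host with this small sketch (the proxy URL is a placeholder; an HTTP error such
as 401 still means the registry was reached, only a connection error or
timeout means it was not):

# Probe the default registry through a proxy; PROXY_URL is a placeholder.
try:
    from urllib.request import ProxyHandler, build_opener   # Python 3
    from urllib.error import HTTPError
except ImportError:
    from urllib2 import ProxyHandler, build_opener, HTTPError  # Python 2

PROXY_URL = "http://proxy.example.com:3128"  # placeholder

opener = build_opener(ProxyHandler({"http": PROXY_URL, "https": PROXY_URL}))
try:
    resp = opener.open("https://registry-1.docker.io/v2/", timeout=10)
    print("reachable, HTTP status: %s" % resp.getcode())
except HTTPError as err:
    print("reachable, HTTP status: %s" % err.code)   # e.g. 401 still means reachable
except Exception as err:
    print("NOT reachable through the proxy: %s" % err)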
Thanks for any hint on this!
6 years
vdsmd.service stuck in state: activating
by sceglimilano@gmail.com
= Noob questions =
A) First time here. I couldn't find the rulebook/netiquette for the oVirt
mailing list - could you give me a link?
B) What do you use here, top or bottom posting?
C) What is the line length limit? I've written this email manually entering a
new line after 80 chars - is that fine?
D) What is the log length limit?
E) When should I use pastebin or similar websites?
= /Noob questions =
= Issue questions =
TL;DR:
Self-hosted engine deployment fails; vdsmd.service has this error, which I
can't find on Google:
'sysctl: cannot stat /proc/sys/ssl: No such file or directory'
Long description:
I've been trying for a while to deploy an oVirt self-hosted engine, but it
failed every time. After every failed deployment the node status ended up as
'FAIL' (or 'DEGRADED', I don't remember), and to restore it to 'OK' I had to
run 'vdsm-tool configure --force'.
The vdsm-tool configuration never completed fully successfully:
< code >
abrt is not configured for vdsm
lvm is configured for vdsm
libvirt is not configured for vdsm yet
FAILED: conflicting vdsm and libvirt-qemu tls configuration.
vdsm.conf with ssl=True requires the following changes:
libvirtd.conf: listen_tcp=0, auth_tcp="sasl", listen_tls=1
qemu.conf: spice_tls=1.
Current revision of multipath.conf detected, preserving
< /code >
I set 'auth_tcp = "sasl"' (in "/etc/libvirtd.conf") and 'spice_tls=1' (in
"/etc/libvirt/qemu.conf") and restarted vdsmd.service, but it still fails:
< code >
Starting Virtual Desktop Server Manager...
_init_common.sh[2839]: vdsm: Running mkdirs
_init_common.sh[2839]: vdsm: Running configure_coredump
_init_common.sh[2839]: vdsm: Running configure_vdsm_logs
_init_common.sh[2839]: vdsm: Running wait_for_network
_init_common.sh[2839]: vdsm: Running run_init_hooks
_init_common.sh[2839]: vdsm: Running check_is_configured
_init_common.sh[2839]: abrt is already configured for vdsm
_init_common.sh[2839]: lvm is configured for vdsm
_init_common.sh[2839]: libvirt is already configured for vdsm
_init_common.sh[2839]: Current revision of multipath.conf detected, preserving
_init_common.sh[2839]: vdsm: Running validate_configuration
_init_common.sh[2839]: SUCCESS: ssl configured to true. No conflicts
_init_common.sh[2839]: vdsm: Running prepare_transient_repository
_init_common.sh[2839]: vdsm: Running syslog_available
_init_common.sh[2839]: vdsm: Running nwfilter
_init_common.sh[2839]: vdsm: Running dummybr
_init_common.sh[2839]: vdsm: Running tune_system
_init_common.sh[2839]: sysctl: cannot stat /proc/sys/ssl: No such file or directory
systemd[1]: vdsmd.service: control process exited, code=exited status=1
systemd[1]: Failed to start Virtual Desktop Server Manager.
systemd[1]: Unit vdsmd.service entered failed state.
systemd[1]: vdsmd.service failed.
systemd[1]: vdsmd.service holdoff time over, scheduling restart.
< /code >
I have no idea how to investigate, solve, or troubleshoot this issue.
I updated yesterday to the latest version, but nothing changed:
ovirt-release-master-4.3.0-0.1.master.20180820000052.gitdd598f0.el7.noarch
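My current guess (only a guess, not a confirmed root cause) is that a stray
'ssl' line ended up in one of the sysctl configuration files that the
tune_system step applies, so I'm scanning for it with this small Python sketch:
< code >
# Look for a stray "ssl" key in the standard sysctl configuration locations,
# which would explain "sysctl: cannot stat /proc/sys/ssl".
import glob

paths = (["/etc/sysctl.conf"]
         + sorted(glob.glob("/etc/sysctl.d/*.conf"))
         + sorted(glob.glob("/usr/lib/sysctl.d/*.conf")))
for path in paths:
    try:
        with open(path) as f:
            for lineno, line in enumerate(f, 1):
                key = line.split("=", 1)[0].strip()
                if key == "ssl" or key.startswith("ssl."):
                    print("%s:%d: %s" % (path, lineno, line.rstrip()))
    except IOError:
        pass
< /code >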
= /Issue questions =
6 years
Networking - How to pass all vlan (trunk) to a guest VM?
by Jonathan Greg
Hi,
I couldn't find documentation about it... How can I pass all VLANs (a trunk) to a guest VM (without SR-IOV)?
My switch is configured to trunk all VLANs.
I have configured a logical network with the "Enable VLAN tagging" box unchecked and assigned it to a VM. My understanding of this setting is that both tagged and untagged packets should be forwarded to the guest.
Unfortunately it doesn't work at all: if I run tcpdump on the VM NIC attached to this logical network, I don't see anything... I would really like to run a virtual router or firewall on my setup without having to shut down my VM and attach a new vNIC each time I want to add a new VLAN.
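For what it's worth, this is the quick sniffer sketch I'm running on the host
side to confirm whether 802.1Q-tagged frames even arrive on the interface (the
interface name is a placeholder; needs root, Linux only; note that NIC
rx-vlan-offload can strip the tag before it reaches this socket, so seeing no
0x8100 frames here is not conclusive):

# Sniff a few frames and report any 802.1Q tags seen (ethertype 0x8100).
import socket
import struct

IFACE = "eno2"  # placeholder: host NIC or bridge carrying the trunk
ETH_P_ALL = 0x0003

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
s.bind((IFACE, 0))
for _ in range(200):
    frame = s.recv(65535)
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype == 0x8100:
        vlan_id = struct.unpack("!H", frame[14:16])[0] & 0x0FFF
        print("tagged frame seen, VLAN %d" % vlan_id)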
Any idea?
Jonathan
6 years
Storage domain mount error: Lustre file system (Posix compliant FS)
by okok102928@fusiondata.co.kr
Hi.
I am an oVirt user in Korea. I am working on VDI. It's a pleasure to meet you, the oVirt specialists.
(I do not speak English well... Thank you for your understanding!)
I am testing the Lustre file system in an oVirt / RH(E)V environment.
(The reason is simple: GlusterFS and NFS have performance limits, and SAN storage and good software-defined storage are quite expensive.)
Testing the file system performance was successful.
As expected, Lustre showed amazing performance.
However, there was an error when adding the Lustre storage as a POSIX compliant FS storage domain.
Domain Function : Data
Storage Type : POSIX compliant FS
Host to Use : [SPM_HOSTNAME]
Name : [STORAGE_DOMAIN_NAME]
Path : 10.10.10.15@tcp:/lustre/vmstore
VFS Type : lustre
Mount Options :
The vdsm debug logs are shown below.
2018-10-25 12:46:58,963+0900 INFO (jsonrpc/2) [storage.xlease] Formatting index for lockspace u'c0ef7ee6-1da9-4eef-9e03-387cd3a24445' (version=1) (xlease:653)
2018-10-25 12:46:58,971+0900 DEBUG (jsonrpc/2) [root] /usr/bin/dd iflag=fullblock of=/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases oflag=direct,seek_bytes seek=1048576 bs=256512 count=1 conv=notrunc,nocreat,fsync (cwd None) (commands:65)
2018-10-25 12:46:58,985+0900 DEBUG (jsonrpc/2) [root] FAILED: <err> = "/usr/bin/dd: error writing '/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.000943896 s, 0.0 kB/s\n"; <rc> = 1 (commands:86)
2018-10-25 12:46:58,985+0900 INFO (jsonrpc/2) [vdsm.api] FINISH createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.000943896 s, 0.0 kB/s\n" from=::ffff:192.168.161.104,52188, flow_id=794bd395, task_id=c9847bf3-2267-483b-9099-f05a46981f7f (api:50)
2018-10-25 12:46:58,985+0900 ERROR (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') Unexpected error (task:875)
Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run return fn(*args, **kargs) File "<string>", line 2, in createStorageDomain
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 2591, in createStorageDomain
storageType, domVersion)
File "/usr/lib/python2.7/site-packages/vdsm/storage/nfsSD.py", line 87, in create
remotePath, storageType, version)
File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 465, in _prepareMetadata
cls.format_external_leases(sdUUID, xleases_path)
File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 1200, in format_external_leases
xlease.format_index(lockspace, backend)
File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 661, in format_index
index.dump(file)
File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 761, in dump
file.pwrite(INDEX_BASE, self._buf)
File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 994, in pwrite
self._run(args, data=buf[:])
File "/usr/lib/python2.7/site-packages/vdsm/storage/xlease.py", line 1011, in _run
raise cmdutils.Error(args, rc, "[suppressed]", err)
Error: Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.000943896 s, 0.0 kB/s\n"
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') Task._run: c9847bf3-2267-483b-9099-f05a46981f7f (6, u'c0ef7ee6-1da9-4eef-9e03-387cd3a24445', u'vmstore', u'10.10.10.15@tcp:/lustre/vmstore', 1, u'4') {} failed - stopping task (task:894)
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') stopping in state failed (force False) (task:1256)
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') ref 1 aborting True (task:1002)
2018-10-25 12:46:58,986+0900 INFO (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') aborting: Task is aborted: u'Command [\'/usr/bin/dd\', \'iflag=fullblock\', u\'of=/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases\', \'oflag=direct,seek_bytes\', \'seek=1048576\', \'bs=256512\', \'count=1\', \'conv=notrunc,nocreat,fsync\'] failed with rc=1 out=\'[suppressed]\' err="/usr/bin/dd: error writing \'/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases\': Invalid argument\\n1+0 records in\\n0+0 records out\\n0 bytes (0 B) copied, 0.000943896 s, 0.0 kB/s\\n"' - code 100 (task:1181)
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') Prepare: aborted: Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.000943896 s, 0.0 kB/s\n" (task:1186)
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') ref 0 aborting True (task:1002)
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') Task._doAbort: force False (task:937)
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.ResourceManager.Owner] Owner.cancelAll requests {} (resourceManager:947)
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') moving from state failed -> state aborting (task:602)
2018-10-25 12:46:58,986+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') _aborting: recover policy none (task:557)
2018-10-25 12:46:58,987+0900 DEBUG (jsonrpc/2) [storage.TaskManager.Task] (Task='c9847bf3-2267-483b-9099-f05a46981f7f') moving from state failed -> state failed (task:602)
2018-10-25 12:46:58,987+0900 DEBUG (jsonrpc/2) [storage.ResourceManager.Owner] Owner.releaseAll requests {} resources {} (resourceManager:910)
2018-10-25 12:46:58,987+0900 DEBUG (jsonrpc/2) [storage.ResourceManager.Owner] Owner.cancelAll requests {} (resourceManager:947)
2018-10-25 12:46:58,987+0900 ERROR (jsonrpc/2) [storage.Dispatcher] FINISH createStorageDomain error=Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.000943896 s, 0.0 kB/s\n" (dispatcher:86)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/storage/dispatcher.py", line 73, in wrapper
result = ctask.prepare(func, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 108, in wrapper
return m(self, *a, **kw)
File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 1189, in prepare
raise self.error
Error: Command ['/usr/bin/dd', 'iflag=fullblock', u'of=/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases', 'oflag=direct,seek_bytes', 'seek=1048576', 'bs=256512', 'count=1', 'conv=notrunc,nocreat,fsync'] failed with rc=1 out='[suppressed]' err="/usr/bin/dd: error writing '/rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore/c0ef7ee6-1da9-4eef-9e03-387cd3a24445/dom_md/xleases': Invalid argument\n1+0 records in\n0+0 records out\n0 bytes (0 B) copied, 0.000943896 s, 0.0 kB/s\n"
2018-10-25 12:46:58,987+0900 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call StorageDomain.create failed (error 351) in 0.41 seconds (__init__:573)
2018-10-25 12:46:59,058+0900 DEBUG (jsonrpc/3) [jsonrpc.JsonRpcServer] Calling 'StoragePool.disconnectStorageServer' in bridge with {u'connectionParams': [{u'id': u'316c5f1f-753e-42a1-8e30-4ee6f976906a', u'connection': u'10.10.10.15@tcp:/lustre/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'lustre', u'password': '********', u'port': u''}], u'storagepoolID': u'00000000-0000-0000-0000-000000000000', u'domainType': 6} (__init__:590)
2018-10-25 12:46:59,058+0900 WARN (jsonrpc/3) [devel] Provided value "6" not defined in StorageDomainType enum for StoragePool.disconnectStorageServer (vdsmapi:275)
2018-10-25 12:46:59,058+0900 WARN (jsonrpc/3) [devel] Provided parameters {u'id': u'316c5f1f-753e-42a1-8e30-4ee6f976906a', u'connection': u'10.10.10.15@tcp:/lustre/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'lustre', u'password': '********', u'port': u''} do not match any of union ConnectionRefParameters values (vdsmapi:275)
2018-10-25 12:46:59,059+0900 DEBUG (jsonrpc/3) [storage.TaskManager.Task] (Task='3c6b249f-a47f-47f1-a647-5893b6f60b7c') moving from state preparing -> state preparing (task:602)
2018-10-25 12:46:59,059+0900 INFO (jsonrpc/3) [vdsm.api] START disconnectStorageServer(domType=6, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'316c5f1f-753e-42a1-8e30-4ee6f976906a', u'connection': u'10.10.10.15@tcp:/lustre/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'vfs_type': u'lustre', u'password': '********', u'port': u''}], options=None) from=::ffff:192.168.161.104,52188, flow_id=f1bf4bf8-9033-42af-9329-69960638ba0e, task_id=3c6b249f-a47f-47f1-a647-5893b6f60b7c (api:46)
2018-10-25 12:46:59,059+0900 INFO (jsonrpc/3) [storage.Mount] unmounting /rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore (mount:212)
2018-10-25 12:46:59,094+0900 DEBUG (jsonrpc/3) [storage.Mount] /rhev/data-center/mnt/10.10.10.15@tcp:_lustre_vmstore unmounted: 0.03 seconds (utils:452)
What I have found so far:
1. When the dd command is used with the direct flag, Lustre only works in multiples of 4k (4096 bytes).
Therefore bs=256512, which is not a multiple of 4096, causes the error (see the sketch after this list).
2. The error occurs regardless of the oVirt / RH(E)V version; I have tested 3.6, 4.1, and 4.2 environments.
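To double-check point 1 above without going through vdsm, I used a small
Python 3 sketch that does a single O_DIRECT write on the Lustre mount (the
test path is a placeholder; 262144 is simply the nearest 4k-aligned size, for
comparison):

# Reproduce the direct-I/O size-alignment behaviour on the Lustre mount.
# TEST_PATH is a placeholder; the buffer comes from an anonymous mmap so it is
# page-aligned (O_DIRECT also cares about memory alignment, not just I/O size).
import mmap
import os

TEST_PATH = "/rhev/data-center/mnt/lustre_test/alignment_probe"  # placeholder

def direct_write(size):
    """Try one O_DIRECT write of `size` bytes; return None on success, else the OSError."""
    buf = mmap.mmap(-1, size)  # anonymous, zero-filled, page-aligned buffer
    fd = os.open(TEST_PATH, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
    try:
        os.write(fd, buf)
        return None
    except OSError as exc:
        return exc
    finally:
        os.close(fd)
        buf.close()

for size in (256512, 262144):  # 256512 = 501 * 512; 262144 = 64 * 4096
    err = direct_write(size)
    print("%d %s" % (size, "OK" if err is None else "failed: %s" % err))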
I searched hard but could not find a similar case. I would like to ask three questions:
1. Is there a way to fix this in oVirt (a safe bypass or some configuration)?
2. (Extension of question 1) Where does the block size 256512 come from? Why 256512?
3. Is this something that needs to be solved on the Lustre side (for example, a setting that allows direct I/O in units of 512 bytes)?
I need help. Thank you for your reply.
6 years
Involuntary emergency upgrade from 4.1 to 4.2
by djohnson@maxistechnology.com
I'm looking for a hand in recovering my oVirt cluster from a hardware failure. The hard drive on my cluster controller failed, and I would like to recover from backup.
The problem is that the cluster was on 4.1, which is less than a year old but was nevertheless removed from the active repositories back in May, and 4.2 will not restore from 4.1 backups.
The storage domains are all intact (I think), and the hosts are still running (unmanaged). I've tried to manually restore the engine from backups, but either the upgrade is reinitializing or I am missing something.
Any ideas?
6 years