[Users] Disk move when VM is running problem
by Gianluca Cecchi
Hello,
oVirt 3.2.1 on f18 host
I have a running VM with one disk.
The storage domain is FC and the disk is thin provisioned.
The VM has preexisting snapshots.
I select the disk and choose to move it.
At the top of the window I notice this message in red:
Note: Moving the disk(s) while the VM is running
Not sure if that is just general advice...
I proceed anyway and get these events:
Snapshot Auto-generated for Live Storage Migration creation for VM
zensrv was initiated by admin@internal.
Snapshot Auto-generated for Live Storage Migration creation for VM
zensrv has been completed.
User admin@internal moving disk zensrv_Disk1 to domain DS6800_Z1_1181.
during the move:
Total DISK READ: 73398.57 K/s | Total DISK WRITE: 79927.24 K/s
PID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
29513 idle vdsm 72400.02 K/s 72535.35 K/s 0.00 % 87.65 % dd
if=/rhev/data-cent~nt=10240 oflag=direct
2457 be/4 sanlock 336.35 K/s 0.33 K/s 0.00 % 0.12 % sanlock
daemon -U sanlock -G sanlock
4173 be/3 vdsm 0.33 K/s 0.00 K/s 0.00 % 0.00 % python
/usr/share/vdsm~eFileHandler.pyc 43 40
8760 be/4 qemu 0.00 K/s 7574.52 K/s 0.00 % 0.00 % qemu-kvm
-name F18 -S ~on0,bus=pci.0,addr=0x8
2830 be/4 root 0.00 K/s 13.14 K/s 0.00 % 0.00 % libvirtd --listen
27445 be/4 qemu 0.00 K/s 44.67 K/s 0.00 % 0.00 % qemu-kvm
-name zensrv ~on0,bus=pci.0,addr=0x6
3141 be/3 vdsm 0.00 K/s 3.94 K/s 0.00 % 0.00 % python
/usr/share/vdsm/vdsm
vdsm 29513 3141 14 17:14 ? 00:00:17 /usr/bin/dd
if=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/013bcc40-5f3d-4394-bd3b-971b14852654/images/01488698-6420-4a32-9095-cfed1ff8f4bf/2be37d02-b44f-4823-bf26-054d1a1f0c90
of=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/2be37d02-b44f-4823-bf26-054d1a1f0c90
bs=1048576 seek=0 skip=0 conv=notrunc count=10240 oflag=direct
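For scale, count=10240 with bs=1048576 means this dd pass copies 10 GiB. A quick
back-of-the-envelope check (plain Python, numbers taken only from the dd command
line and the iotop output above) puts one such pass at roughly two and a half
minutes:

# Values copied from the dd command line and the iotop output above.
size_bytes = 10240 * 1048576.0      # count * bs of the dd invocation
rate_bytes = 72400.02 * 1024        # DISK READ of the dd process (K/s)

print("%.1f GiB at ~%.0f MB/s -> ~%.0f s per pass"
      % (size_bytes / 2**30, rate_bytes / 1e6, size_bytes / rate_bytes))
# prints: 10.0 GiB at ~74 MB/s -> ~145 s per pass

which is in line with the operation running for a few minutes before failing.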
After a few minutes I get:
User admin@internal have failed to move disk zensrv_Disk1 to domain
DS6800_Z1_1181.
But the VM is actually still running (I had an ssh terminal open on it)
and the disk now appears to be on the target domain, as if the move
operation actually completed OK...
What should I do? Can I safely shut down and restart the VM?
In engine.log around the time of the error I see the following, which makes me
suspect the problem is perhaps in removing the old logical volume:
2013-04-16 17:17:45,039 WARN
[org.ovirt.engine.core.bll.GetConfigurationValueQuery]
(ajp--127.0.0.1-8702-2) calling GetConfigurationValueQuery
(VdcVersion) with null version, using default general for version
2013-04-16 17:18:02,979 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(pool-3-thread-50) [4e37d247] Failed in DeleteImageGroupVDS method
2013-04-16 17:18:02,980 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(pool-3-thread-50) [4e37d247] Error code CannotRemoveLogicalVolume and
error message IRSGenericException: IRSErrorException: Failed to
DeleteImageGroupVDS, error = Cannot remove Logical Volume:
('013bcc40-5f3d-4394-bd3b-971b14852654',
"{'d477fcba-2110-403e-93fe-15565aae5304':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='0a6c7300-011a-46f9-9a5d-d22476a7f4c6'),
'f8eb4d4c-9aae-44b8-9123-73f3182dc4dc':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='2be37d02-b44f-4823-bf26-054d1a1f0c90'),
'2be37d02-b44f-4823-bf26-054d1a1f0c90':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='00000000-0000-0000-0000-000000000000'),
'0a6c7300-011a-46f9-9a5d-d22476a7f4c6':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='f8eb4d4c-9aae-44b8-9123-73f3182dc4dc')}")
2013-04-16 17:18:03,029 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(pool-3-thread-50) [4e37d247] IrsBroker::Failed::DeleteImageGroupVDS
due to: IRSErrorException: IRSGenericException: IRSErrorException:
Failed to DeleteImageGroupVDS, error = Cannot remove Logical Volume:
('013bcc40-5f3d-4394-bd3b-971b14852654',
"{'d477fcba-2110-403e-93fe-15565aae5304':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='0a6c7300-011a-46f9-9a5d-d22476a7f4c6'),
'f8eb4d4c-9aae-44b8-9123-73f3182dc4dc':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='2be37d02-b44f-4823-bf26-054d1a1f0c90'),
'2be37d02-b44f-4823-bf26-054d1a1f0c90':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='00000000-0000-0000-0000-000000000000'),
'0a6c7300-011a-46f9-9a5d-d22476a7f4c6':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='f8eb4d4c-9aae-44b8-9123-73f3182dc4dc')}")
2013-04-16 17:18:03,067 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
(pool-3-thread-50) [4e37d247] FINISH, DeleteImageGroupVDSCommand, log
id: 54616b28
2013-04-16 17:18:03,067 ERROR
[org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand]
(pool-3-thread-50) [4e37d247] Command
org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand throw Vdc Bll
exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException:
IRSGenericException: IRSErrorException: Failed to DeleteImageGroupVDS,
error = Cannot remove Logical Volume:
('013bcc40-5f3d-4394-bd3b-971b14852654',
"{'d477fcba-2110-403e-93fe-15565aae5304':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='0a6c7300-011a-46f9-9a5d-d22476a7f4c6'),
'f8eb4d4c-9aae-44b8-9123-73f3182dc4dc':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='2be37d02-b44f-4823-bf26-054d1a1f0c90'),
'2be37d02-b44f-4823-bf26-054d1a1f0c90':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='00000000-0000-0000-0000-000000000000'),
'0a6c7300-011a-46f9-9a5d-d22476a7f4c6':
ImgsPar(imgs=('01488698-6420-4a32-9095-cfed1ff8f4bf',),
parent='f8eb4d4c-9aae-44b8-9123-73f3182dc4dc')}")
2013-04-16 17:18:03,069 ERROR
[org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand]
(pool-3-thread-50) [4e37d247] Reverting task unknown, handler:
org.ovirt.engine.core.bll.lsm.VmReplicateDiskStartTaskHandler
2013-04-16 17:18:03,088 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand]
(pool-3-thread-50) [4e37d247] START,
VmReplicateDiskFinishVDSCommand(HostName = f18ovn01, HostId =
0f799290-b29a-49e9-bc1e-85ba5605a535,
vmId=c0a43bef-7c9d-4170-bd9c-63497e61d3fc), log id: 6f8ba7ae
2013-04-16 17:18:03,093 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand]
(pool-3-thread-50) [4e37d247] FINISH, VmReplicateDiskFinishVDSCommand,
log id: 6f8ba7ae
2013-04-16 17:18:03,095 ERROR
[org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand]
(pool-3-thread-50) [4e37d247] Reverting task deleteImage, handler:
org.ovirt.engine.core.bll.lsm.CreateImagePlaceholderTaskHandler
2013-04-16 17:18:03,113 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
(pool-3-thread-50) [4e37d247] START, DeleteImageGroupVDSCommand(
storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3,
ignoreFailoverLimit = false, compatabilityVersion = null,
storageDomainId = 14b5167c-5883-4920-8236-e8905456b01f, imageGroupId =
01488698-6420-4a32-9095-cfed1ff8f4bf, postZeros = false, forceDelete =
false), log id: 515ac5ae
2013-04-16 17:18:03,694 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand]
(pool-3-thread-50) [4e37d247] FINISH, DeleteImageGroupVDSCommand, log
id: 515ac5ae
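The ImgsPar mapping in the error above encodes the snapshot chain of this image
through parent UUIDs, which is hard to read inline. A minimal sketch (plain
Python, UUIDs copied verbatim from the log) that reorders it from the base
volume to the active layer:

# Parent links copied from the DeleteImageGroupVDS error above;
# the all-zero UUID marks the base volume of the chain.
parents = {
    'd477fcba-2110-403e-93fe-15565aae5304': '0a6c7300-011a-46f9-9a5d-d22476a7f4c6',
    'f8eb4d4c-9aae-44b8-9123-73f3182dc4dc': '2be37d02-b44f-4823-bf26-054d1a1f0c90',
    '2be37d02-b44f-4823-bf26-054d1a1f0c90': '00000000-0000-0000-0000-000000000000',
    '0a6c7300-011a-46f9-9a5d-d22476a7f4c6': 'f8eb4d4c-9aae-44b8-9123-73f3182dc4dc',
}

# Invert the map and walk from the base volume towards the leaf.
children = dict((parent, child) for child, parent in parents.items())
chain, cur = [], '00000000-0000-0000-0000-000000000000'
while cur in children:
    cur = children[cur]
    chain.append(cur)

print('\n'.join(chain))
# 2be37d02-b44f-4823-bf26-054d1a1f0c90   (base; the volume dd was copying)
# f8eb4d4c-9aae-44b8-9123-73f3182dc4dc
# 0a6c7300-011a-46f9-9a5d-d22476a7f4c6
# d477fcba-2110-403e-93fe-15565aae5304   (active layer)

So the chain has four volumes, with d477fcba as the active layer that the
replication attempt below refers to.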
And in vdsm.log:
Thread-1929238::ERROR::2013-04-16
17:14:02,928::libvirtvm::2320::vm.Vm::(diskReplicateStart)
vmId=`c0a43bef-7c9d-4170-bd9c-63497e61d3fc`::Unable to start the
replication for vda to {'domainID':
'14b5167c-5883-4920-8236-e8905456b01f', 'poolID':
'5849b030-626e-47cb-ad90-3ce782d831b3', 'device': 'disk', 'path':
'/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/d477fcba-2110-403e-93fe-15565aae5304',
'volumeID': 'd477fcba-2110-403e-93fe-15565aae5304', 'volumeChain':
[{'path': '/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/2be37d02-b44f-4823-bf26-054d1a1f0c90',
'domainID': '14b5167c-5883-4920-8236-e8905456b01f', 'volumeID':
'2be37d02-b44f-4823-bf26-054d1a1f0c90', 'imageID':
'01488698-6420-4a32-9095-cfed1ff8f4bf'}, {'path':
'/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/f8eb4d4c-9aae-44b8-9123-73f3182dc4dc',
'domainID': '14b5167c-5883-4920-8236-e8905456b01f', 'volumeID':
'f8eb4d4c-9aae-44b8-9123-73f3182dc4dc', 'imageID':
'01488698-6420-4a32-9095-cfed1ff8f4bf'}, {'path':
'/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/0a6c7300-011a-46f9-9a5d-d22476a7f4c6',
'domainID': '14b5167c-5883-4920-8236-e8905456b01f', 'volumeID':
'0a6c7300-011a-46f9-9a5d-d22476a7f4c6', 'imageID':
'01488698-6420-4a32-9095-cfed1ff8f4bf'}, {'path':
'/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/14b5167c-5883-4920-8236-e8905456b01f/images/01488698-6420-4a32-9095-cfed1ff8f4bf/d477fcba-2110-403e-93fe-15565aae5304',
'domainID': '14b5167c-5883-4920-8236-e8905456b01f', 'volumeID':
'd477fcba-2110-403e-93fe-15565aae5304', 'imageID':
'01488698-6420-4a32-9095-cfed1ff8f4bf'}], 'imageID':
'01488698-6420-4a32-9095-cfed1ff8f4bf'}
Traceback (most recent call last):
File "/usr/share/vdsm/libvirtvm.py", line 2316, in diskReplicateStart
libvirt.VIR_DOMAIN_BLOCK_REBASE_SHALLOW
File "/usr/share/vdsm/libvirtvm.py", line 541, in f
ret = attr(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py",
line 111, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 626, in blockRebase
if ret == -1: raise libvirtError ('virDomainBlockRebase() failed', dom=self)
libvirtError: unsupported flags (0xb) in function qemuDomainBlockRebase
Thread-1929238::DEBUG::2013-04-16
17:14:02,930::task::568::TaskManager.Task::(_updateState)
Task=`27736fbc-7054-4df3-9f26-388601621ea9`::moving from state init ->
state preparing
Thread-1929238::INFO::2013-04-16
17:14:02,930::logUtils::41::dispatcher::(wrapper) Run and protect:
teardownImage(sdUUID='14b5167c-5883-4920-8236-e8905456b01f',
spUUID='5849b030-626e-47cb-ad90-3ce782d831b3',
imgUUID='01488698-6420-4a32-9095-cfed1ff8f4bf', volUUID=None)
Thread-1929238::DEBUG::2013-04-16
17:14:02,931::resourceManager::190::ResourceManager.Request::(__init__)
ResName=`Storage.14b5167c-5883-4920-8236-e8905456b01f`ReqID=`e1aa534c-45b6-471f-9a98-8413d449b480`::Request
was made in '/usr/share/vdsm/storage/resourceManager.py' line '189' at
'__init__'
Thread-1929238::DEBUG::2013-04-16
17:14:02,931::resourceManager::504::ResourceManager::(registerResource)
Trying to register resource
'Storage.14b5167c-5883-4920-8236-e8905456b01f' for lock type 'shared'
Thread-1929238::DEBUG::2013-04-16
17:14:02,931::resourceManager::547::ResourceManager::(registerResource)
Resource 'Storage.14b5167c-5883-4920-8236-e8905456b01f' is free. Now
locking as 'shared' (1 active user)
Thread-1929238::DEBUG::2013-04-16
17:14:02,931::resourceManager::227::ResourceManager.Request::(grant)
ResName=`Storage.14b5167c-5883-4920-8236-e8905456b01f`ReqID=`e1aa534c-45b6-471f-9a98-8413d449b480`::Granted
request
Thread-1929238::DEBUG::2013-04-16
17:14:02,932::task::794::TaskManager.Task::(resourceAcquired)
Task=`27736fbc-7054-4df3-9f26-388601621ea9`::_resourcesAcquired:
Storage.14b5167c-5883-4920-8236-e8905456b01f (shared)
Thread-1929238::DEBUG::2013-04-16
17:14:02,932::task::957::TaskManager.Task::(_decref)
Task=`27736fbc-7054-4df3-9f26-388601621ea9`::ref 1 aborting False
Thread-1929238::DEBUG::2013-04-16
17:14:02,932::lvm::409::OperationMutex::(_reloadlvs) Operation 'lvm
reload operation' got the operation mutex
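The libvirtError in the traceback above is the actual point of failure:
diskReplicateStart asks libvirt for a shallow block copy of vda onto the
already-created destination volume, and this host's libvirt/qemu rejects the
flag combination (0xb is SHALLOW | REUSE_EXT | COPY). A minimal sketch of the
equivalent call through libvirt-python; the VM name comes from the events
above, the destination path is just a placeholder:

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('zensrv')          # VM name from the engine events

# 0x1 | 0x2 | 0x8 = 0xb, the flag value rejected in the traceback.
flags = (libvirt.VIR_DOMAIN_BLOCK_REBASE_SHALLOW |
         libvirt.VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT |
         libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY)

try:
    # Mirror the active layer of vda onto the pre-created destination
    # volume (placeholder path) while the guest keeps running.
    dom.blockRebase('vda', '/rhev/data-center/.../destination-volume',
                    0, flags)
except libvirt.libvirtError as e:
    print(e)   # here: unsupported flags (0xb) in function qemuDomainBlockRebase

The disk name and flags are exactly what the traceback shows; only the
destination path is made up here.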
thanks,
Gianluca
[Users] Starting VM gets paused
by Nicolas Ecarnot
Hi,
After two months of stable usage of this oVirt 3.1 setup, here comes the
first blocking issue for which I have no other means than to ask for some hints.
When I start a VM, the boot process runs fine. If we are fast enough we can
even ssh into it, but 5 seconds later the VM gets paused.
In the manager, I see this:
2013-03-22 09:42:57,435 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerObjectsBuilder]
(QuartzScheduler_Worker-40) Error in parsing vm pause status. Setting
value to NONE
2013-03-22 09:42:57,436 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-40) VM serv-chk-adm3
3e17586d-bf8f-465b-8075-defaac90bc95 moved from PoweringUp --> Paused
And on the host, I see one warning message, no error messages, and many
looping repeated messages:
* Warning:
Thread-1968::WARNING::2013-03-22
09:19:18,536::libvirtvm::1547::vm.Vm::(_readPauseCode)
vmId=`3e17586d-bf8f-465b-8075-defaac90bc95`::_readPauseCode unsupported
by libvirt vm
* Repeated messages, amongst other repeated ones:
Thread-1973::DEBUG::2013-03-22
09:19:20,247::libvirtvm::220::vm.Vm::(_getNetworkStats)
vmId=`3e17586d-bf8f-465b-8075-defaac90bc95`::Network stats not available
Thread-1973::DEBUG::2013-03-22
09:19:20,247::libvirtvm::240::vm.Vm::(_getDiskStats)
vmId=`3e17586d-bf8f-465b-8075-defaac90bc95`::Disk hdc stats not available
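As an aside on the _readPauseCode warning above: that code path asks libvirt
why the guest was paused, and the warning says the query is not supported for
this VM. A minimal sketch (assuming libvirt-python on the host; the VM name is
taken from the engine log above) of doing the same query directly:

import libvirt

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByName('serv-chk-adm3')   # VM name from the engine log

state, reason = dom.state()                # virDomainGetState()
if state == libvirt.VIR_DOMAIN_PAUSED:
    # Reason codes include VIR_DOMAIN_PAUSED_IOERROR, _USER, _MIGRATION, ...
    print('paused, reason code %d' % reason)
else:
    print('state code %d' % state)

The same information is available with "virsh domstate --reason serv-chk-adm3";
an I/O error reason would point at the storage side rather than at the VM
configuration.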
I did my homework and found some bugs that could be similar:
https://bugzilla.redhat.com/show_bug.cgi?id=660598
https://bugzilla.redhat.com/show_bug.cgi?id=672208
and moreover:
https://bugzilla.redhat.com/show_bug.cgi?id=695393
- I tried to restart the node's vdsm daemon: same behavior
- I tried to reboot the node: same behavior
- I tried to restart the manager's engine: same behavior
- I tried to run this VM on another node: same behavior
- I tried to run another VM on the node where I saw the issue: that other VM
runs fine.
I don't know if I have to conclude that this issue is specific to this
VM, but it sounds like yes.
Things to say about this VM:
- it is a RHEL 6, IIRC. It has already been successfully started, migrated,
stopped and rebooted many times in the past.
- it has 3 disks: one for the system and two for data.
- it has no snapshots
- it has no unusual or complicated network setup
My storage domain is a SAN, linked over iSCSI, and it has been doing a good job for months.
I must admit I'm a bit stuck. The last thing I haven't tried is rebooting
the manager, though I'm not sure that would help.
--
Nicolas Ecarnot
Re: [Users] Fwd: Nagios monitoring plugin check_rhev3 1.2 released
by Itamar Heim
> -------- Original Message --------
> Subject: [Users] Nagios monitoring plugin check_rhev3 1.2 released
> Date: Thu, 16 May 2013 16:31:24 +0200
> From: René Koch <r.koch(a)ovido.at>
> To: users <Users(a)ovirt.org>
>
> I'm happy to announce version 1.2 of check_rhev3.
>
> check_rhev3 is a monitoring plugin for Icinga/Nagios and its forks,
> which is
> used to monitor datacenters, clusters, hosts, vms, vm pools and storage
> domains
> of Red Hat Enterprise Virtualization (RHEV) and oVirt virtualization
> environments.
>
> The download locations are
> * https://labs.ovido.at/download/check_rhev3/check_rhev3-1.2.tar.gz
> *
> https://labs.ovido.at/download/check_rhev3/nagios-plugins-rhev3-1.2-1.el6...
>
> *
> https://labs.ovido.at/download/check_rhev3/nagios-plugins-rhev3-1.2-1.el6...
>
>
> For further information on how to install this plugin visit:
> https://github.com/ovido/check_rhev3/wiki/Installation-Documentation
>
> A detailed usage documentation can be found here:
> https://github.com/ovido/check_rhev3/wiki/Usage-Documentation
>
>
> Changelog:
>
> - General:
> - Moved project to github: https://github.com/ovido/check_rhev3
>
> - New features:
> - Verify RHEV-M certificate
> - Allow authentication sessions for authentication in RHEV >= 3.1 and
> oVirt >= 3.1
> - Use option -n <nic> to check a specific nic
>
> - Bugs fixed:
> - Performance data issue with check_multi
>
>
> If you have any questions or ideas, please drop me an email:
> r.koch(a)ovido.at.
>
> Thank you for using check_rhev3.
>
>
>
Hi Rene,
we deployed the plugin and noticed it's flooding the event log with login
events for the user it uses via the REST API.
Can you please add a persistent session to the REST API calls, so login
happens only once and doesn't flood the log?
http://www.ovirt.org/Features/RESTSessionManagement
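For reference, a minimal sketch of what a persistent session looks like against
the REST API, following the feature page above. Python with the requests
library is used purely for illustration, and the URL, credentials and CA path
are placeholders: the first call authenticates and sends Prefer:
persistent-auth, later calls reuse the returned JSESSIONID cookie, so only one
login event is generated.

import requests

API = 'https://rhevm.example.com/api'      # placeholder engine URL
AUTH = ('admin@internal', 'password')      # placeholder credentials

s = requests.Session()
s.verify = '/path/to/ca.crt'               # placeholder CA bundle
s.headers.update({'Prefer': 'persistent-auth'})

# First request: send credentials and ask the engine to keep the session.
r = s.get(API, auth=AUTH)
r.raise_for_status()

# Later requests ride on the JSESSIONID cookie stored in the session,
# so no further logins (and no further login events) are needed.
r = s.get(API + '/vms')
print(r.status_code)

The same idea applies in any HTTP client: send the header and credentials once,
then reuse the cookie for the rest of the check.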
Thanks,
Itamar
[Users] deduplication
by suporte@logicworks.pt
Is deduplication possible?
Regards
Jose
--
Jose Ferradeira
http://www.logicworks.pt
[Users] Openstack Quantum , Cinder and Glance integration with oVirt.
by Romil Gupta
Hi ,
It would be great to know more about the Quantum integration with oVirt; I am
eagerly waiting for its release date.
If possible, can anyone please share references (demo video, ppt, link or git)
for the Cinder and Glance integration?
Regards,
Romil Gupta
[Users] Disk quota and templates bug?
by Jure Kranjc
Hi,
we've encountered a quota allocation problem which seems like a bug.
We are using engine 3.2.2 on CentOS, with the datacenter in enforced quota mode. Scenario:
- Create a virtual machine, seal it and create a template from it. Assign
some quota to it.
- Create a new user and set new quota limits for his username.
- This user creates a new VM from this template. In the New Server/Desktop
dialog, under Resource Allocation, the new disk gets set to the user's quota
(the user only has permission for his own quota). Create the VM.
- When the VM is created, its disk inherits the template's quota rather than
the user's quota that it should get. So the user is consuming the template's
disk quota. Quota for memory and vCPU works OK.
No errors in engine.log.
[Users] iSCSI and snapshots
by Juan Pablo Lorier
Hi,
I'm trying to understand how oVirt manages snapshots, so I can start using
them for VM backups. My VMs each use an iSCSI LUN for the system disk (each
has its own), and I have no extra space to create an LVM snapshot inside the
VM disk.
When I hit "create" in the VM snapshot tab, it creates the snapshot and
reports OK, but I see no info in the right panel (disk, RAM, etc.) and
nothing is created in the data storage domain of the DC, so I don't get
where oVirt is storing the snapshot data.
Do I have to create an export storage domain?
Regards,
Juan Pablo
[Users] Fedora 18 and usb pass-through
by Ryan Wilkinson
I've noticed that with Fedora 17, USB devices plugged in after a SPICE
session is initiated automagically pass through to the virtual desktop
without manually selecting them, but not so with Fedora 18. There they only
pass through if you manually select them from the drop-down bar. Any way to
make 18 work like 17?
[Users] mozilla-xpi for Ubuntu
by Mario Giammarco
Hello,
I need a working mozilla-xpi for Ubuntu 12.04 (and soon 12.10).
It is strange that an open-source project like oVirt only works on Fedora.
Thanks in advance for any help.
Mario