----- Original Message -----
Looking at the slave, it has 12G free:
[eedri@fc20-vm06 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda3 18G 4.9G 12G 30% /
devtmpfs 4.4G 0 4.4G 0% /dev
tmpfs 4.4G 0 4.4G 0% /dev/shm
tmpfs 4.4G 368K 4.4G 1% /run
tmpfs 4.4G 0 4.4G 0% /sys/fs/cgroup
tmpfs 4.4G 424K 4.4G 1% /tmp
/dev/vda1 93M 71M 16M 83% /boot
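
For reference, a rough pre-build check along these lines (assuming the job
workspace sits on the root filesystem shown above; the ~10G figure is taken
from the console output quoted below) would let a job fail fast on slaves
that are too small, instead of dying mid-build:

    # Sketch of a pre-build free-space check. $WORKSPACE is the Jenkins
    # workspace directory; ~10G is what the appliance build reports needing.
    needed_kb=$((10 * 1024 * 1024))
    avail_kb=$(df --output=avail -k "$WORKSPACE" | tail -n 1)
    if [ "$avail_kb" -lt "$needed_kb" ]; then
        echo "Not enough disk space: ${avail_kb}K free, ${needed_kb}K needed"
        exit 1
    fi
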
I can't tell which slave it used since build 66 is no longer there.
I can tell that this happens with several slaves currently.
In the last 10 days or so we had only a few successful builds.
Do we have a label for slaves with big disks?
- fabian
e.
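
On the label question: one quick way to check whether such a label already
exists, and which slaves carry it, is the Jenkins label API. The "big-disk"
name below is only a guess for illustration:

    # List the slaves attached to a (hypothetical) "big-disk" label via the
    # Jenkins JSON API; replace the label name with whatever we actually use.
    curl -s 'http://jenkins.ovirt.org/label/big-disk/api/json?pretty=true' \
        | grep '"nodeName"'
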
----- Original Message -----
> From: "Sandro Bonazzola" <sbonazzo(a)redhat.com>
> To: "Fabian Deutsch" <fabiand(a)redhat.com>, "infra"
<infra(a)ovirt.org>
> Sent: Tuesday, June 16, 2015 11:43:26 AM
> Subject: [ticket] not enough disk space on slaves for building node
> appliance
>
>
> http://jenkins.ovirt.org/job/ovirt-appliance-engine_3.5_merged/66/console
>
> 15:05:33 Max needed: 9.8G. Free: 9.0G. May need another 748.2M.
>
>
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> _______________________________________________
> Infra mailing list
> Infra(a)ovirt.org
>
> http://lists.ovirt.org/mailman/listinfo/infra
>
>
>
--
Eyal Edri
Supervisor, RHEV CI
EMEA ENG Virtualization R&D
Red Hat Israel
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)