I have a machine running CentOS 4.7, but it keeps reporting "No space left on device" whenever I try to execute anything.
I checked the disk space, which looks like this:
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda1             5.0G  3.4G  1.4G  73% /
none                   92M     0   92M   0% /dev/shm
/dev/hda3              67G  5.5G   58G   9% /vz
I found a similar problem posted here: http://www.linuxquestions.org/questions/linux-general-1/no-space-left-on-device-but-df-shows-free-space-162202/ which says the problem is caused by having more than 1000 files in a directory, but it does not specify which directory.
I tried to run 'yum install bind-chroot' but got an error message like this:
# yum install bind-chroot
Traceback (most recent call last):
File "/usr/bin/yum", line 29, in ?
yummain.main(sys.argv[1:])
File "/usr/share/yum-cli/yummain.py", line 86, in main
base.doLock(YUM_PID_FILE)
File "__init__.py", line 447, in doLock
File "__init__.py", line 487, in _lock
OSError: [Errno 28] No space left on device: '/var/run/yum.pid'
Couldn't find anything related to the matter. Any clues?
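(Note for anyone hitting the same traceback: Errno 28 / ENOSPC can mean the filesystem has run out of either data blocks or inodes, so it is worth checking both for the filesystem holding the failing path; /var/run is taken from the traceback above.)
# check block usage and inode usage for the filesystem holding /var/run
df -h /var/run
df -i /var/run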
Enough Space there, but reports 'no space left on device' error
You appear to be running OpenVZ.
If that is the case, this is not a CentOS issue, and
I suggest you check the OpenVZ fora for disk quota issues.
Re: Enough Space there, but reports 'no space left on device' error
I am just running the OpenVZ kernel installed on top of CentOS 4.7.
None of the VZ containers are running. It was working fine with OpenVZ earlier, but it ran into this problem recently.
# cat /proc/user_beancounters
Version: 2.5
uid  resource           held    maxheld     barrier       limit  failcnt
 0:  kmemsize        2702643    3208335  2147483647  2147483647        0
     lockedpages           0          0  2147483647  2147483647        0
     privvmpages       14181      20416  2147483647  2147483647        0
     shmpages            788       1460  2147483647  2147483647        0
     dummy                 0          0  2147483647  2147483647        0
     numproc              48         58  2147483647  2147483647        0
     physpages          5455       7601  2147483647  2147483647        0
     vmguarpages           0          0  2147483647  2147483647        0
     oomguarpages       5603       7747  2147483647  2147483647        0
     numtcpsock            4          8  2147483647  2147483647        0
     numflock              2          4  2147483647  2147483647        0
     numpty                1          1  2147483647  2147483647        0
     numsiginfo            0          2  2147483647  2147483647        0
     tcpsndbuf         49192      49192  2147483647  2147483647        0
     tcprcvbuf         65536      49152  2147483647  2147483647        0
     othersockbuf      12972      23472  2147483647  2147483647        0
     dgramrcvbuf           0       8380  2147483647  2147483647        0
     numothersock         17         28  2147483647  2147483647        0
     dcachesize            0          0  2147483647  2147483647        0
     numfile            1893       2063  2147483647  2147483647        0
     dummy                 0          0  2147483647  2147483647        0
     dummy                 0          0  2147483647  2147483647        0
     dummy                 0          0  2147483647  2147483647        0
     numiptent           340        340  2147483647  2147483647        0
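(A quick way to scan this output for limits that have actually been hit is to filter on the failcnt column; the sketch below just prints rows whose last field is a nonzero integer. In this dump every failcnt is 0, so the beancounter limits do not look exhausted.)
# print beancounter rows with a nonzero failcnt (last field)
awk '$NF ~ /^[0-9]+$/ && $NF > 0' /proc/user_beancounters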
Re: Enough Space there, but reports 'no space left on device' error
If you are running the OpenVZ kernel, you are not really running CentOS. All bets are off. Try the CentOS kernel and see if the behavior changes. If not, do you have other non-CentOS packages installed?
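(One rough way to spot packages that did not come from CentOS is to check the vendor tag; this is only a heuristic sketch, since vendor strings vary and some third-party packages leave the tag unset.)
# list installed packages whose vendor tag is not CentOS
rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE} %{VENDOR}\n' | grep -vi centos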
Re: Enough Space there, but reports 'no space left on device' error
Are there enough inodes on this fs?
Hint: df -i
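(Illustrative output only; the numbers below are made up, not from the machine above. When inodes are exhausted you see 100% IUse% even though df -h still shows free space:)
# df -i
Filesystem            Inodes  IUsed  IFree IUse% Mounted on
/dev/hda1             655360 655360      0  100% /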
Re: Enough Space there, but reports 'no space left on device' error
[quote]
Buggers wrote:
Are there enough inodes on this fs?
Hint: df -i
[/quote]
Yes, that is right! You must check whether you have free inodes; that was my problem.
Re: Enough Space there, but reports 'no space left on device' error
[b]I've got the solution.[/b]
Just to explain my problem: I had a disk with 60% of its space used but 100% of its inodes used. I discovered this using the df commands.
When you see something like that, there is a very good chance that some directory on your machine is packed with tiny files (under 1 KB each) in huge numbers.
I had one of those directories.
Starting from /, I used du -h --max-depth=1 and found a directory with 20 million little files, totalling only 1.7 GB.
I deleted them all and, after a reboot, the system was fine again!
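(For anyone trying to locate such a directory: du measures bytes rather than file counts, so when inodes are exhausted a per-directory file count is often more telling. A rough sketch; adjust the starting path, and -xdev keeps find on one filesystem:)
# count regular files under each top-level directory, largest counts first
for d in /*/ ; do
    printf '%10d %s\n' "$(find "$d" -xdev -type f 2>/dev/null | wc -l)" "$d"
done | sort -rn | head
(Also note that with millions of files a shell glob like rm dir/* can fail with "argument list too long"; feeding find into xargs avoids that. The path below is just a placeholder:)
# remove the files without building a huge argument list
find /path/to/that/dir -xdev -type f -print0 | xargs -0 rm -f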