Bash fork retry: Resource temporarily unavailable.

Date: 2022-09-22 13:36:22

I have a simple Bash script that does this:

max_proc_snmpget=30

getHostnamesFromRouter() {
    while [ "$(find $proc_dir -name 'snmpgetproc*' | wc -l)" -ge "$max_proc_snmpget" ]; do
        echo "sleeping, fping in progress"
        sleep 1
    done

    temp_ip=$($mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);")
    while read ip codesite; do
        sendSNMPGET $ip $snmp_community $code_site &
    done <<< "$temp_ip"
}

sendSNMPGET() {
    touch $procdir/snmpgetproc.$$
    hostname=$(snmpget -v1 -c $2 $1 sysName.0)
    if [ "$hostname" != "" ]
    then
        #$mysql -h $db_address -u $db_user -p$db_passwd $db_name -e "insert into $db_routeur_table (hostname,code_site,ip) VALUES ($hostname,$1,$3);"
        echo "kikou"
    fi
    rm -f $procdir/snmpgetproc.$$
}

When started, the program reads 4999 rows from the SQL table, then should start a maximum of 30 threads running the "sendSNMPGET" function.

This is not what is happening.

The console goes crazy and prints lots of:

./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
kikou
kikou
kikou
kikou
kikou
kikou

I have similar functions in other scripts (creating a file for each thread and limiting the count with a variable) that don't have this issue.

2 Answers

#1



Type 'jobs' at the shell prompt. How many jobs are left over from when you previously ran this script? I am guessing that while testing you have left jobs behind after each run, and after running your script a few times there are too many jobs.

"fork: Resource temporarily unavailable" means you are hitting a system limit. I have seen it when out of memory, or you could be out of allowed sub-processes. Type 'ulimit -a' to check the limits.

Also . . . there is nothing in your script that limits temp_ip to just 30 entries either. You could log how big temp_ip is (echo "$temp_ip" | wc -l — note the quotes, or the newlines collapse). You could use head -30, or a LIMIT in the mysql query, to make sure at most 30 entries are returned.

function testfunc {  echo testfunc;  sleep 30;  echo testfunc END; }
temp_ip="a b
c d
e f
g h
i j
k l
m n
o p
q r
s t
u v
w x
y z
a b
c d
e f
g h
i j
k l
m n
o p
q r
s t
u v
w x
y z
a b
c d
e f
g h"

Now repeat this a few times and check 'jobs'.

while read ip codesite; do
    { echo $ip; testfunc $ip & }
done <<< "$temp_ip"

With a process ulimit of 1024, I seem to be able to run this 15 times before trouble starts. I get about 450 jobs in the list before trouble.

The trouble looks like this:

-bash: fork: retry: Resource temporarily unavailable
testfunc
-bash: fork: retry: No child processes
testfunc
-bash: fork: retry: No child processes
testfunc
-bash: fork: retry: No child processes

#2



I would add a sleep to your loop, which should give each fork enough time to finish. It is not the most efficient approach if you are trying to run quickly, but if time does not matter so much, this might be your option.
