How do I increase Twisted's connection pool size?

Time: 2021-07-24 20:50:55

I'm using Twisted 8.1.0 as a socket server engine. Reactor: epoll. Database server: MySQL 5.0.67. OS: Ubuntu Linux 8.10, 32-bit.

In /etc/mysql/my.cnf:

max_connections        = 1000

In the source code:

adbapi.ConnectionPool("MySQLdb", ..., use_unicode=True, charset='utf8', 
                      cp_min=3, cp_max=700, cp_noisy=False)

But in reality I can see only 200 (or fewer) open connections (via SHOW PROCESSLIST) when the application is running under heavy load. That is not enough for my app :(

As far as I can see, this is a limit imposed by the thread pool. Any ideas?

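For what it's worth, one way to confirm that the ceiling is the thread pool rather than MySQL is to log the live thread count from inside the process and compare it against cp_max and against what SHOW PROCESSLIST reports. A minimal sketch (the 5-second interval and function name are arbitrary, not taken from the question):

import threading
from twisted.internet import task

def report_threads():
    # Number of Python-level threads currently alive, pool workers included.
    print('live threads:', threading.active_count())

# Log every 5 seconds while the reactor is running.
task.LoopingCall(report_threads).start(5.0)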

1 solution

#1


As you suspect, this is probably a threading issue. cp_max sets an upper limit on the number of threads in the thread pool; however, your process is very likely running out of memory well below that limit, in your case at around 200 threads. Because each thread has its own stack, the total memory used by your process hits the system limit and no more threads can be created.

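A rough back-of-the-envelope calculation (not part of the original answer, assuming the default 10240K stack shown below) makes that ceiling plausible: a 32-bit Linux process has only about 3 GB of user address space to carve thread stacks out of.

stack_kb = 10240                     # default "ulimit -s"
user_space_kb = 3 * 1024 * 1024      # roughly 3 GB of user address space on 32-bit Linux
print(user_space_kb // stack_kb)     # ~307 threads, before heap, libraries, etc.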

You can check this by adjusting the stack size ulimit setting (I'm using bash) before running your program, e.g.:

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
max nice                        (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 32750
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
max rt priority                 (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 32750
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

You can see that the default stack size is 10240K on my machine, and I have found that I can create about 300 threads with this setting. After adjusting the stack size down to 1024K (using ulimit -s 1024), I can create about 3000 threads.

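If you go the ulimit route, a quick sanity check (not from the original answer) is to read the limit back from inside the server process, since it is inherited from the shell that launched it and glibc uses it as the default stack size for new threads:

import resource

# The soft RLIMIT_STACK is what "ulimit -s" sets (reported here in bytes).
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
if soft == resource.RLIM_INFINITY:
    print('stack limit: unlimited')
else:
    print('stack limit: %d KB' % (soft // 1024))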

You can get some idea of the thread-creation limit on your system using this script:

from _thread import start_new_thread   # "thread" in Python 2
from time import sleep

def sleeper():
    # Block forever; the bare except lets each thread die quietly once the
    # main test has finished (running is False by then).
    try:
        while True:
            sleep(10000)
    except:
        if running:
            raise

def test():
    # Keep starting threads until the interpreter refuses to create another,
    # then report how many were started.
    global running
    n = 0
    running = True
    try:
        while True:
            start_new_thread(sleeper, ())
            n += 1
    except Exception as e:
        running = False
        print('Exception raised:', e)
    print('Biggest number of threads:', n)

if __name__ == '__main__':
    test()

Whether this solves your problem will depend on the memory requirements of the ConnectionPool threads.

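Putting this together for the original ConnectionPool, a minimal sketch (not from the original answer; the database name and credentials are placeholders) would be to lower the per-thread stack before the pool's worker threads are spawned, either with ulimit -s or from inside the process with threading.stack_size(), and then size cp_max to what the process can actually sustain:

import threading
from twisted.enterprise import adbapi

# Must run before the pool's worker threads are created; some platforms
# reject stack sizes they consider too small (ValueError).
threading.stack_size(1024 * 1024)    # 1 MB per thread instead of the 10 MB default

dbpool = adbapi.ConnectionPool(
    "MySQLdb",
    db="mydb", user="me", passwd="secret",   # placeholder connection arguments
    use_unicode=True, charset='utf8',
    cp_min=3, cp_max=700, cp_noisy=False,
)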
