Freeing allocated memory on Solaris/Linux

Date: 2022-09-06 12:14:22

I wrote a small program and compiled it on both Solaris and Linux to measure the performance of applying this code in my application.

The program works as follows: first I take the base address of the heap region with the sbrk(0) system call. Then I allocate 1.5 GB of memory with malloc() and copy 1.5 GB of content into the allocated region with memcpy() (both are library functions, not system calls). Finally, I free the allocated memory.

After freeing, I call sbrk(0) again to check the heap size.

This is where I get a little confused. On Solaris, even though I freed the allocated memory (nearly 1.5 GB), the heap size of the process stays huge. But when I run the same application on Linux, the heap size after freeing is back to what it was before the 1.5 GB allocation.

I know Solaris does not return freed memory to the system immediately, but I don't know how to tune Solaris so that the memory is given back right after free() is called (free() is a library function, not a system call).

Why don't I have the same problem under Linux?


1 Answer

#1



I found the answer to the question I asked.

Application Memory Allocators:


C and C++ developers must manage memory allocation and deallocation manually. The default memory allocator is in the libc library.

Note that after free() is executed, the freed space is made available for further allocation by the application; it is not returned to the system. Memory is returned to the system only when the application terminates. That is why an application's process size usually never decreases. For a long-running application, however, the process size usually stays in a steady state because the freed memory can be reused. If this is not the case, the application is most likely leaking memory: allocated memory is used but never freed, and the pointer to it is no longer tracked by the application, so it is effectively lost.

The default memory allocator in libc performs poorly for multi-threaded applications in which concurrent malloc or free operations occur frequently, especially multi-threaded C++ applications, because creating and destroying C++ objects is part of the C++ development style. With the default libc allocator, the heap is protected by a single heap lock, so heavy lock contention during malloc and free operations keeps the allocator from scaling. This problem is easy to detect with Solaris tools, as follows.

First, use prstat -mL -p <pid> to see whether the application spends much time waiting on locks; look at the LCK column. For example:

-bash-3.2# prstat -mL -p 14052
   PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/LWPID
 14052 root     0.6 0.7 0.0 0.0 0.0  35 0.0  64 245  13 841   0 test_vector_/721
 14052 root     1.0 0.0 0.0 0.0 0.0  35 0.0  64 287   5 731   0 test_vector_/941
 14052 root     1.0 0.0 0.0 0.0 0.0  35 0.0  64 298   3 680   0 test_vector_/181
 14052 root     1.0 0.1 0.0 0.0 0.0  35 0.0  64 298   3  1K   0 test_vector_/549
 ....

This shows that the application spends about 35 percent of its time waiting for locks.

Then, using the plockstat(1M) tool, find what locks the application is waiting for. For example, trace the application for 5 seconds with process ID 14052, and then filter the output with the c++filt utility for demangling C++ symbol names. (The c++filt utility is provided with the Sun Studio software.) Filtering through c++filt is not needed if the application is not a C++ application.


-bash-3.2#  plockstat -e 5 -p 14052 | c++filt
Mutex block
Count     nsec   Lock                         Caller
-------------------------------------------------------------------------------
 9678 166540561 libc.so.1`libc_malloc_lock   libCrun.so.1`void operator 
 delete(void*)+0x26

 5530 197179848 libc.so.1`libc_malloc_lock   libCrun.so.1`void*operator 
 new(unsigned)+0x38

......

From the preceding output you can see that the heap lock libc_malloc_lock is heavily contended and is a likely cause of the scaling issue. The solution to this scaling problem with the libc allocator is to use an improved memory allocator such as the libumem library (for example, by running the application with LD_PRELOAD=libumem.so.1).

Also visit: http://developers.sun.com/solaris/articles/solaris_memory.html


Thanks for all who tried to answer my question, Santhosh.

