How do you measure the actual memory usage of an application or process?

Time: 2021-09-11 17:00:35

This question is covered here in great detail.

How do you measure the memory usage of an application or process in Linux?

According to the blog article Understanding memory usage on Linux, ps is not an accurate tool to use for this purpose.

Why ps is "wrong"

Depending on how you look at it, ps is not reporting the real memory usage of processes. What it is really doing is showing how much real memory each process would take up if it were the only process running. Of course, a typical Linux machine has several dozen processes running at any given time, which means that the VSZ and RSS numbers reported by ps are almost definitely wrong.

30 solutions

#1 (289)

With ps or similar tools you will only get the amount of memory pages allocated by that process. This number is correct, but:

  • does not reflect the actual amount of memory used by the application, only the amount of memory reserved for it

  • can be misleading if pages are shared, for example by several threads or by using dynamically linked libraries

If you really want to know what amount of memory your application actually uses, you need to run it within a profiler. For example, valgrind can give you insights about the amount of memory used, and, more importantly, about possible memory leaks in your program. The heap profiler tool of valgrind is called 'massif':

Massif is a heap profiler. It performs detailed heap profiling by taking regular snapshots of a program's heap. It produces a graph showing heap usage over time, including information about which parts of the program are responsible for the most memory allocations. The graph is supplemented by a text or HTML file that includes more information for determining where the most memory is being allocated. Massif runs programs about 20x slower than normal.

As explained in the valgrind documentation, you need to run the program through valgrind:

valgrind --tool=massif <executable> <arguments>

Massif writes a dump of memory usage snapshots (e.g. massif.out.12345). These provide (1) a timeline of memory usage and (2), for each snapshot, a record of where in your program memory was allocated. A great graphical tool for analyzing these files is massif-visualizer. But I found ms_print, a simple text-based tool shipped with valgrind, to be of great help already.

To find memory leaks, use the (default) memcheck tool of valgrind.

#2 (207)

Try the pmap command:

sudo pmap -x <process pid>

#3 (174)

Hard to tell for sure, but here are two "close" things that can help.

$ ps aux 

will give you Virtual Size (VSZ)

You can also get detailed stats from /proc file-system by going to /proc/$pid/status

The most important is the VmSize, which should be close to what ps aux gives.

/proc/19420$ cat status
Name:   firefox
State:  S (sleeping)
Tgid:   19420
Pid:    19420
PPid:   1
TracerPid:  0
Uid:    1000    1000    1000    1000
Gid:    1000    1000    1000    1000
FDSize: 256
Groups: 4 6 20 24 25 29 30 44 46 107 109 115 124 1000 
VmPeak:   222956 kB
VmSize:   212520 kB
VmLck:         0 kB
VmHWM:    127912 kB
VmRSS:    118768 kB
VmData:   170180 kB
VmStk:       228 kB
VmExe:        28 kB
VmLib:     35424 kB
VmPTE:       184 kB
Threads:    8
SigQ:   0/16382
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000020001000
SigCgt: 000000018000442f
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
Cpus_allowed:   03
Mems_allowed:   1
voluntary_ctxt_switches:    63422
nonvoluntary_ctxt_switches: 7171
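A quick way to pull just the headline memory figures out of that file, without reading the whole thing, is a grep over the Vm* lines (a sketch using the current shell's own PID via $$; substitute the PID you actually care about):

```shell
# Print peak/current virtual size and peak/current resident set
# for the current shell. The values are reported in kB.
grep -E '^Vm(Peak|Size|HWM|RSS):' /proc/$$/status
```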

#4 (122)

In recent versions of Linux, use the smaps subsystem. For example, for a process with a PID of 1234:

cat /proc/1234/smaps

It will tell you exactly how much memory it is using at that time. More importantly, it will divide the memory into private and shared, so you can tell how much memory your instance of the program is using, without including memory shared between multiple instances of the program.

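As a rough sketch of putting smaps to use: summing the Pss (proportional set size) lines gives a reasonable estimate of the process's fair share of physical memory. The example below uses the current shell's PID ($$) as a stand-in for the PID you want to inspect:

```shell
# Sum all Pss: lines in smaps; the result is this process's
# proportional share of physical memory, in kB.
awk '/^Pss:/ { total += $2 } END { print total " kB" }' /proc/$$/smaps
```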
#5 (116)

There is no easy way to calculate this. But some people have tried to get some good answers:

#6 (80)

Use smem, an alternative to ps that calculates the USS and PSS per process. What you want is probably the PSS.

  • USS - Unique Set Size. This is the amount of unshared memory unique to that process (think of it as U for unique memory). It does not include shared memory. Thus this will under-report the amount of memory a process uses, but is helpful when you want to ignore shared memory.

  • PSS - Proportional Set Size. This is what you want. It adds together the unique memory (USS), along with a proportion of its shared memory divided by the number of other processes sharing that memory. Thus it will give you an accurate representation of how much actual physical memory is being used per process - with shared memory truly represented as shared. Think of the P being for physical memory.

How this compares to RSS as reported by ps and other utilities:

  • RSS - Resident Set Size. This is the amount of shared memory plus unshared memory used by each process. If any processes share memory, this will over-report the amount of memory actually used, because the same shared memory will be counted more than once - appearing again in each other process that shares the same memory. Thus it is fairly unreliable, especially when high-memory processes have a lot of forks - which is common in a server, with things like Apache or PHP(fastcgi/FPM) processes.

Notice: smem can also (optionally) output graphs such as pie charts and the like. IMO you don't need any of that. If you just want to use it from the command line like you might use ps -A v, then you don't need to install the python-matplotlib recommended dependency.

#7 (43)

What about time?

Not the Bash builtin time, but the one which time can find, for example /usr/bin/time

Here's what it covers, on a simple ls:

$ /usr/bin/time --verbose ls
(...)
Command being timed: "ls"
User time (seconds): 0.00
System time (seconds): 0.00
Percent of CPU this job got: 0%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 2372
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 1
Minor (reclaiming a frame) page faults: 121
Voluntary context switches: 2
Involuntary context switches: 9
Swaps: 0
File system inputs: 256
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0

#8 (37)

This is an excellent summary of the tools and problems: archive.org link

I'll quote it, so that more devs will actually read it.

If you want to analyse memory usage of the whole system or to thoroughly analyse memory usage of one application (not just its heap usage), use exmap. For whole system analysis, find processes with the highest effective usage; they take the most memory in practice. Find processes with the highest writable usage; they create the most data (and therefore possibly leak or are very ineffective in their data usage). Select such an application and analyse its mappings in the second listview. See the exmap section for more details. Also use xrestop to check high usage of X resources, especially if the process of the X server takes a lot of memory. See the xrestop section for details.

If you want to detect leaks, use valgrind or possibly kmtrace.

If you want to analyse heap (malloc etc.) usage of an application, either run it in memprof or with kmtrace, profile the application and search the function call tree for biggest allocations. See their sections for more details.

#9 (32)

ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' | cut -d "-" -f1

Use this as root and you can get a clear output for memory usage by each process.

OUTPUT EXAMPLE:

     0.00 Mb COMMAND 
  1288.57 Mb /usr/lib/firefox
   821.68 Mb /usr/lib/chromium/chromium 
   762.82 Mb /usr/lib/chromium/chromium 
   588.36 Mb /usr/sbin/mysqld 
   547.55 Mb /usr/lib/chromium/chromium 
   523.92 Mb /usr/lib/tracker/tracker
   476.59 Mb /usr/lib/chromium/chromium 
   446.41 Mb /usr/bin/gnome
   421.62 Mb /usr/sbin/libvirtd 
   405.11 Mb /usr/lib/chromium/chromium 
   302.60 Mb /usr/lib/chromium/chromium 
   291.46 Mb /usr/lib/chromium/chromium 
   284.56 Mb /usr/lib/chromium/chromium 
   238.93 Mb /usr/lib/tracker/tracker
   223.21 Mb /usr/lib/chromium/chromium 
   197.99 Mb /usr/lib/chromium/chromium 
   194.07 Mb conky 
   191.92 Mb /usr/lib/chromium/chromium 
   190.72 Mb /usr/bin/mongod 
   169.06 Mb /usr/lib/chromium/chromium 
   155.11 Mb /usr/bin/gnome
   136.02 Mb /usr/lib/chromium/chromium 
   125.98 Mb /usr/lib/chromium/chromium 
   103.98 Mb /usr/lib/chromium/chromium 
    93.22 Mb /usr/lib/tracker/tracker
    89.21 Mb /usr/lib/gnome
    80.61 Mb /usr/bin/gnome
    77.73 Mb /usr/lib/evolution/evolution
    76.09 Mb /usr/lib/evolution/evolution
    72.21 Mb /usr/lib/gnome
    69.40 Mb /usr/lib/evolution/evolution
    68.84 Mb nautilus
    68.08 Mb zeitgeist
    60.97 Mb /usr/lib/tracker/tracker
    59.65 Mb /usr/lib/evolution/evolution
    57.68 Mb apt
    55.23 Mb /usr/lib/gnome
    53.61 Mb /usr/lib/evolution/evolution
    53.07 Mb /usr/lib/gnome
    52.83 Mb /usr/lib/gnome
    51.02 Mb /usr/lib/udisks2/udisksd 
    50.77 Mb /usr/lib/evolution/evolution
    50.53 Mb /usr/lib/gnome
    50.45 Mb /usr/lib/gvfs/gvfs
    50.36 Mb /usr/lib/packagekit/packagekitd 
    50.14 Mb /usr/lib/gvfs/gvfs
    48.95 Mb /usr/bin/Xwayland :1024 
    46.21 Mb /usr/bin/gnome
    42.43 Mb /usr/bin/zeitgeist
    42.29 Mb /usr/lib/gnome
    41.97 Mb /usr/lib/gnome
    41.64 Mb /usr/lib/gvfs/gvfsd
    41.63 Mb /usr/lib/gvfs/gvfsd
    41.55 Mb /usr/lib/gvfs/gvfsd
    41.48 Mb /usr/lib/gvfs/gvfsd
    39.87 Mb /usr/bin/python /usr/bin/chrome
    37.45 Mb /usr/lib/xorg/Xorg vt2 
    36.62 Mb /usr/sbin/NetworkManager 
    35.63 Mb /usr/lib/caribou/caribou 
    34.79 Mb /usr/lib/tracker/tracker
    33.88 Mb /usr/sbin/ModemManager 
    33.77 Mb /usr/lib/gnome
    33.61 Mb /usr/lib/upower/upowerd 
    33.53 Mb /usr/sbin/gdm3 
    33.37 Mb /usr/lib/gvfs/gvfsd
    33.36 Mb /usr/lib/gvfs/gvfs
    33.23 Mb /usr/lib/gvfs/gvfs
    33.15 Mb /usr/lib/at
    33.15 Mb /usr/lib/at
    30.03 Mb /usr/lib/colord/colord 
    29.62 Mb /usr/lib/apt/methods/https 
    28.06 Mb /usr/lib/zeitgeist/zeitgeist
    27.29 Mb /usr/lib/policykit
    25.55 Mb /usr/lib/gvfs/gvfs
    25.55 Mb /usr/lib/gvfs/gvfs
    25.23 Mb /usr/lib/accountsservice/accounts
    25.18 Mb /usr/lib/gvfs/gvfsd 
    25.15 Mb /usr/lib/gvfs/gvfs
    25.15 Mb /usr/lib/gvfs/gvfs
    25.12 Mb /usr/lib/gvfs/gvfs
    25.10 Mb /usr/lib/gnome
    25.10 Mb /usr/lib/gnome
    25.07 Mb /usr/lib/gvfs/gvfsd 
    24.99 Mb /usr/lib/gvfs/gvfs
    23.26 Mb /usr/lib/chromium/chromium 
    22.09 Mb /usr/bin/pulseaudio 
    19.01 Mb /usr/bin/pulseaudio 
    18.62 Mb (sd
    18.46 Mb (sd
    18.30 Mb /sbin/init 
    18.17 Mb /usr/sbin/rsyslogd 
    17.50 Mb gdm
    17.42 Mb gdm
    17.09 Mb /usr/lib/dconf/dconf
    17.09 Mb /usr/lib/at
    17.06 Mb /usr/lib/gvfs/gvfsd
    16.98 Mb /usr/lib/at
    16.91 Mb /usr/lib/gdm3/gdm
    16.86 Mb /usr/lib/gvfs/gvfsd
    16.86 Mb /usr/lib/gdm3/gdm
    16.85 Mb /usr/lib/dconf/dconf
    16.85 Mb /usr/lib/dconf/dconf
    16.73 Mb /usr/lib/rtkit/rtkit
    16.69 Mb /lib/systemd/systemd
    13.13 Mb /usr/lib/chromium/chromium 
    13.13 Mb /usr/lib/chromium/chromium 
    10.92 Mb anydesk 
     8.54 Mb /sbin/lvmetad 
     7.43 Mb /usr/sbin/apache2 
     6.82 Mb /usr/sbin/apache2 
     6.77 Mb /usr/sbin/apache2 
     6.73 Mb /usr/sbin/apache2 
     6.66 Mb /usr/sbin/apache2 
     6.64 Mb /usr/sbin/apache2 
     6.63 Mb /usr/sbin/apache2 
     6.62 Mb /usr/sbin/apache2 
     6.51 Mb /usr/sbin/apache2 
     6.25 Mb /usr/sbin/apache2 
     6.22 Mb /usr/sbin/apache2 
     3.92 Mb bash 
     3.14 Mb bash 
     2.97 Mb bash 
     2.95 Mb bash 
     2.93 Mb bash 
     2.91 Mb bash 
     2.86 Mb bash 
     2.86 Mb bash 
     2.86 Mb bash 
     2.84 Mb bash 
     2.84 Mb bash 
     2.45 Mb /lib/systemd/systemd
     2.30 Mb (sd
     2.28 Mb /usr/bin/dbus
     1.84 Mb /usr/bin/dbus
     1.46 Mb ps 
     1.21 Mb openvpn hackthebox.ovpn 
     1.16 Mb /sbin/dhclient 
     1.16 Mb /sbin/dhclient 
     1.09 Mb /lib/systemd/systemd 
     0.98 Mb /sbin/mount.ntfs /dev/sda3 /media/n0bit4/Data 
     0.97 Mb /lib/systemd/systemd 
     0.96 Mb /lib/systemd/systemd 
     0.89 Mb /usr/sbin/smartd 
     0.77 Mb /usr/bin/dbus
     0.76 Mb su 
     0.76 Mb su 
     0.76 Mb su 
     0.76 Mb su 
     0.76 Mb su 
     0.76 Mb su 
     0.75 Mb sudo su 
     0.75 Mb sudo su 
     0.75 Mb sudo su 
     0.75 Mb sudo su 
     0.75 Mb sudo su 
     0.75 Mb sudo su 
     0.74 Mb /usr/bin/dbus
     0.71 Mb /usr/lib/apt/methods/http 
     0.68 Mb /bin/bash /usr/bin/mysqld_safe 
     0.68 Mb /sbin/wpa_supplicant 
     0.66 Mb /usr/bin/dbus
     0.61 Mb /lib/systemd/systemd
     0.54 Mb /usr/bin/dbus
     0.46 Mb /usr/sbin/cron 
     0.45 Mb /usr/sbin/irqbalance 
     0.43 Mb logger 
     0.41 Mb awk { hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" } 
     0.40 Mb /usr/bin/ssh
     0.34 Mb /usr/lib/chromium/chrome
     0.32 Mb cut 
     0.32 Mb cut 
     0.00 Mb [kthreadd] 
     0.00 Mb [ksoftirqd/0] 
     0.00 Mb [kworker/0:0H] 
     0.00 Mb [rcu_sched] 
     0.00 Mb [rcu_bh] 
     0.00 Mb [migration/0] 
     0.00 Mb [lru
     0.00 Mb [watchdog/0] 
     0.00 Mb [cpuhp/0] 
     0.00 Mb [cpuhp/1] 
     0.00 Mb [watchdog/1] 
     0.00 Mb [migration/1] 
     0.00 Mb [ksoftirqd/1] 
     0.00 Mb [kworker/1:0H] 
     0.00 Mb [cpuhp/2] 
     0.00 Mb [watchdog/2] 
     0.00 Mb [migration/2] 
     0.00 Mb [ksoftirqd/2] 
     0.00 Mb [kworker/2:0H] 
     0.00 Mb [cpuhp/3] 
     0.00 Mb [watchdog/3] 
     0.00 Mb [migration/3] 
     0.00 Mb [ksoftirqd/3] 
     0.00 Mb [kworker/3:0H] 
     0.00 Mb [kdevtmpfs] 
     0.00 Mb [netns] 
     0.00 Mb [khungtaskd] 
     0.00 Mb [oom_reaper] 
     0.00 Mb [writeback] 
     0.00 Mb [kcompactd0] 
     0.00 Mb [ksmd] 
     0.00 Mb [khugepaged] 
     0.00 Mb [crypto] 
     0.00 Mb [kintegrityd] 
     0.00 Mb [bioset] 
     0.00 Mb [kblockd] 
     0.00 Mb [devfreq_wq] 
     0.00 Mb [watchdogd] 
     0.00 Mb [kswapd0] 
     0.00 Mb [vmstat] 
     0.00 Mb [kthrotld] 
     0.00 Mb [ipv6_addrconf] 
     0.00 Mb [acpi_thermal_pm] 
     0.00 Mb [ata_sff] 
     0.00 Mb [scsi_eh_0] 
     0.00 Mb [scsi_tmf_0] 
     0.00 Mb [scsi_eh_1] 
     0.00 Mb [scsi_tmf_1] 
     0.00 Mb [scsi_eh_2] 
     0.00 Mb [scsi_tmf_2] 
     0.00 Mb [scsi_eh_3] 
     0.00 Mb [scsi_tmf_3] 
     0.00 Mb [scsi_eh_4] 
     0.00 Mb [scsi_tmf_4] 
     0.00 Mb [scsi_eh_5] 
     0.00 Mb [scsi_tmf_5] 
     0.00 Mb [bioset] 
     0.00 Mb [kworker/1:1H] 
     0.00 Mb [kworker/3:1H] 
     0.00 Mb [kworker/0:1H] 
     0.00 Mb [kdmflush] 
     0.00 Mb [bioset] 
     0.00 Mb [kdmflush] 
     0.00 Mb [bioset] 
     0.00 Mb [jbd2/sda5
     0.00 Mb [ext4
     0.00 Mb [kworker/2:1H] 
     0.00 Mb [kauditd] 
     0.00 Mb [bioset] 
     0.00 Mb [drbd
     0.00 Mb [irq/27
     0.00 Mb [i915/signal:0] 
     0.00 Mb [i915/signal:1] 
     0.00 Mb [i915/signal:2] 
     0.00 Mb [ttm_swap] 
     0.00 Mb [cfg80211] 
     0.00 Mb [kworker/u17:0] 
     0.00 Mb [hci0] 
     0.00 Mb [hci0] 
     0.00 Mb [kworker/u17:1] 
     0.00 Mb [iprt
     0.00 Mb [iprt
     0.00 Mb [kworker/1:0] 
     0.00 Mb [kworker/3:0] 
     0.00 Mb [kworker/0:0] 
     0.00 Mb [kworker/2:0] 
     0.00 Mb [kworker/u16:0] 
     0.00 Mb [kworker/u16:2] 
     0.00 Mb [kworker/3:2] 
     0.00 Mb [kworker/2:1] 
     0.00 Mb [kworker/1:2] 
     0.00 Mb [kworker/0:2] 
     0.00 Mb [kworker/2:2] 
     0.00 Mb [kworker/0:1] 
     0.00 Mb [scsi_eh_6] 
     0.00 Mb [scsi_tmf_6] 
     0.00 Mb [usb
     0.00 Mb [bioset] 
     0.00 Mb [kworker/3:1] 
     0.00 Mb [kworker/u16:1] 

#10 (22)

Besides the solutions listed in the answers, you can use the Linux command top; it provides a dynamic real-time view of the running system, giving CPU and memory usage for the whole system as well as for every program, in percentages:

top

to filter by a program pid:

top -p <PID>

to filter by a program name:

top -b -n 1 | grep <PROCESS NAME>

"top" provides also some fields such as:

VIRT -- Virtual Image (kb): The total amount of virtual memory used by the task.

RES -- Resident size (kb): The non-swapped physical memory a task has used; RES = CODE + DATA.

DATA -- Data+Stack size (kb): The amount of physical memory devoted to other than executable code, also known as the 'data resident set' size or DRS.

SHR -- Shared Mem size (kb): The amount of shared memory used by a task. It simply reflects memory that could be potentially shared with other processes.

Reference here.

#11 (17)

There isn't a single answer for this because you can't pinpoint precisely the amount of memory a process uses. Most processes under Linux use shared libraries. For instance, let's say you want to calculate memory usage for the 'ls' process. Do you count only the memory used by the executable 'ls' (if you could isolate it)? How about libc? Or all the other libs that are required to run 'ls'?

linux-gate.so.1 =>  (0x00ccb000)
librt.so.1 => /lib/librt.so.1 (0x06bc7000)
libacl.so.1 => /lib/libacl.so.1 (0x00230000)
libselinux.so.1 => /lib/libselinux.so.1 (0x00162000)
libc.so.6 => /lib/libc.so.6 (0x00b40000)
libpthread.so.0 => /lib/libpthread.so.0 (0x00cb4000)
/lib/ld-linux.so.2 (0x00b1d000)
libattr.so.1 => /lib/libattr.so.1 (0x00229000)
libdl.so.2 => /lib/libdl.so.2 (0x00cae000)
libsepol.so.1 => /lib/libsepol.so.1 (0x0011a000)

You could argue that they are shared by other processes, but 'ls' can't be run on the system without them being loaded.

Also, if you need to know how much memory a process needs in order to do capacity planning, you have to calculate how much each additional copy of the process uses. I think /proc/PID/status might give you enough info about memory usage at a single point in time. On the other hand, valgrind will give you a better profile of the memory usage throughout the lifetime of the program.

#12 (14)

If your code is in C or C++ you might be able to use getrusage() which returns you various statistics about memory and time usage of your process.

Not all platforms support this though and will return 0 values for the memory-use options.

Instead you can look at the virtual file created in /proc/[pid]/statm (where [pid] is replaced by your process id. You can obtain this from getpid()).

This file will look like a text file with 7 integers. You are probably most interested in the first (all memory use) and sixth (data memory use) numbers in this file.

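As a sketch, the same file can also be read from a shell (the statm fields a C program would parse); the seven fields are counts in pages, so convert with the system page size:

```shell
# /proc/<pid>/statm fields (in pages): size resident shared text lib data dt.
# Here we read the current shell's own statm via $$.
pagesz=$(getconf PAGESIZE)
read -r size resident shared text lib data dt < /proc/$$/statm
echo "total: $(( size * pagesz / 1024 )) kB"
echo "data:  $(( data * pagesz / 1024 )) kB"
```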
#13 (11)

Valgrind can show detailed information but it slows down the target application significantly, and most of the time it changes the behavior of the app.
Exmap was something I didn't know yet, but it seems that you need a kernel module to get the information, which can be an obstacle.

I assume what everyone wants to know WRT "memory usage" is the following...
In Linux, the amount of physical memory a single process might use can be roughly divided into the following categories.

  • M.a anonymous mapped memory
    • .p private
      • .d dirty == malloc/mmapped heap and stack allocated and written memory
      • .c clean == malloc/mmapped heap and stack memory once allocated, written, then freed, but not reclaimed yet
    • .s shared
      • .d dirty == malloc/mmapped heap could get copy-on-write and shared among processes (edited)
      • .c clean == malloc/mmapped heap could get copy-on-write and shared among processes (edited)
  • M.n named mapped memory
    • .p private
      • .d dirty == file mmapped written memory private
      • .c clean == mapped program/library text private mapped
    • .s shared
      • .d dirty == file mmapped written memory shared
      • .c clean == mapped library text shared mapped

The showmap utility included in Android is quite useful:

virtual                    shared   shared   private  private
size     RSS      PSS      clean    dirty    clean    dirty    object
-------- -------- -------- -------- -------- -------- -------- ------------------------------
       4        0        0        0        0        0        0 0:00 0                  [vsyscall]
       4        4        0        4        0        0        0                         [vdso]
      88       28       28        0        0        4       24                         [stack]
      12       12       12        0        0        0       12 7909                    /lib/ld-2.11.1.so
      12        4        4        0        0        0        4 89529                   /usr/lib/locale/en_US.utf8/LC_IDENTIFICATION
      28        0        0        0        0        0        0 86661                   /usr/lib/gconv/gconv-modules.cache
       4        0        0        0        0        0        0 87660                   /usr/lib/locale/en_US.utf8/LC_MEASUREMENT
       4        0        0        0        0        0        0 89528                   /usr/lib/locale/en_US.utf8/LC_TELEPHONE
       4        0        0        0        0        0        0 89527                   /usr/lib/locale/en_US.utf8/LC_ADDRESS
       4        0        0        0        0        0        0 87717                   /usr/lib/locale/en_US.utf8/LC_NAME
       4        0        0        0        0        0        0 87873                   /usr/lib/locale/en_US.utf8/LC_PAPER
       4        0        0        0        0        0        0 13879                   /usr/lib/locale/en_US.utf8/LC_MESSAGES/SYS_LC_MESSAGES
       4        0        0        0        0        0        0 89526                   /usr/lib/locale/en_US.utf8/LC_MONETARY
       4        0        0        0        0        0        0 89525                   /usr/lib/locale/en_US.utf8/LC_TIME
       4        0        0        0        0        0        0 11378                   /usr/lib/locale/en_US.utf8/LC_NUMERIC
    1156        8        8        0        0        4        4 11372                   /usr/lib/locale/en_US.utf8/LC_COLLATE
     252        0        0        0        0        0        0 11321                   /usr/lib/locale/en_US.utf8/LC_CTYPE
     128       52        1       52        0        0        0 7909                    /lib/ld-2.11.1.so
    2316       32       11       24        0        0        8 7986                    /lib/libncurses.so.5.7
    2064        8        4        4        0        0        4 7947                    /lib/libdl-2.11.1.so
    3596      472       46      440        0        4       28 7933                    /lib/libc-2.11.1.so
    2084        4        0        4        0        0        0 7995                    /lib/libnss_compat-2.11.1.so
    2152        4        0        4        0        0        0 7993                    /lib/libnsl-2.11.1.so
    2092        0        0        0        0        0        0 8009                    /lib/libnss_nis-2.11.1.so
    2100        0        0        0        0        0        0 7999                    /lib/libnss_files-2.11.1.so
    3752     2736     2736        0        0      864     1872                         [heap]
      24       24       24        0        0        0       24 [anon]
     916      616      131      584        0        0       32                         /bin/bash
-------- -------- -------- -------- -------- -------- -------- ------------------------------
   22816     4004     3005     1116        0      876     2012 TOTAL

#14 (8)

#!/bin/ksh
#
# Returns total memory used by process $1 in kb.
#
# See /proc/NNNN/smaps if you want to do something
# more interesting.
#

IFS=$'\n'

for line in $(</proc/$1/smaps)
do
   [[ $line =~ ^Size:\s+(\S+) ]] && ((kb += ${.sh.match[1]}))
done

print $kb
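The script above relies on ksh93 features (the ${.sh.match} array); a portable sketch that sums the same Size: fields with awk (using this shell's own PID for illustration) would be:

```shell
# Total mapped virtual memory in kB: the sum of the Size: fields in smaps.
awk '/^Size:/ { kb += $2 } END { print kb " kB" }' /proc/$$/smaps
```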

#15 (8)

I'm using htop; it's a very good console program similar to Windows Task Manager.

#16 (8)

Valgrind is amazing if you have the time to run it. valgrind --tool=massif is The Right Solution.

However, I'm starting to run larger examples, and using valgrind is no longer practical. Is there a way to tell the maximum memory usage (modulo page size and shared pages) of a program?

On a real unix system, /usr/bin/time -v works. On Linux, however, this does not work.

#17 (6)

A good test of the more "real world" usage is to open the application, then run vmstat -s and check the "active memory" statistic. Close the application, wait a few seconds, and run vmstat -s again. However much active memory was freed was evidently in use by the app.

#18 (6)

Three more methods to try:

  1. ps aux --sort pmem
     It sorts the output by %MEM.
  2. ps aux | awk '{print $2, $4, $11}' | sort -k2r | head -n 15
     It sorts using pipes.
  3. top -a
     It starts top sorted by %MEM.

(Extracted from here)

#19 (5)

The command line below will give you the total memory used by the various processes running on the Linux machine, in MB:

ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' | awk '{total=total + $1} END {print total}'

#20 (4)

Get valgrind. Give it your program to run, and it'll tell you plenty about its memory usage.

This would apply only for the case of a program that runs for some time and stops. I don't know if valgrind can get its hands on an already-running process or shouldn't-stop processes such as daemons.

#21 (4)

If the process is not using up too much memory (either because you expect this to be the case, or some other command has given this initial indication), and the process can withstand being stopped for a short period of time, you can try to use the gcore command.

gcore <pid>

Check the size of the generated core file to get a good idea how much memory a particular process is using.

This won't work too well if the process is using hundreds of megs, or gigs, as the core generation could take several seconds or minutes depending on I/O performance. During the core creation the process is stopped (or "frozen") to prevent memory changes. So be careful.

Also make sure the mount point where the core is generated has plenty of disk space and that the system will not react negatively to the core file being created in that particular directory.

#22 (3)

Edit: this works 100% well only when memory consumption increases

If you want to monitor memory usage by a given process (or a group of processes sharing a common name, e.g. google-chrome), you can use my bash script:

while true; do ps aux | awk '{print $5, $11}' | grep chrome | sort -n > /tmp/a.txt; sleep 1; diff /tmp/{b,a}.txt; mv /tmp/{a,b}.txt; done;

This will continuously look for changes and print them.

#23 (3)

If you want something quicker than profiling with Valgrind, and your kernel is older so you can't use smaps, a ps with the options to show the resident set of the process (with ps -o rss,command) can give you a quick and reasonable approximation of the real amount of non-swapped memory being used.

#24


2  

Check this shell script for checking memory usage by application in Linux. It is also available on GitHub, and in a version that does not require paste and bc.

#25


1  

Another vote for valgrind here, but I would like to add that you can use a tool like Alleyoop to help you interpret the results generated by valgrind.

I use the two tools all the time and always have lean, non-leaky code to proudly show for it ;)

#26


1  

While this question seems to be about examining currently running processes, I wanted to see the peak memory used by an application from start to finish. Besides valgrind, you can use tstime, which is much simpler. It measures the "highwater" memory usage (RSS and virtual). From this answer.

#27


1  

I would suggest that you use atop. You can find everything about it on this page. It can provide all the necessary KPIs for your processes and can also capture them to a file.

#28


0  

Use the built-in 'System Monitor' GUI tool available in Ubuntu

#29


0  

Based on answer to a related question.

You may use SNMP to get the memory and CPU usage of a process on a particular device on the network :)

Requirements:

  • the device running the process should have snmp installed and running
  • snmp should be configured to accept requests from the host where you will run the script below (this may be configured in snmpd.conf)
  • you should know the process ID (PID) of the process you want to monitor

Notes:

  • HOST-RESOURCES-MIB::hrSWRunPerfCPU is the number of centi-seconds of the total system's CPU resources consumed by this process. Note that on a multi-processor system, this value may increment by more than one centi-second in one centi-second of real (wall clock) time.

  • HOST-RESOURCES-MIB::hrSWRunPerfMem is the total amount of real system memory allocated to this process.

Process monitoring script:

echo "IP: "
read ip
echo "specify pid: "
read pid
echo "interval in seconds:"
read interval

while [ 1 ]
do
    date
    snmpget -v2c -c public $ip HOST-RESOURCES-MIB::hrSWRunPerfCPU.$pid
    snmpget -v2c -c public $ip HOST-RESOURCES-MIB::hrSWRunPerfMem.$pid
    sleep $interval;
done

#30


0  

/proc/&lt;pid&gt;/numa_maps gives some info there: N0=??? N1=???. But this result might be lower than the actual figure, as it only counts pages that have been touched.
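
To total the per-node page counts from that file, a small parser over the N&lt;node&gt;=&lt;pages&gt; tokens is enough (the sample text below is synthetic, not taken from a real process):

```python
# Sketch: sum pages per NUMA node from /proc/<pid>/numa_maps content.
# Each mapping line may carry N0=, N1=, ... tokens giving pages on each node.
import re

def numa_node_pages(text):
    """Return {node: total pages} summed over all mappings in `text`."""
    totals = {}
    for node, pages in re.findall(r"\bN(\d+)=(\d+)", text):
        totals[int(node)] = totals.get(int(node), 0) + int(pages)
    return totals

sample = ("7f2a00000000 default anon=3 dirty=3 N0=2 N1=1 kernelpagesize_kB=4\n"
          "7f2a00400000 default file=/lib/libc.so.6 mapped=10 N0=10")
print(numa_node_pages(sample))  # -> {0: 12, 1: 1}
```

Feeding it `open(f"/proc/{pid}/numa_maps").read()` gives the per-node resident footprint of a real process, with the same caveat as above: only touched pages are counted.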

#1


289  

With ps or similar tools you will only get the amount of memory pages allocated by that process. This number is correct, but:

使用ps或类似的工具,您只能获得进程分配的内存页的数量。这个数字是正确的,但是:

  • does not reflect the actual amount of memory used by the application, only the amount of memory reserved for it

    不反映应用程序实际使用的内存数量,只反映为它保留的内存数量吗

  • can be misleading if pages are shared, for example by several threads or by using dynamically linked libraries

    如果页面被多个线程共享,或者使用动态链接库,会产生误导吗

If you really want to know what amount of memory your application actually uses, you need to run it within a profiler. For example, valgrind can give you insights about the amount of memory used, and, more importantly, about possible memory leaks in your program. The heap profiler tool of valgrind is called 'massif':

如果您真的想知道应用程序实际使用了多少内存,您需要在分析器中运行它。例如,valgrind可以让您了解所使用的内存量,更重要的是,了解程序中可能存在的内存泄漏。valgrind的堆剖面仪称为massif:

Massif is a heap profiler. It performs detailed heap profiling by taking regular snapshots of a program's heap. It produces a graph showing heap usage over time, including information about which parts of the program are responsible for the most memory allocations. The graph is supplemented by a text or HTML file that includes more information for determining where the most memory is being allocated. Massif runs programs about 20x slower than normal.

Massif是一个堆分析器。它通过获取程序堆的常规快照来执行详细的堆分析。它生成一个图表,显示随着时间的推移堆的使用情况,包括程序中哪些部分负责最多的内存分配。图中还附带了一个文本或HTML文件,其中包含更多信息,用于确定分配的内存最多。Massif运行程序的速度比平常慢了20倍。

As explained in the valgrind documentation, you need to run the program through valgrind:

正如在valgrind文档中解释的,您需要通过valgrind运行程序:

valgrind --tool=massif <executable> <arguments>

Massif writes a dump of memory usage snapshots (e.g. massif.out.12345). These provide, (1) a timeline of memory usage, (2) for each snapshot, a record of where in your program memory was allocated. A great graphical tool for analyzing these files is massif-visualizer. But I found ms_print, a simple text-based tool shipped with valgrind, to be of great help already.

Massif编写内存使用快照的转储(例如Massif .out.12345)。它们(1)为每个快照提供一个内存使用时间线(2),记录程序内存中分配的位置。分析这些文件的一个很好的图形工具是大型可视化工具。但是我发现ms_print这个简单的基于文本的工具已经提供了很大的帮助。

To find memory leaks, use the (default) memcheck tool of valgrind.

要查找内存泄漏,请使用valgrind的(默认)memcheck工具。

#2


207  

Try the pmap command:

试试pmap命令:

sudo pmap -x <process pid>

#3


174  

Hard to tell for sure, but here are two "close" things that can help.

很难确定,但这里有两个“接近”的东西可以帮助你。

$ ps aux 

will give you Virtual Size (VSZ)

将提供虚拟大小(VSZ)

You can also get detailed stats from /proc file-system by going to /proc/$pid/status

您还可以通过/proc/$pid/status从/proc文件系统获得详细的统计信息

The most important is the VmSize, which should be close to what ps aux gives.

最重要的是VmSize,它应该接近ps aux提供的内容。

/proc/19420$ cat status
Name:   firefox
State:  S (sleeping)
Tgid:   19420
Pid:    19420
PPid:   1
TracerPid:  0
Uid:    1000    1000    1000    1000
Gid:    1000    1000    1000    1000
FDSize: 256
Groups: 4 6 20 24 25 29 30 44 46 107 109 115 124 1000 
VmPeak:   222956 kB
VmSize:   212520 kB
VmLck:         0 kB
VmHWM:    127912 kB
VmRSS:    118768 kB
VmData:   170180 kB
VmStk:       228 kB
VmExe:        28 kB
VmLib:     35424 kB
VmPTE:       184 kB
Threads:    8
SigQ:   0/16382
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000020001000
SigCgt: 000000018000442f
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
Cpus_allowed:   03
Mems_allowed:   1
voluntary_ctxt_switches:    63422
nonvoluntary_ctxt_switches: 7171

#4


122  

In recent versions of linux, use the smaps subsystem. For example, for a process with a PID of 1234:

在linux的最新版本中,使用smaps子系统。例如,对于PID为1234的进程:

cat /proc/1234/smaps

It will tell you exactly how much memory it is using at that time. More importantly, it will divide the memory into private and shared, so you can tell how much memory your instance of the program is using, without including memory shared between multiple instances of the program.

它会确切地告诉你它当时使用了多少内存。更重要的是,它将内存划分为private和shared,这样您就可以知道程序的实例正在使用多少内存,而不包括程序的多个实例之间共享的内存。

#5


116  

There is no easy way to calculate this. But some people have tried to get some good answers:

没有简单的计算方法。但有些人试图找到一些好的答案:

#6


80  

Use smem, which is an alternative to ps which calculates the USS and PSS per process. What you want is probably the PSS.

使用smem,这是ps的一种替代方法,它计算每个进程的USS和PSS。你想要的可能是PSS。

  • USS - Unique Set Size. This is the amount of unshared memory unique to that process (think of it as U for unique memory). It does not include shared memory. Thus this will under-report the amount of memory a process uses, but is helpful when you want to ignore shared memory.

    独特的设置大小。这是进程唯一的未共享内存的数量(将其视为唯一内存的U)。它不包括共享内存。因此,这会少报进程使用的内存数量,但是当您想要忽略共享内存时,这是很有帮助的。

  • PSS - Proportional Set Size. This is what you want. It adds together the unique memory (USS), along with a proportion of its shared memory divided by the number of other processes sharing that memory. Thus it will give you an accurate representation of how much actual physical memory is being used per process - with shared memory truly represented as shared. Think of the P being for physical memory.

    比例设置大小。这就是你想要的。它将唯一内存(USS)与共享内存的比例除以共享该内存的其他进程的数量相加。因此,它将为您提供每个进程实际使用了多少物理内存的精确表示—共享内存真正表示为共享内存。P表示物理记忆。

How this compares to RSS as reported by ps and other utilties:

与ps和其他用途报告的RSS相比:

  • RSS - Resident Set Size. This is the amount of shared memory plus unshared memory used by each process. If any processes share memory, this will over-report the amount of memory actually used, because the same shared memory will be counted more than once - appearing again in each other process that shares the same memory. Thus it is fairly unreliable, especially when high-memory processes have a lot of forks - which is common in a server, with things like Apache or PHP(fastcgi/FPM) processes.
  • RSS -常驻设置大小。这是每个进程使用的共享内存和非共享内存的数量。如果任何进程共享内存,那么这将高估实际使用的内存数量,因为相同的共享内存将被多次计算——在共享相同内存的另一个进程中再次出现。因此,它是相当不可靠的,特别是当高内存进程有很多分支时——这在服务器中很常见,比如Apache或PHP(fastcgi/FPM)进程。

Notice: smem can also (optionally) output graphs such as pie charts and the like. IMO you don't need any of that. If you just want to use it from the command line like you might use ps -A v, then you don't need to install the python-matplotlib recommended dependency.

注意:smem还可以(可选地)输出图形,如饼图等。在我看来,你什么都不需要。如果您只想从命令行使用它,就像使用ps -A - v一样,那么您不需要安装python-matplotlib推荐的依赖项。

#7


43  

What about time ?

时间呢?

Not the Bash builtin time but the one you can find with which time, for example /usr/bin/time

不是Bash构建时间,而是您可以找到的时间,例如/usr/bin/time

Here's what it covers, on a simple ls :

这是它所涵盖的,在一个简单的ls:

$ /usr/bin/time --verbose ls
(...)
Command being timed: "ls"
User time (seconds): 0.00
System time (seconds): 0.00
Percent of CPU this job got: 0%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 2372
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 1
Minor (reclaiming a frame) page faults: 121
Voluntary context switches: 2
Involuntary context switches: 9
Swaps: 0
File system inputs: 256
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0

#8


37  

This is an excellent summary of the tools and problems: archive.org link

这是一个工具和问题的优秀总结:archive.org链接

I'll quote it, so that more devs will actually read it.

我将引用它,以便更多的开发人员真正阅读它。

If you want to analyse memory usage of the whole system or to thoroughly analyse memory usage of one application (not just its heap usage), use exmap. For whole system analysis, find processes with the highest effective usage, they take the most memory in practice, find processes with the highest writable usage, they create the most data (and therefore possibly leak or are very ineffective in their data usage). Select such application and analyse its mappings in the second listview. See exmap section for more details. Also use xrestop to check high usage of X resources, especially if the process of the X server takes a lot of memory. See xrestop section for details.

如果您想分析整个系统的内存使用情况,或者想彻底分析一个应用程序的内存使用情况(不仅仅是它的堆使用情况),请使用exmap。对于整个系统的分析,找出最有效使用的流程,他们在实践中使用最多的内存,找到具有最高可写操作的流程,他们创建最多的数据(因此可能会泄漏或在数据使用中非常低效)。选择这样的应用程序并在第二个listview中分析它的映射。有关更多细节,请参见exmap部分。还可以使用xrestop检查X资源的高使用率,特别是当X服务器的进程占用大量内存时。有关详细信息,请参阅xrestop部分。

If you want to detect leaks, use valgrind or possibly kmtrace.

如果您想要检测泄漏,请使用valgrind或kmtrace。

If you want to analyse heap (malloc etc.) usage of an application, either run it in memprof or with kmtrace, profile the application and search the function call tree for biggest allocations. See their sections for more details.

如果您想分析应用程序的堆(malloc等)使用,可以在memprof中运行它,也可以使用kmtrace,对应用程序进行概要分析,并搜索函数调用树以获得最大的分配。更多细节请参见他们的章节。

#9


32  

ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' |cut -d "" -f2 | cut -d "-" -f1

Use this as root and you can get a clear output for memory usage by each process.

OUTPUT EXAMPLE:

     0.00 Mb COMMAND 
  1288.57 Mb /usr/lib/firefox
   821.68 Mb /usr/lib/chromium/chromium 
   762.82 Mb /usr/lib/chromium/chromium 
   588.36 Mb /usr/sbin/mysqld 
   547.55 Mb /usr/lib/chromium/chromium 
   523.92 Mb /usr/lib/tracker/tracker
   476.59 Mb /usr/lib/chromium/chromium 
   446.41 Mb /usr/bin/gnome
   421.62 Mb /usr/sbin/libvirtd 
   405.11 Mb /usr/lib/chromium/chromium 
   302.60 Mb /usr/lib/chromium/chromium 
   291.46 Mb /usr/lib/chromium/chromium 
   284.56 Mb /usr/lib/chromium/chromium 
   238.93 Mb /usr/lib/tracker/tracker
   223.21 Mb /usr/lib/chromium/chromium 
   197.99 Mb /usr/lib/chromium/chromium 
   194.07 Mb conky 
   191.92 Mb /usr/lib/chromium/chromium 
   190.72 Mb /usr/bin/mongod 
   169.06 Mb /usr/lib/chromium/chromium 
   155.11 Mb /usr/bin/gnome
   136.02 Mb /usr/lib/chromium/chromium 
   125.98 Mb /usr/lib/chromium/chromium 
   103.98 Mb /usr/lib/chromium/chromium 
    93.22 Mb /usr/lib/tracker/tracker
    89.21 Mb /usr/lib/gnome
    80.61 Mb /usr/bin/gnome
    77.73 Mb /usr/lib/evolution/evolution
    76.09 Mb /usr/lib/evolution/evolution
    72.21 Mb /usr/lib/gnome
    69.40 Mb /usr/lib/evolution/evolution
    68.84 Mb nautilus
    68.08 Mb zeitgeist
    60.97 Mb /usr/lib/tracker/tracker
    59.65 Mb /usr/lib/evolution/evolution
    57.68 Mb apt
    55.23 Mb /usr/lib/gnome
    53.61 Mb /usr/lib/evolution/evolution
    53.07 Mb /usr/lib/gnome
    52.83 Mb /usr/lib/gnome
    51.02 Mb /usr/lib/udisks2/udisksd 
    50.77 Mb /usr/lib/evolution/evolution
    50.53 Mb /usr/lib/gnome
    50.45 Mb /usr/lib/gvfs/gvfs
    50.36 Mb /usr/lib/packagekit/packagekitd 
    50.14 Mb /usr/lib/gvfs/gvfs
    48.95 Mb /usr/bin/Xwayland :1024 
    46.21 Mb /usr/bin/gnome
    42.43 Mb /usr/bin/zeitgeist
    42.29 Mb /usr/lib/gnome
    41.97 Mb /usr/lib/gnome
    41.64 Mb /usr/lib/gvfs/gvfsd
    41.63 Mb /usr/lib/gvfs/gvfsd
    41.55 Mb /usr/lib/gvfs/gvfsd
    41.48 Mb /usr/lib/gvfs/gvfsd
    39.87 Mb /usr/bin/python /usr/bin/chrome
    37.45 Mb /usr/lib/xorg/Xorg vt2 
    36.62 Mb /usr/sbin/NetworkManager 
    35.63 Mb /usr/lib/caribou/caribou 
    34.79 Mb /usr/lib/tracker/tracker
    33.88 Mb /usr/sbin/ModemManager 
    33.77 Mb /usr/lib/gnome
    33.61 Mb /usr/lib/upower/upowerd 
    33.53 Mb /usr/sbin/gdm3 
    33.37 Mb /usr/lib/gvfs/gvfsd
    33.36 Mb /usr/lib/gvfs/gvfs
    33.23 Mb /usr/lib/gvfs/gvfs
    33.15 Mb /usr/lib/at
    33.15 Mb /usr/lib/at
    30.03 Mb /usr/lib/colord/colord 
    29.62 Mb /usr/lib/apt/methods/https 
    28.06 Mb /usr/lib/zeitgeist/zeitgeist
    27.29 Mb /usr/lib/policykit
    25.55 Mb /usr/lib/gvfs/gvfs
    25.55 Mb /usr/lib/gvfs/gvfs
    25.23 Mb /usr/lib/accountsservice/accounts
    25.18 Mb /usr/lib/gvfs/gvfsd 
    25.15 Mb /usr/lib/gvfs/gvfs
    25.15 Mb /usr/lib/gvfs/gvfs
    25.12 Mb /usr/lib/gvfs/gvfs
    25.10 Mb /usr/lib/gnome
    25.10 Mb /usr/lib/gnome
    25.07 Mb /usr/lib/gvfs/gvfsd 
    24.99 Mb /usr/lib/gvfs/gvfs
    23.26 Mb /usr/lib/chromium/chromium 
    22.09 Mb /usr/bin/pulseaudio 
    19.01 Mb /usr/bin/pulseaudio 
    18.62 Mb (sd
    18.46 Mb (sd
    18.30 Mb /sbin/init 
    18.17 Mb /usr/sbin/rsyslogd 
    17.50 Mb gdm
    17.42 Mb gdm
    17.09 Mb /usr/lib/dconf/dconf
    17.09 Mb /usr/lib/at
    17.06 Mb /usr/lib/gvfs/gvfsd
    16.98 Mb /usr/lib/at
    16.91 Mb /usr/lib/gdm3/gdm
    16.86 Mb /usr/lib/gvfs/gvfsd
    16.86 Mb /usr/lib/gdm3/gdm
    16.85 Mb /usr/lib/dconf/dconf
    16.85 Mb /usr/lib/dconf/dconf
    16.73 Mb /usr/lib/rtkit/rtkit
    16.69 Mb /lib/systemd/systemd
    13.13 Mb /usr/lib/chromium/chromium 
    13.13 Mb /usr/lib/chromium/chromium 
    10.92 Mb anydesk 
     8.54 Mb /sbin/lvmetad 
     7.43 Mb /usr/sbin/apache2 
     6.82 Mb /usr/sbin/apache2 
     6.77 Mb /usr/sbin/apache2 
     6.73 Mb /usr/sbin/apache2 
     6.66 Mb /usr/sbin/apache2 
     6.64 Mb /usr/sbin/apache2 
     6.63 Mb /usr/sbin/apache2 
     6.62 Mb /usr/sbin/apache2 
     6.51 Mb /usr/sbin/apache2 
     6.25 Mb /usr/sbin/apache2 
     6.22 Mb /usr/sbin/apache2 
     3.92 Mb bash 
     3.14 Mb bash 
     2.97 Mb bash 
     2.95 Mb bash 
     2.93 Mb bash 
     2.91 Mb bash 
     2.86 Mb bash 
     2.86 Mb bash 
     2.86 Mb bash 
     2.84 Mb bash 
     2.84 Mb bash 
     2.45 Mb /lib/systemd/systemd
     2.30 Mb (sd
     2.28 Mb /usr/bin/dbus
     1.84 Mb /usr/bin/dbus
     1.46 Mb ps 
     1.21 Mb openvpn hackthebox.ovpn 
     1.16 Mb /sbin/dhclient 
     1.16 Mb /sbin/dhclient 
     1.09 Mb /lib/systemd/systemd 
     0.98 Mb /sbin/mount.ntfs /dev/sda3 /media/n0bit4/Data 
     0.97 Mb /lib/systemd/systemd 
     0.96 Mb /lib/systemd/systemd 
     0.89 Mb /usr/sbin/smartd 
     0.77 Mb /usr/bin/dbus
     0.76 Mb su 
     0.76 Mb su 
     0.76 Mb su 
     0.76 Mb su 
     0.76 Mb su 
     0.76 Mb su 
     0.75 Mb sudo su 
     0.75 Mb sudo su 
     0.75 Mb sudo su 
     0.75 Mb sudo su 
     0.75 Mb sudo su 
     0.75 Mb sudo su 
     0.74 Mb /usr/bin/dbus
     0.71 Mb /usr/lib/apt/methods/http 
     0.68 Mb /bin/bash /usr/bin/mysqld_safe 
     0.68 Mb /sbin/wpa_supplicant 
     0.66 Mb /usr/bin/dbus
     0.61 Mb /lib/systemd/systemd
     0.54 Mb /usr/bin/dbus
     0.46 Mb /usr/sbin/cron 
     0.45 Mb /usr/sbin/irqbalance 
     0.43 Mb logger 
     0.41 Mb awk { hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" } 
     0.40 Mb /usr/bin/ssh
     0.34 Mb /usr/lib/chromium/chrome
     0.32 Mb cut 
     0.32 Mb cut 
     0.00 Mb [kthreadd] 
     0.00 Mb [ksoftirqd/0] 
     0.00 Mb [kworker/0:0H] 
     0.00 Mb [rcu_sched] 
     0.00 Mb [rcu_bh] 
     0.00 Mb [migration/0] 
     0.00 Mb [lru
     0.00 Mb [watchdog/0] 
     0.00 Mb [cpuhp/0] 
     0.00 Mb [cpuhp/1] 
     0.00 Mb [watchdog/1] 
     0.00 Mb [migration/1] 
     0.00 Mb [ksoftirqd/1] 
     0.00 Mb [kworker/1:0H] 
     0.00 Mb [cpuhp/2] 
     0.00 Mb [watchdog/2] 
     0.00 Mb [migration/2] 
     0.00 Mb [ksoftirqd/2] 
     0.00 Mb [kworker/2:0H] 
     0.00 Mb [cpuhp/3] 
     0.00 Mb [watchdog/3] 
     0.00 Mb [migration/3] 
     0.00 Mb [ksoftirqd/3] 
     0.00 Mb [kworker/3:0H] 
     0.00 Mb [kdevtmpfs] 
     0.00 Mb [netns] 
     0.00 Mb [khungtaskd] 
     0.00 Mb [oom_reaper] 
     0.00 Mb [writeback] 
     0.00 Mb [kcompactd0] 
     0.00 Mb [ksmd] 
     0.00 Mb [khugepaged] 
     0.00 Mb [crypto] 
     0.00 Mb [kintegrityd] 
     0.00 Mb [bioset] 
     0.00 Mb [kblockd] 
     0.00 Mb [devfreq_wq] 
     0.00 Mb [watchdogd] 
     0.00 Mb [kswapd0] 
     0.00 Mb [vmstat] 
     0.00 Mb [kthrotld] 
     0.00 Mb [ipv6_addrconf] 
     0.00 Mb [acpi_thermal_pm] 
     0.00 Mb [ata_sff] 
     0.00 Mb [scsi_eh_0] 
     0.00 Mb [scsi_tmf_0] 
     0.00 Mb [scsi_eh_1] 
     0.00 Mb [scsi_tmf_1] 
     0.00 Mb [scsi_eh_2] 
     0.00 Mb [scsi_tmf_2] 
     0.00 Mb [scsi_eh_3] 
     0.00 Mb [scsi_tmf_3] 
     0.00 Mb [scsi_eh_4] 
     0.00 Mb [scsi_tmf_4] 
     0.00 Mb [scsi_eh_5] 
     0.00 Mb [scsi_tmf_5] 
     0.00 Mb [bioset] 
     0.00 Mb [kworker/1:1H] 
     0.00 Mb [kworker/3:1H] 
     0.00 Mb [kworker/0:1H] 
     0.00 Mb [kdmflush] 
     0.00 Mb [bioset] 
     0.00 Mb [kdmflush] 
     0.00 Mb [bioset] 
     0.00 Mb [jbd2/sda5
     0.00 Mb [ext4
     0.00 Mb [kworker/2:1H] 
     0.00 Mb [kauditd] 
     0.00 Mb [bioset] 
     0.00 Mb [drbd
     0.00 Mb [irq/27
     0.00 Mb [i915/signal:0] 
     0.00 Mb [i915/signal:1] 
     0.00 Mb [i915/signal:2] 
     0.00 Mb [ttm_swap] 
     0.00 Mb [cfg80211] 
     0.00 Mb [kworker/u17:0] 
     0.00 Mb [hci0] 
     0.00 Mb [hci0] 
     0.00 Mb [kworker/u17:1] 
     0.00 Mb [iprt
     0.00 Mb [iprt
     0.00 Mb [kworker/1:0] 
     0.00 Mb [kworker/3:0] 
     0.00 Mb [kworker/0:0] 
     0.00 Mb [kworker/2:0] 
     0.00 Mb [kworker/u16:0] 
     0.00 Mb [kworker/u16:2] 
     0.00 Mb [kworker/3:2] 
     0.00 Mb [kworker/2:1] 
     0.00 Mb [kworker/1:2] 
     0.00 Mb [kworker/0:2] 
     0.00 Mb [kworker/2:2] 
     0.00 Mb [kworker/0:1] 
     0.00 Mb [scsi_eh_6] 
     0.00 Mb [scsi_tmf_6] 
     0.00 Mb [usb
     0.00 Mb [bioset] 
     0.00 Mb [kworker/3:1] 
     0.00 Mb [kworker/u16:1] 

#10


22  

Beside the solutions listed in thy answers, you can use the Linux command "top"; It provides a dynamic real-time view of the running system, it gives the CPU and Memory usage, for the whole system along with for every program, in percentage:

除了您的答案中列出的解决方案之外,您还可以使用Linux命令“top”;它提供了运行系统的动态实时视图,它提供了CPU和内存使用情况,包括整个系统和每个程序的百分比:

top

to filter by a program pid:

通过程序pid进行过滤:

top -p <PID>

to filter by a program name:

按程序名过滤:

top | grep <PROCESS NAME>

"top" provides also some fields such as:

“top”还提供了如下一些领域:

VIRT -- Virtual Image (kb) :The total amount of virtual memory used by the task

VIRT——虚拟映像(kb):任务使用的虚拟内存总量。

RES -- Resident size (kb): The non-swapped physical memory a task has used ; RES = CODE + DATA.

RES——驻留大小(kb):任务使用的非交换物理内存;RES =代码+数据。

DATA -- Data+Stack size (kb): The amount of physical memory devoted to other than executable code, also known as the 'data resident set' size or DRS.

数据——数据+堆栈大小(kb):用于除可执行代码之外的物理内存的数量,也称为“数据驻留集”大小或DRS。

SHR -- Shared Mem size (kb): The amount of shared memory used by a task. It simply reflects memory that could be potentially shared with other processes.

SHR——共享Mem大小(kb):任务使用的共享内存数量。它仅仅反映了可能与其他进程共享的内存。

Reference here.

参考这里。

#11


17  

There isn't a single answer for this because you can't pin point precisely the amount of memory a process uses. Most processes under linux use shared libraries. For instance, let's say you want to calculate memory usage for the 'ls' process. Do you count only the memory used by the executable 'ls' ( if you could isolate it) ? How about libc? Or all these other libs that are required to run 'ls'?

对此没有一个单一的答案,因为您无法精确地确定进程使用的内存数量。linux下的大多数进程使用共享库。例如,假设您希望计算“ls”进程的内存使用情况。您是否只计算可执行的“ls”所使用的内存(如果可以隔离它的话)?libc怎么样?或者运行“ls”所需的所有其他lib ?

linux-gate.so.1 =>  (0x00ccb000)
librt.so.1 => /lib/librt.so.1 (0x06bc7000)
libacl.so.1 => /lib/libacl.so.1 (0x00230000)
libselinux.so.1 => /lib/libselinux.so.1 (0x00162000)
libc.so.6 => /lib/libc.so.6 (0x00b40000)
libpthread.so.0 => /lib/libpthread.so.0 (0x00cb4000)
/lib/ld-linux.so.2 (0x00b1d000)
libattr.so.1 => /lib/libattr.so.1 (0x00229000)
libdl.so.2 => /lib/libdl.so.2 (0x00cae000)
libsepol.so.1 => /lib/libsepol.so.1 (0x0011a000)

You could argue that they are shared by other processes, but 'ls' can't be run on the system without them being loaded.

您可能会说它们是由其他进程共享的,但是如果没有加载它们,“ls”就不能在系统上运行。

Also, if you need to know how much memory a process needs in order to do capacity planning, you have to calculate how much each additional copy of the process uses. I think /proc/PID/status might give you enough info of the memory usage AT a single time. On the other hand, valgrind will give you a better profile of the memory usage throughout the lifetime of the program

另外,如果您需要知道一个进程需要多少内存才能进行容量规划,那么您必须计算每个进程使用的额外副本的数量。我认为/proc/PID/status可以在一次中提供足够的内存使用信息。另一方面,valgrind可以让您更好地了解程序整个生命周期内的内存使用情况

#12


14  

If your code is in C or C++ you might be able to use getrusage() which returns you various statistics about memory and time usage of your process.

如果您的代码是在C或c++中,那么您可能可以使用getrusage(),它返回关于进程的内存和时间使用的各种统计信息。

Not all platforms support this though and will return 0 values for the memory-use options.

但并非所有平台都支持这一点,并将为内存使用选项返回0值。

Instead you can look at the virtual file created in /proc/[pid]/statm (where [pid] is replaced by your process id. You can obtain this from getpid()).

相反,您可以查看在/proc/[pid]/statm中创建的虚拟文件(其中[pid]被您的进程id所替代)。

This file will look like a text file with 7 integers. You are probably most interested in the first (all memory use) and sixth (data memory use) numbers in this file.

这个文件看起来像一个包含7个整数的文本文件。您可能对这个文件中的第一个(所有内存使用)和第六个(数据内存使用)号最感兴趣。

#13


11  

Valgrind can show detailed information but it slows down the target application significantly, and most of the time it changes the behavior of the app.
Exmap was something I didn't know yet, but it seems that you need a kernel module to get the information, which can be an obstacle.

Valgrind可以显示详细的信息,但它会显著降低目标应用程序的速度,而且大多数时候它会改变app的行为。Exmap是我还不知道的,但是看起来你需要一个内核模块来获取信息,这可能是一个障碍。

I assume what everyone wants to know WRT "memory usage" is the following...
In linux, the amount of physical memory a single process might use can be roughly divided into following categories.

我假设每个人都想知道WRT的“内存使用”是这样的……在linux中,单个进程可能使用的物理内存可以大致分为以下几类。

  • M.a anonymous mapped memory

    M。一个匿名内存映射

    • .p private
      • .d dirty == malloc/mmapped heap and stack allocated and written memory
      • .d dirty == = malloc/ mmapping堆和分配和写入内存的堆栈
      • .c clean == malloc/mmapped heap and stack memory once allocated, written, then freed, but not reclaimed yet
      • .c clean == malloc/ mmapping堆和堆栈内存一旦分配、写入、释放,但尚未回收
    • .p private .d dirty == malloc/mmap堆和堆栈分配和写入内存。c clean == malloc/mmap堆和堆栈内存,一旦分配,写入,然后释放,但还没有被回收。
    • .s shared
      • .d dirty == malloc/mmaped heap could get copy-on-write and shared among processes (edited)
      • .d dirty == = malloc/mmaped堆可以在进程之间进行写时复制和共享(编辑)
      • .c clean == malloc/mmaped heap could get copy-on-write and shared among processes (edited)
      • .c clean == malloc/mmaped堆可以在进程(编辑)中获得复制和共享
    • .s共享的.d dirty == = malloc/mmaped heap可以在进程间进行写时复制和共享(编辑).c clean == malloc/mmaped heap可以在进程间进行写时复制和共享(编辑)
  • M.n named mapped memory

    M。n指定映射的内存

    • .p private
      • .d dirty == file mmapped written memory private
      • .d dirty ==文件mmapping写入内存私有
      • .c clean == mapped program/library text private mapped
      • .c clean ==映射程序/库文本私有映射
    • .p private .d dirty ==文件mmaps写入内存私有.c clean =映射程序/库文本私有映射
    • .s shared
      • .d dirty == file mmapped written memory shared
      • .d dirty ==文件mmaps写入内存共享
      • .c clean == mapped library text shared mapped
      • .c clean ==映射的库文本共享映射。
    • .s共享的。d dirty == =文件mmapping写入内存共享。c clean =映射库文本共享映射

Utility included in Android called showmap is quite useful

Android中包含的名为showmap的实用程序非常有用

virtual                    shared   shared   private  private
size     RSS      PSS      clean    dirty    clean    dirty    object
-------- -------- -------- -------- -------- -------- -------- ------------------------------
       4        0        0        0        0        0        0 0:00 0                  [vsyscall]
       4        4        0        4        0        0        0                         [vdso]
      88       28       28        0        0        4       24                         [stack]
      12       12       12        0        0        0       12 7909                    /lib/ld-2.11.1.so
      12        4        4        0        0        0        4 89529                   /usr/lib/locale/en_US.utf8/LC_IDENTIFICATION
      28        0        0        0        0        0        0 86661                   /usr/lib/gconv/gconv-modules.cache
       4        0        0        0        0        0        0 87660                   /usr/lib/locale/en_US.utf8/LC_MEASUREMENT
       4        0        0        0        0        0        0 89528                   /usr/lib/locale/en_US.utf8/LC_TELEPHONE
       4        0        0        0        0        0        0 89527                   /usr/lib/locale/en_US.utf8/LC_ADDRESS
       4        0        0        0        0        0        0 87717                   /usr/lib/locale/en_US.utf8/LC_NAME
       4        0        0        0        0        0        0 87873                   /usr/lib/locale/en_US.utf8/LC_PAPER
       4        0        0        0        0        0        0 13879                   /usr/lib/locale/en_US.utf8/LC_MESSAGES/SYS_LC_MESSAGES
       4        0        0        0        0        0        0 89526                   /usr/lib/locale/en_US.utf8/LC_MONETARY
       4        0        0        0        0        0        0 89525                   /usr/lib/locale/en_US.utf8/LC_TIME
       4        0        0        0        0        0        0 11378                   /usr/lib/locale/en_US.utf8/LC_NUMERIC
    1156        8        8        0        0        4        4 11372                   /usr/lib/locale/en_US.utf8/LC_COLLATE
     252        0        0        0        0        0        0 11321                   /usr/lib/locale/en_US.utf8/LC_CTYPE
     128       52        1       52        0        0        0 7909                    /lib/ld-2.11.1.so
    2316       32       11       24        0        0        8 7986                    /lib/libncurses.so.5.7
    2064        8        4        4        0        0        4 7947                    /lib/libdl-2.11.1.so
    3596      472       46      440        0        4       28 7933                    /lib/libc-2.11.1.so
    2084        4        0        4        0        0        0 7995                    /lib/libnss_compat-2.11.1.so
    2152        4        0        4        0        0        0 7993                    /lib/libnsl-2.11.1.so
    2092        0        0        0        0        0        0 8009                    /lib/libnss_nis-2.11.1.so
    2100        0        0        0        0        0        0 7999                    /lib/libnss_files-2.11.1.so
    3752     2736     2736        0        0      864     1872                         [heap]
      24       24       24        0        0        0       24 [anon]
     916      616      131      584        0        0       32                         /bin/bash
-------- -------- -------- -------- -------- -------- -------- ------------------------------
   22816     4004     3005     1116        0      876     2012 TOTAL

#14


8  

#!/bin/ksh
#
# Returns total memory used by process $1 in kb.
#
# See /proc/NNNN/smaps if you want to do something
# more interesting.
#

IFS=$'\n'

for line in $(</proc/$1/smaps)
do
   [[ $line =~ ^Size:\s+(\S+) ]] && ((kb += ${.sh.match[1]}))
done

print $kb

#15


8  

I'm using htop; it's a very good console program similar to Windows Task Manager.

我用htop;这是一个非常好的控制台程序,类似于Windows任务管理器。

#16


8  

Valgrind is amazing if you have the time to run it. valgrind --tool=massif is The Right Solution.

如果你有时间的话,Valgrind很神奇。工具=massif是正确的解决方案。

However, I'm starting to run larger examples, and using valgrind is no longer practical. Is there a way to tell the maximum memory usage (modulo page size and shared pages) of a program?

然而,我开始运行更大的示例,并且使用valgrind不再实用。有没有一种方法可以告诉程序的最大内存使用量(模块页面大小和共享页面)?

On a real unix system, /usr/bin/time -v works. On Linux, however, this does not work.

在真正的unix系统上,/usr/bin/time -v可以工作。然而,在Linux上,这是行不通的。

#17


6  

A good test of the more "real world" usage is to open the application, then run vmstat -s and check the "active memory" statistic. Close the application, wait a few seconds and run vmstat -s again. However much active memory was freed was in evidently in use by the app.

对“真实世界”用法的一个很好的测试是打开应用程序,然后运行vmstat -s并检查“活动内存”统计数据。关闭应用程序,等待几秒钟,再次运行vmstat -s。无论释放了多少活动内存,这款应用程序显然都在使用。

#18


6  

Three more methods to try:

还有三种方法可以尝试:

  1. ps aux --sort pmem
    It sorts the output by %MEM.
  2. 在pmem中,它将输出按%MEM排序。
  3. ps aux | awk '{print $2, $4, $11}' | sort -k2r | head -n 15
    It sorts using pipes.
  4. ps aux | awk '{print $2, $4, $11}' | sort -k2r | head - n15用管道排序。
  5. top -a
    It starts top sorting by %MEM
  6. 开始用%MEM进行top -a排序

(Extracted from here)

(提取)

#19


5  

Below command line will give you the total memory used by the various process running on the Linux machine in MB

下面的命令行将给出在Linux机器上以MB为单位运行的各种进程所使用的总内存

ps -eo size,pid,user,command --sort -size | awk '{ hr=$1/1024 ; printf("%13.2f Mb ",hr) } { for ( x=4 ; x<=NF ; x++ ) { printf("%s ",$x) } print "" }' | awk '{total=total + $1} END {print total}'

#20


4  

Get valgrind. give it your program to run, and it'll tell you plenty about its memory usage.

This would apply only for the case of a program that runs for some time and stops. I don't know if valgrind can get its hands on an already-running process, or on processes that shouldn't stop, such as daemons.

#21


4  

If the process is not using up too much memory (either because you expect this to be the case, or some other command has given this initial indication), and the process can withstand being stopped for a short period of time, you can try to use the gcore command.

gcore <pid>

Check the size of the generated core file to get a good idea how much memory a particular process is using.

This won't work too well if the process is using hundreds of megabytes, or gigabytes, as core generation could take several seconds or minutes depending on I/O performance. During core creation the process is stopped (or "frozen") to prevent memory changes. So be careful.

Also make sure the mount point where the core is generated has plenty of disk space and that the system will not react negatively to the core file being created in that particular directory.

#22


3  

Edit: this works reliably only when memory consumption is increasing

If you want to monitor memory usage by a given process (or a group of processes sharing a common name, e.g. google-chrome), you can use my bash script:

while true; do ps aux | awk '{print $5, $11}' | grep chrome | sort -n > /tmp/a.txt; sleep 1; diff /tmp/{b,a}.txt; mv /tmp/{a,b}.txt; done;

this will continously look for changes and print them.

#23


3  

If you want something quicker than profiling with Valgrind, and your kernel is older so you can't use smaps, a ps with the option to show the process's resident set (ps -o rss,command) can give you a quick and reasonable approximation of the real amount of non-swapped memory being used.
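For example, to check the current shell's own resident set (RSS is reported in kB):

```shell
# RSS in kB plus the command name, for a single PID ($$ = the current shell).
ps -o rss=,comm= -p $$
```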

#24


2  

Check this shell script to check memory usage by application in Linux. It is also available on GitHub, and in a version without paste and bc.

#25


1  

Another vote for valgrind here, but I would like to add that you can use a tool like Alleyoop to help you interpret the results generated by valgrind.

I use the two tools all the time and always have lean, non-leaky code to proudly show for it ;)

#26


1  

While this question seems to be about examining currently running processes, I wanted to see the peak memory used by an application from start to finish. Besides valgrind, you can use tstime, which is much simpler. It measures the "highwater" memory usage (RSS and virtual). From this answer.
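If tstime isn't available, a similar high-water figure is exposed by the kernel itself in /proc/<pid>/status: VmHWM is the peak resident set size and VmPeak the peak virtual size. For example:

```shell
# Peak resident set (VmHWM) and peak virtual size (VmPeak), in kB,
# for the current shell ($$).
grep -E 'VmPeak|VmHWM' /proc/$$/status
```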

#27


1  

I would suggest that you use atop. You can find everything about it on this page. It is capable of providing all the necessary KPI for your processes and it can also capture to a file.

#28


0  

Use the built-in 'System Monitor' GUI tool available in Ubuntu

#29


0  

Based on an answer to a related question.

You may use SNMP to get the memory and CPU usage of a process on a particular device in the network :)

Requirements:

  • the device running the process should have snmp installed and running
  • snmp should be configured to accept requests from where you will run the script below (it may be configured in snmpd.conf)
  • you should know the process id (pid) of the process you want to monitor

Notes:

  • HOST-RESOURCES-MIB::hrSWRunPerfCPU is the number of centi-seconds of the total system's CPU resources consumed by this process. Note that on a multi-processor system, this value may increment by more than one centi-second in one centi-second of real (wall clock) time.

  • HOST-RESOURCES-MIB::hrSWRunPerfMem is the total amount of real system memory allocated to this process.

Process monitoring script:

echo "IP: "
read ip
echo "specify pid: "
read pid
echo "interval in seconds:"
read interval

while [ 1 ]
do
    date
    snmpget -v2c -c public $ip HOST-RESOURCES-MIB::hrSWRunPerfCPU.$pid
    snmpget -v2c -c public $ip HOST-RESOURCES-MIB::hrSWRunPerfMem.$pid
    sleep $interval;
done

#30


0  

/proc/xxx/numa_maps gives some info there: N0=??? N1=???. But this result might be lower than the actual figure, as it only counts pages which have been touched.
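To turn those per-node counts into a single number, one can sum every N<node>= field; a sketch (assuming 4 kB pages, and using the current shell as the example process):

```shell
# Sum the N0=, N1=, ... page counts across all mappings in numa_maps,
# then convert pages to kB (assumes the common 4 kB page size).
awk '{ for (i = 1; i <= NF; i++)
         if ($i ~ /^N[0-9]+=/) { split($i, a, "="); pages += a[2] } }
     END { print pages * 4 " kB (assuming 4 kB pages)" }' /proc/$$/numa_maps
```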
