How to reliably measure the network bandwidth used by a process

Time: 2021-04-06 21:08:51

I have developed an application and I want to measure how much network bandwidth it consumes in some typical test cases.

I found a few applications like nethogs, etc., but I am not sure how accurate their reports are!

I would like some way to measure this very accurately, as the results need to go into a report for a conference.

I'm willing to write a customized solution if someone guides me on how to do it!

I want something where I can run the monitoring program alongside my target application and get network usage statistics: cumulative bytes sent/received, maximum usage, average usage, etc.

5 solutions

#1


4  

Can the application be isolated on a single machine?
Does anything else have to run on the system?

If a system can be dedicated this way, periodically grab the last line from /proc/net/netstat and subtract the corresponding values of InOctets and OutOctets.

This system, Fedora 15, shows this after 23 days of uptime:

TcpExt: SyncookiesSent SyncookiesRecv SyncookiesFailed EmbryonicRsts PruneCalled RcvPruned OfoPruned OutOfWindowIcmps LockDroppedIcmps ArpFilter TW TWRecycled TWKilled PAWSPassive PAWSActive PAWSEstab DelayedACKs DelayedACKLocked DelayedACKLost ListenOverflows ListenDrops TCPPrequeued TCPDirectCopyFromBacklog TCPDirectCopyFromPrequeue TCPPrequeueDropped TCPHPHits TCPHPHitsToUser TCPPureAcks TCPHPAcks TCPRenoRecovery TCPSackRecovery TCPSACKReneging TCPFACKReorder TCPSACKReorder TCPRenoReorder TCPTSReorder TCPFullUndo TCPPartialUndo TCPDSACKUndo TCPLossUndo TCPLoss TCPLostRetransmit TCPRenoFailures TCPSackFailures TCPLossFailures TCPFastRetrans TCPForwardRetrans TCPSlowStartRetrans TCPTimeouts TCPRenoRecoveryFail TCPSackRecoveryFail TCPSchedulerFailed TCPRcvCollapsed TCPDSACKOldSent TCPDSACKOfoSent TCPDSACKRecv TCPDSACKOfoRecv TCPAbortOnSyn TCPAbortOnData TCPAbortOnClose TCPAbortOnMemory TCPAbortOnTimeout TCPAbortOnLinger TCPAbortFailed TCPMemoryPressures TCPSACKDiscard TCPDSACKIgnoredOld TCPDSACKIgnoredNoUndo TCPSpuriousRTOs TCPMD6NotFound TCPMD5Unexpected TCPSackShifted TCPSackMerged TCPSackShiftFallback TCPBacklogDrop TCPMinTTLDrop TCPDeferAcceptDrop IPReversePathFilter TCPTimeWaitOverflow TCPReqQFullDoCookies TCPReqQFullDrop
TcpExt: 0 0 0 0 0 0 0 0 10 0 67116 0 0 0 0 8 117271 53 18860 0 0 102295 23352211 87967244 0 16861098 118195 893786 881659 0 29 10 0 0 0 9 10 16 12 2321 21 0 1 156 39 940 13 921 8015 0 1 2 0 18461 22 941 0 0 2974 15422 0 709 0 0 0 1 8 119 3 0 0 0 0 25231 0 0 0 4 0 0 0
IpExt: InNoRoutes InTruncatedPkts InMcastPkts OutMcastPkts InBcastPkts OutBcastPkts InOctets OutOctets InMcastOctets OutMcastOctets InBcastOctets OutBcastOctets
IpExt: 0 0 25308 48 725 1 24434248973 4218365129 2181277 13241 365505 65

Of course, that format is unfriendly for here, but fairly nice for scripting languages to deal with. You can see the depth and variety of information! The last line shows that this system has read 24,434,248,973 bytes and written 4,218,365,129. (It is on day nine of scraping a large website.)

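A minimal sketch of this approach, assuming a dedicated test machine and a POSIX shell with awk (the wrapper script and the lookup of the InOctets/OutOctets columns by name are my own additions, not part of any existing tool). It prints the cumulative bytes received/sent while the application runs; dividing by the run time gives an average rate.

#!/bin/sh
# measure.sh -- rough sketch: snapshot InOctets/OutOctets around a test run.
# Usage: ./measure.sh ./my_application --its-args
# These are whole-system counters, so the machine should otherwise be idle.

read_octets() {
    # /proc/net/netstat prints an "IpExt:" header line (column names) followed
    # by an "IpExt:" value line; look the columns up by name, since positions
    # vary between kernel versions.
    awk '$1 == "IpExt:" && !seen { for (i = 2; i <= NF; i++) name[i] = $i; seen = 1; next }
         $1 == "IpExt:"          { for (i = 2; i <= NF; i++) val[name[i]] = $i }
         END { print val["InOctets"], val["OutOctets"] }' /proc/net/netstat
}

before=$(read_octets)
"$@"                                  # run the application under test
after=$(read_octets)

in0=${before% *};  out0=${before#* }
in1=${after% *};   out1=${after#* }
echo "received: $((in1 - in0)) bytes, sent: $((out1 - out0)) bytes"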

#2


0  

Poking around some more, I see that procfs contains what appears to be per-process net I/O stats.

[wally@lenovotower ~]$ cat /proc/32089/net/dev
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
    lo:  622834    6102    0    0    0     0          0         0   622834    6102    0    0    0     0       0          0
  eth0: 3290609241 20752766    0    0    0     0          0         0 161708339 16831446    0    0    0     0       0          0
virbr0:       0       0    0    0    0     0          0         0        0       0    0    0    0     0       0          0

If this is a long-running process, then this could be used to calculate bandwidth used.

--- edit --- Despite the path, as others have pointed out, these counters are the same for all processes, and therefore obviously not per-process network I/O statistics.

#3


0  

The only way of getting per-task network I/O statistics would be to use some kind of taskstats interface (based on netlink). Unfortunately, it accounts for everything you can imagine BUT network information. I have written a small patch that accounts bytes on socket writes/reads and adds two entries (for tx/rx) to taskstats, to get this kind of info from my systems.

Includes:

Signed-off-by: Rafael David Tinoco <tinhocas@gmail.com>
diff --git a/include/linux/taskstats.h b/include/linux/taskstats.h
index 341dddb..b0c5990 100644
--- a/include/linux/taskstats.h
+++ b/include/linux/taskstats.h
@@ -163,6 +163,10 @@ struct taskstats {
    /* Delay waiting for memory reclaim */
    __u64   freepages_count;
    __u64   freepages_delay_total;
+
+   /* Per-task network I/O accounting */
+   __u64   read_net_bytes;         /* bytes of socket read I/O */
+   __u64   write_net_bytes;        /* bytes of socket write I/O */
 };

And source code:

Signed-off-by: Rafael David Tinoco <tinhocas@gmail.com>
diff --git a/include/linux/task_io_accounting.h b/include/linux/task_io_accounting.h
index bdf855c..bd45b92 100644
--- a/include/linux/task_io_accounting.h
+++ b/include/linux/task_io_accounting.h
@@ -41,5 +41,12 @@ struct task_io_accounting {
     * information loss in doing that.
     */
    u64 cancelled_write_bytes;
+
+   /* The number of bytes which this task has read from a socket */
+   u64 read_net_bytes;
+
+   /* The number of bytes which this task has written to a socket */
+   u64 write_net_bytes;
+
 #endif /* CONFIG_TASK_IO_ACCOUNTING */
 };
diff --git a/include/linux/task_io_accounting_ops.h b/include/linux/task_io_accounting_ops.h
index 4d090f9..ee8416f 100644
--- a/include/linux/task_io_accounting_ops.h
+++ b/include/linux/task_io_accounting_ops.h
@@ -12,6 +12,11 @@ static inline void task_io_account_read(size_t bytes)
    current->ioac.read_bytes += bytes;
 }

+static inline void task_io_account_read_net(size_t bytes)
+{
+   current->ioac.read_net_bytes += bytes;
+}
+
 /*
  * We approximate number of blocks, because we account bytes only.
  * A 'block' is 512 bytes
@@ -26,6 +31,11 @@ static inline void task_io_account_write(size_t bytes)
    current->ioac.write_bytes += bytes;
 }

+static inline void task_io_account_write_net(size_t bytes)
+{
+   current->ioac.write_net_bytes += bytes;
+}
+
 /*
  * We approximate number of blocks, because we account bytes only.
  * A 'block' is 512 bytes
@@ -59,6 +69,10 @@ static inline void task_io_account_read(size_t bytes)
 {
 }

+static inline void task_io_account_read_net(size_t bytes)
+{
+}
+
 static inline unsigned long task_io_get_inblock(const struct task_struct *p)
 {
    return 0;
@@ -68,6 +82,10 @@ static inline void task_io_account_write(size_t bytes)
 {
 }

+static inline void task_io_account_write_net(size_t bytes)
+{
+}
+
 static inline unsigned long task_io_get_oublock(const struct task_struct *p)
 {
    return 0;
diff --git a/include/linux/taskstats.h b/include/linux/taskstats.h
index 341dddb..b0c5990 100644
--- a/include/linux/taskstats.h
+++ b/include/linux/taskstats.h
@@ -163,6 +163,10 @@ struct taskstats {
    /* Delay waiting for memory reclaim */
    __u64   freepages_count;
    __u64   freepages_delay_total;
+
+   /* Per-task network I/O accounting */
+   __u64   read_net_bytes;         /* bytes of socket read I/O */
+   __u64   write_net_bytes;        /* bytes of socket write I/O */
 };


diff --git a/kernel/tsacct.c b/kernel/tsacct.c
index 00d59d0..b279e69 100644
--- a/kernel/tsacct.c
+++ b/kernel/tsacct.c
@@ -104,10 +104,14 @@ void xacct_add_tsk(struct taskstats *stats, struct task_struct *p)
    stats->read_bytes       = p->ioac.read_bytes;
    stats->write_bytes      = p->ioac.write_bytes;
    stats->cancelled_write_bytes = p->ioac.cancelled_write_bytes;
+   stats->read_net_bytes   = p->ioac.read_net_bytes;
+   stats->write_net_bytes  = p->ioac.write_net_bytes;
 #else
    stats->read_bytes       = 0;
    stats->write_bytes      = 0;
    stats->cancelled_write_bytes = 0;
+   stats->read_net_bytes   = 0;
+   stats->write_net_bytes  = 0;
 #endif
 }
 #undef KB
diff --git a/net/socket.c b/net/socket.c
index 769c386..dd7dbb6 100644
--- a/net/socket.c
+++ b/net/socket.c
@@ -87,6 +87,7 @@
 #include <linux/wireless.h>
 #include <linux/nsproxy.h>
 #include <linux/magic.h>
+#include <linux/task_io_accounting_ops.h>

 #include <asm/uaccess.h>
 #include <asm/unistd.h>
@@ -538,6 +539,7 @@ EXPORT_SYMBOL(sock_tx_timestamp);
 static inline int __sock_sendmsg(struct kiocb *iocb, struct socket *sock,
                             struct msghdr *msg, size_t size)
 {
+   int ret;
    struct sock_iocb *si = kiocb_to_siocb(iocb);
    int err;

@@ -550,7 +552,12 @@ static inline int __sock_sendmsg(struct kiocb *iocb, struct socket *sock,
    if (err)
            return err;

-   return sock->ops->sendmsg(iocb, sock, msg, size);
+   ret = sock->ops->sendmsg(iocb, sock, msg, size);
+
+   if (ret > 0)
+           task_io_account_write_net(ret);
+
+   return ret;
 }

 int sock_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
@@ -666,6 +673,7 @@ EXPORT_SYMBOL_GPL(sock_recv_ts_and_drops);
 static inline int __sock_recvmsg_nosec(struct kiocb *iocb, struct socket *sock,
                                   struct msghdr *msg, size_t size, int flags)
 {
+   int ret = 0;
    struct sock_iocb *si = kiocb_to_siocb(iocb);

    si->sock = sock;
@@ -674,7 +682,12 @@ static inline int __sock_recvmsg_nosec(struct kiocb *iocb, struct socket *sock,
    si->size = size;
    si->flags = flags;

-   return sock->ops->recvmsg(iocb, sock, msg, size, flags);
+   ret = sock->ops->recvmsg(iocb, sock, msg, size, flags);
+
+   if (ret > 0)
+           task_io_account_read_net(ret);
+
+   return ret;
 }

 static inline int __sock_recvmsg(struct kiocb *iocb, struct socket *sock,
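
To read these counters from user space you would query the taskstats interface over netlink. A hedged sketch of one way to do that, using the kernel's sample client getdelays.c (found under Documentation/accounting/ in kernels of this vintage; the path, the flags, and the need for root may differ on your tree, and the client itself would need a small change to print the two new fields):

# from the root of the patched kernel source tree, with matching headers installed
gcc -o getdelays Documentation/accounting/getdelays.c
sudo ./getdelays -i -p <pid>    # -i requests the per-task I/O accounting block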

#4


0  

vnstat is a simple tool to check internet bandwidth usage.

Here is the command to install it:

sudo apt-get install vnstat

and to run vnstat you need to run the command below:

vnstat

Here (check "Monitor Network Bandwidth Usage") are some more options for checking daily, weekly, monthly, and top-10-day network bandwidth usage.

Hope this will help everyone.

#5


0  

Running cat /proc/net/dev is one of the ways to find out bandwidth usage:

gaddenna@gaddenna-Vostro-3546:~$ cat /proc/net/dev 
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
 wlan1: 9420966650 7703510    0    1    0     0          0         0 673178457 4296602    0    0    0     0       0          0
  eth2: 7961371946 6849173    0   10    0     0          0    167030 446826449 3289015    0    0    0     0       0          0
    lo: 48209054   473527     0    0    0     0          0         0 48209054  473527     0    0    0     0       0          0
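
A rough sketch of turning two /proc/net/dev snapshots into byte counts for a single interface (eth2 and the 60-second window are placeholders; on some systems the interface name and the first counter are printed without a separating space, which would need extra handling):

rx0=$(awk '$1 == "eth2:" { print $2 }'  /proc/net/dev)    # bytes received so far
tx0=$(awk '$1 == "eth2:" { print $10 }' /proc/net/dev)    # bytes transmitted so far
sleep 60                        # or: run the application under test here
rx1=$(awk '$1 == "eth2:" { print $2 }'  /proc/net/dev)
tx1=$(awk '$1 == "eth2:" { print $10 }' /proc/net/dev)
echo "received: $((rx1 - rx0)) bytes, sent: $((tx1 - tx0)) bytes in 60 s"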
