Fast cross-platform interprocess communication in C++

Time: 2021-08-21 14:33:12

I'm looking for a way to get two programs to efficiently transmit a large amount of data to each other, which needs to work on Linux and Windows, in C++. The context here is a P2P network program that acts as a node on the network and runs continuously, and other applications (which could be games hence the need for a fast solution) will use this to communicate with other nodes in the network. If there's a better solution for this I would be interested.

6 Solutions

#1 (15 votes)

boost::asio is a cross-platform library for asynchronous I/O over sockets. You can combine it with, for instance, Google Protocol Buffers for your actual messages.

Boost also provides boost::interprocess for interprocess communication on the same machine, but asio lets you do your communication asynchronously, and you can easily use the same handlers for both local and remote connections.

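As a rough, untested sketch of what the asio side might look like (the port number and buffer size are arbitrary placeholders; a real node would keep reading in a loop and add message framing on top):

```cpp
#include <array>
#include <iostream>
#include <memory>
#include <boost/asio.hpp>

using boost::asio::ip::tcp;

int main() {
    boost::asio::io_context io;

    // Listen on an arbitrary port; a real node would make this configurable.
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 5555));

    acceptor.async_accept([](boost::system::error_code ec, tcp::socket peer) {
        if (ec) return;
        auto sock = std::make_shared<tcp::socket>(std::move(peer));
        auto buf  = std::make_shared<std::array<char, 4096>>();
        // Read whatever the peer sends; framing/deserialization would go here.
        sock->async_read_some(boost::asio::buffer(*buf),
            [sock, buf](boost::system::error_code ec, std::size_t n) {
                if (!ec) std::cout << "received " << n << " bytes\n";
            });
    });

    io.run();  // run the event loop; all handlers execute from here
}
```

Because the handlers only ever see a socket and a buffer, the same code path can serve connections from localhost and from remote peers alike, which is the point about reusing handlers above.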

#2 (6 votes)

I have been using Ice by ZeroC (www.zeroc.com), and it has been fantastic. It is super easy to use, and it's not only cross-platform but also supports many languages (Python, Java, etc.); there is even an embedded version of the library.

#3 (3 votes)

Well, if we can assume the two processes are running on the same machine, then the fastest way for them to transfer large quantities of data back and forth is by keeping the data inside a shared memory region; with that setup, the data is never copied at all, since both processes can access it directly. (If you wanted to go even further, you could combine the two programs into one program, with each former 'process' now running as a thread inside the same process space instead. In that case they would be automatically sharing 100% of their memory with each other)

Of course, just having a shared memory area isn't sufficient in most cases: you would also need some sort of synchronization mechanism so that the processes can read and update the shared data safely, without tripping over each other. The way I would do that would be to create two double-ended queues in the shared memory region (one for each process to send with). Either use a lockless FIFO-queue class, or give each double-ended queue a semaphore/mutex that you can use to serialize pushing data items into the queue and popping data items out of the queue.

(Note that the data items you'd be putting into the queues would only be pointers to the actual data buffers, not the data itself... otherwise you'd be back to copying large amounts of data around, which you want to avoid. It's a good idea to use shared_ptrs instead of plain C pointers, so that "old" data will be automatically freed when the receiving process is done using it.)

Once you have that, the only other thing you'd need is a way for process A to notify process B when it has just put an item into the queue for B to receive (and vice versa)... I typically do that by writing a byte into a pipe that the other process is select()-ing on, to cause the other process to wake up and check its queue, but there are other ways to do it as well.

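A minimal boost::interprocess sketch of the shared-memory-plus-synchronization part is below. The segment name "ipc_demo" and the single fixed 4 KB slot are placeholders; the full design described above would instead keep two queues of buffer offsets (offset_ptrs) inside the segment:

```cpp
#include <cstddef>
#include <cstring>
#include <new>
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/interprocess_condition.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>

namespace bip = boost::interprocess;

// Everything the two processes share lives in this struct, which is
// constructed directly inside the mapped region.
struct SharedSlot {
    bip::interprocess_mutex     mutex;
    bip::interprocess_condition cond;
    bool        ready = false;
    std::size_t size  = 0;
    char        data[4096];     // placeholder payload buffer
};

int main() {
    // Producer side: create and map the shared region, then construct the slot.
    bip::shared_memory_object::remove("ipc_demo");
    bip::shared_memory_object shm(bip::create_only, "ipc_demo", bip::read_write);
    shm.truncate(sizeof(SharedSlot));
    bip::mapped_region region(shm, bip::read_write);
    SharedSlot* slot = new (region.get_address()) SharedSlot;

    const char msg[] = "hello from process A";
    {
        bip::scoped_lock<bip::interprocess_mutex> lock(slot->mutex);
        std::memcpy(slot->data, msg, sizeof(msg));
        slot->size  = sizeof(msg);
        slot->ready = true;
        slot->cond.notify_one();   // plays the role of the wake-up byte on a pipe
    }

    // The consumer process opens the same name with bip::open_only, maps it,
    // then under the same mutex waits on the condition:
    //     while (!slot->ready) slot->cond.wait(lock);
    // and reads slot->data / slot->size without any socket in between.
}
```

Note that this single-slot version still copies the payload into the shared buffer once; to get the pointer-passing behaviour described above, you would allocate the buffers themselves inside the shared segment (e.g. with a managed_shared_memory) and only push offsets through the queues.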

#4 (2 votes)

This is a hard problem.

The bottleneck is the internet, and the fact that your clients might be behind NAT.

If you are not talking about the internet, or if you explicitly don't have clients behind carrier-grade evil NATs, you need to say so.

Because it boils down to: use TCP. Suck it up.

#5 (1 vote)

I would strongly suggest Protocol Buffers on top of TCP or UDP sockets.

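For example, here is a small (assumed) framing helper: TCP is a byte stream, so each serialized message needs its own length prefix, and the concrete message type would come from whatever .proto file you define:

```cpp
#include <cstdint>
#include <string>
#include <google/protobuf/message.h>

// Prefix the serialized message with a 4-byte big-endian length so the
// receiver can find message boundaries in the TCP byte stream.
std::string frame_message(const google::protobuf::Message& msg) {
    std::string body;
    msg.SerializeToString(&body);
    const auto len = static_cast<std::uint32_t>(body.size());

    std::string out;
    out.push_back(static_cast<char>((len >> 24) & 0xFF));
    out.push_back(static_cast<char>((len >> 16) & 0xFF));
    out.push_back(static_cast<char>((len >> 8)  & 0xFF));
    out.push_back(static_cast<char>( len        & 0xFF));
    out += body;
    return out;   // write these bytes to the socket in one go
}
```

The receiver reads 4 bytes, decodes the length, reads exactly that many more bytes, and calls ParseFromString (or ParseFromArray) on the generated message class. Over UDP the datagram boundary already does this job, so the prefix is unnecessary there.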

#6 (1 vote)

So, while the other answers cover part of the problem (socket libraries), they're not telling you about the NAT issue. Rather than have your users tinker with their routers, it's better to use some techniques that should get you through a vaguely sane router with no extra configuration. You need to use all of these to get the best compatibility.

First, the ICE library here implements a NAT traversal technique that works with STUN and/or TURN servers out on the network. You may have to provide some infrastructure for this to work, although there are some public STUN servers.

Second, use both UPnP and NAT-PMP. There is one library here, for example.

Third, use IPv6. Teredo, which is one way of running IPv6 over IPv4, often works when none of the above do, and who knows, your users may have working IPv6 by some other means. It takes very little code to implement this, and it is increasingly important. I find that about half of BitTorrent data arrives over IPv6, for example.
