Why are event-based network applications inherently faster than threaded ones?

Date: 2022-06-10 02:32:30

We've all read the benchmarks and know the facts - event-based asynchronous network servers are faster than their threaded counterparts. Think lighttpd or Zeus vs. Apache or IIS. Why is that?

4 Answers

#1


5  

I think event-based vs. thread-based is not the real question - it is a non-blocking, multiplexed I/O (selectable sockets) solution vs. a thread pool solution.

In the first case you handle all input as it comes in, regardless of what is consuming it - so there is no blocking on the reads - a single 'listener'. The single listener thread passes data on to worker threads of various types, rather than dedicating one thread to each connection. Again, there is no blocking on writing any of the data - the data handler can just run with it separately. Because this solution is mostly I/O reads/writes it doesn't occupy much CPU time - so your application can use that time to do whatever it wants.

In a thread pool solution you have individual threads handling each connection, so they have to share time by context switching in and out - each one 'listening'. In this solution the CPU and I/O ops are in the same thread - which gets a time slice - so you end up waiting per thread for I/O ops to complete (blocking), something that could traditionally be done without using CPU time.
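A minimal sketch of the first case - one listener multiplexing several connections - using Python's standard selectors module (the socketpair connections and messages are made up for illustration):

```python
import selectors
import socket

# One 'listener' multiplexes many sockets: register each socket with the
# selector, then block in exactly one place (sel.select) until any of them
# is ready. There is no per-connection thread and no blocking read.
sel = selectors.DefaultSelector()

# Simulate two independent client connections with socketpairs.
pairs = []
for i in range(2):
    server_end, client_end = socket.socketpair()
    server_end.setblocking(False)
    sel.register(server_end, selectors.EVENT_READ, data=i)
    pairs.append((server_end, client_end))

# Clients write whenever they like; the loop only touches ready sockets.
pairs[1][1].sendall(b"hello from conn 1")
pairs[0][1].sendall(b"hello from conn 0")

received = {}
while len(received) < 2:
    for key, _ in sel.select(timeout=1):
        # This recv cannot block: select() reported data waiting.
        received[key.data] = key.fileobj.recv(1024)

for server_end, client_end in pairs:
    sel.unregister(server_end)
    server_end.close()
    client_end.close()

print(received[0], received[1])  # b'hello from conn 0' b'hello from conn 1'
```

The handoff to worker threads described above would go where the recv result is stored; the point is that the single listener never blocks on any one connection.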

Google for non-blocking I/O for more detail - and you can probably find some comparisons vs. thread pools too.

(if anyone can clarify these points, feel free)

#2


3  

Event-driven applications are not inherently faster.

From Why Events Are a Bad Idea (for High-Concurrency Servers):

We examine the claimed strengths of events over threads and show that the
weaknesses of threads are artifacts of specific threading implementations
and not inherent to the threading paradigm. As evidence, we present a
user-level thread package that scales to 100,000 threads and achieves
excellent performance in a web server.

This was in 2003. Surely the state of threading on modern OSs has improved since then.

Writing the core of an event-based server means re-inventing cooperative multitasking (Windows 3.1 style) in your code, most likely on an OS that already supports proper pre-emptive multitasking, and without the benefit of transparent context switching. This means that you have to manage state on the heap that would normally be implied by the instruction pointer or stored in a stack variable. (If your language has them, closures ease this pain significantly. Trying to do this in C is a lot less fun.)
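As a toy sketch of that heap-managed state (the handle_request and on_io_complete names are hypothetical): in a threaded server the per-request variables would just sit on the stack across a blocking read, while in an event loop a closure has to carry them to the completion callback.

```python
# Stand-in for an event loop's completion queue (an assumption for
# illustration -- a real loop would key callbacks by file descriptor).
pending = []

def handle_request(request_id, data):
    # In a threaded server, this state would live in stack variables and
    # survive a blocking read automatically. Here we must package it up.
    parsed = data.upper()

    def on_io_complete(result):
        # request_id and parsed are captured by the closure -- state that
        # would otherwise be implied by the instruction pointer and stack.
        return f"req {request_id}: {parsed} -> {result}"

    pending.append(on_io_complete)  # "wait" for I/O without blocking

handle_request(1, "ping")
handle_request(2, "pong")

# Later, the loop fires completions; each callback still has its own state.
results = [cb("done") for cb in pending]
print(results)  # ['req 1: PING -> done', 'req 2: PONG -> done']
```

In C there are no closures, so each callback would instead take a pointer to a manually allocated context struct - which is exactly the pain the answer describes.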

This also means you gain all of the caveats cooperative multitasking implies. If one of your event handlers takes a while to run for any reason, it stalls that event thread. Totally unrelated requests lag. Even lengthy CPU-intensive operations have to be sent somewhere else to avoid this. When you're talking about the core of a high-concurrency server, 'lengthy operation' is a relative term, on the order of microseconds for a server expected to handle 100,000 requests per second. I hope the virtual memory system never has to pull pages from disk for you!
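The stall is easy to demonstrate with a toy run-to-completion loop (the 50 ms 'lengthy operation' is an arbitrary illustrative figure, not a real server measurement):

```python
import time

def slow_handler():
    time.sleep(0.05)  # stands in for a CPU-bound step or a page fault

def fast_handler():
    pass  # an unrelated request that should be instant

# A run-to-completion event loop: handlers execute one at a time on the
# single event thread, so everything queued behind the slow one waits.
queue = [slow_handler, fast_handler, fast_handler]

start = time.monotonic()
latencies = []
for handler in queue:
    handler()
    latencies.append(time.monotonic() - start)

# The fast handlers' latency includes the slow handler's full run time,
# even though the requests are totally unrelated.
```

A preemptive threaded scheduler would have let the fast requests run during the sleep; the event thread cannot.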

Getting good performance from an event-based architecture can be tricky, especially when you consider latency and not just throughput. (Of course, there are plenty of mistakes you can make with threads as well. Concurrency is still hard.)

A couple of important questions for the author of a new server application:

  • How do threads perform on the platforms you intend to support today? Are they going to be your bottleneck?

  • If you're still stuck with a bad thread implementation: why is nobody fixing this?

#3


1  

It really depends what you're doing; event-based programming is certainly tricky for nontrivial applications. Being a web server is really a very trivial, well-understood problem, and both event-driven and threaded models work pretty well on modern OSs.

Correctly developing more complex server applications in an event model is generally pretty tricky - threaded applications are much easier to write. This may be the deciding factor rather than performance.

#4


0  

It isn't really about the threads. It is about the way the threads are used to service requests. For something like lighttpd you have a single thread that services multiple connections via events. For older versions of Apache you had a process per connection, and each process woke up on incoming data, so you ended up with a very large number of processes when there were lots of requests. Now, however, with the event MPM, Apache is event-based as well; see Apache MPM event.
