HTTP/2 for Node.js behind an nginx proxy

时间:2022-06-01 03:11:41

I have a node.js server running behind an nginx proxy. node.js is running an HTTP 1.1 (no SSL) server on port 3000. Both are running on the same server.


I recently set up nginx to use HTTP2 with SSL (h2). It seems that HTTP2 is indeed enabled and working.

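For reference, a setup like the one described typically terminates TLS and HTTP/2 in nginx and proxies plain HTTP/1.1 to the Node backend. A minimal sketch (the server name and certificate paths are placeholders):

```nginx
# Terminate TLS + HTTP/2 (h2) at nginx; proxy HTTP/1.1 to Node on port 3000.
server {
    listen 443 ssl http2;
    server_name example.com;                 # placeholder

    ssl_certificate     /etc/ssl/cert.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/key.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;    # Node's HTTP/1.1 (no SSL) server
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}
```

You can confirm the browser-facing hop is negotiating HTTP/2 with `curl -sI --http2 https://example.com` and checking for an `HTTP/2 200` status line.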

However, I want to know whether the fact that the proxy connection (nginx <--> node.js) is using HTTP 1.1 affects performance. That is, am I missing the HTTP2 benefits in terms of speed because my internal connection is HTTP 1.1?


3 Answers

#1


23  

In general, the biggest immediate benefit of HTTP/2 is the speed increase offered by multiplexing for browser connections, which are often hampered by high latency. It also removes the need for (and expense of) multiple connections, which was a workaround used to achieve similar performance benefits in HTTP/1.1.


For internal connections (e.g. between a webserver acting as a reverse proxy and back-end app servers), the latency is typically very, very low, so the speed benefits of HTTP/2 are negligible. Additionally, each app server will typically already have its own connection, so again there are no gains here.


So you will get most of your performance benefit from supporting HTTP/2 just at the edge. This is a fairly common setup, similar to the way HTTPS is often terminated at the reverse proxy or load balancer rather than going all the way through.


However, there are potential benefits to supporting HTTP/2 all the way through. For example, it allows server push from the application. There are also potential benefits from reduced packet size on that last hop, due to the binary nature of HTTP/2 and header compression; though, like latency, bandwidth is typically less of an issue for internal connections, so the importance of this is arguable. Finally, some argue that a reverse proxy does less work connecting an HTTP/2 connection to another HTTP/2 connection than to an HTTP/1.1 connection, since there is no need to convert one protocol to the other, though I'm sceptical whether that is even noticeable given they are separate connections (unless the proxy is acting simply as a TCP pass-through). So, to me, the main reason for end-to-end HTTP/2 is to allow end-to-end server push, but even that is probably better handled with HTTP Link headers and 103 Early Hints.


For now, while servers are still adding support and server push usage is low (and still being experimented with to define best practice), I would recommend having HTTP/2 only at the edge. At the time of writing, nginx also doesn't support HTTP/2 for proxy_pass connections (though Apache does) and has no plans to add this, and the developers make an interesting point about whether a single HTTP/2 connection might introduce slowness (emphasis mine):


Is HTTP/2 proxy support planned for the near future?


Short answer:


No, there are no plans.


Long answer:


There is almost no sense to implement it, as the main HTTP/2 benefit is that it allows multiplexing many requests within a single connection, thus [almost] removing the limit on the number of simultaneous requests - and there is no such limit when talking to your own backends. Moreover, things may even become worse when using HTTP/2 to backends, due to a single TCP connection being used instead of multiple ones.


On the other hand, implementing HTTP/2 protocol and request multiplexing within a single connection in the upstream module will require major changes to the upstream module.


Due to the above, there are no plans to implement HTTP/2 support in the upstream module, at least in the foreseeable future. If you still think that talking to backends via HTTP/2 is something needed - feel free to provide patches.


Finally, it should also be noted that, while browsers require HTTPS for HTTP/2 (h2), most servers don't, and so could support this final hop over cleartext HTTP (h2c). There would then be no need for end-to-end encryption if TLS is not present on the Node part (as it often isn't). Though, depending on where the backend server sits in relation to the front-end server, using HTTPS even for this connection is perhaps something that should be considered.


#2


3  

NGINX does not support HTTP/2 as a client. As they're running on the same server and there is no latency or bandwidth constraint, I don't think it would make a huge difference either way. I would make sure you are using keepalives between nginx and node.js.


https://www.nginx.com/blog/tuning-nginx/#keepalive

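As a sketch of what that looks like: keepalive between nginx and an upstream requires an `upstream` block with the `keepalive` directive, plus `proxy_http_version 1.1` and a cleared `Connection` header (the port and connection count here are illustrative):

```nginx
# Reuse idle connections from nginx to the Node backend.
upstream node_backend {
    server 127.0.0.1:3000;
    keepalive 16;                       # idle connections to keep open
}

server {
    # ...
    location / {
        proxy_pass http://node_backend;
        proxy_http_version 1.1;         # keepalive requires HTTP/1.1
        proxy_set_header Connection ""; # strip the default "Connection: close"
    }
}
```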

#3


2  

You are not losing performance in general, because nginx matches the request multiplexing the browser does over HTTP/2 by creating multiple simultaneous requests to your Node backend. (One of the major performance improvements of HTTP/2 is allowing the browser to make multiple simultaneous requests over the same connection, whereas in HTTP/1.1 only one request at a time is possible per connection. Browsers limit the number of connections, too.)

