Cluster Concurrent Connection Limiting

1. Feature Description

  • Enforces a concurrent connection limit across an njet cluster.

  • The implementation can be viewed as a combination of the limit_conn_zone and limit_conn directives: the zone definition and the connection cap are configured in a single directive (see the comparison sketch below).

  • The limiting behavior itself is consistent with the standard limit_conn functionality.
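
For comparison, with the stock limit_conn module a single-node setup splits the zone definition and the cap across two directives, while cluster_limit_conn folds both into one. A minimal sketch (zone names and sizes are illustrative):

# stock single-node limiting: zone and cap are two separate directives
http {
    limit_conn_zone $binary_remote_addr zone=addr:10m;
    server {
        location / {
            limit_conn addr 10;
        }
    }
}

# cluster-wide equivalent: one directive defines the zone and the cap
http {
    cluster_limit_conn $binary_remote_addr zone=cluster1:10m conn=10;
}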

2. Module Dependencies

njet.conf:

load_module  modules/njt_http_cluster_limit_conn_module.so;

3. Directives

3.1 cluster_limit_conn_dry_run

  • Function: When set to on, the limit is not actually enforced, but over-limit requests are still counted; when set to off, connections are subject to the limit (see the sketch below).
Syntax: cluster_limit_conn_dry_run on | off;
Default: cluster_limit_conn_dry_run off;
Context: http, server, location
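
A minimal dry-run sketch at location level, reusing the upstream back from the sample configuration in section 4 (zone name and size are illustrative):

location / {
    cluster_limit_conn $binary_remote_addr zone=cluster_dry:1m conn=10;
    # count and log requests over the cap, but do not reject them
    cluster_limit_conn_dry_run on;
    proxy_pass http://back;
}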

3.2 cluster_limit_conn_log_level

  • Function: Log level used when the limit is reached.
Syntax: cluster_limit_conn_log_level info | notice | warn | error;
Default: cluster_limit_conn_log_level notice;
Context: http, server, location

3.3 cluster_limit_conn_status

  • Function: Status code returned once the limit is reached; the default is 503 (see the sketch below).
Syntax: cluster_limit_conn_status code;
Default: cluster_limit_conn_status 503;
Context: http, server, location
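
A sketch combining this directive with cluster_limit_conn_log_level: when the cluster-wide cap is hit, log at error level and return 429 instead of the default 503 (port, zone name, and values are illustrative; back is the upstream from section 4):

server {
    listen 8286;
    cluster_limit_conn $binary_remote_addr zone=cluster_srv:1m conn=20;
    cluster_limit_conn_log_level error;
    cluster_limit_conn_status 429;
    location / {
        proxy_pass http://back;
    }
}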

3.4 cluster_limit_conn

  • Function: Limits the number of concurrent connections; the shared zone and the cap are defined in this single directive (see the sketch below).
Syntax: cluster_limit_conn key zone=name:size conn=number;
        e.g. cluster_limit_conn $binary_remote_addr zone=cluster2:10m conn=1;
Default:
Context: http, server, location
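
The directive takes three parts: the key that connections are grouped by, zone=name:size naming the shared zone used for cluster-wide counting, and conn=number, the maximum number of concurrent connections allowed per key value. A minimal sketch at location level (path, zone name, and size are illustrative; back is the upstream from section 4):

location /api {
    # at most 5 concurrent connections per client address across the whole cluster
    cluster_limit_conn $binary_remote_addr zone=cluster_api:10m conn=5;
    proxy_pass http://back;
}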

4. Configuration Example

...
load_module  modules/njt_http_cluster_limit_conn_module.so;
...
stream {
        server {
                listen 238.255.253.254:5555 udp;
                gossip zone=test:1m heartbeat_timeout=100ms nodeclean_timeout=1s ;
        }
}
http {
   #http level
   cluster_limit_conn $binary_remote_addr zone=cluster_http:1m conn=10;
   upstream back{
                server 127.0.0.1:8008;         
                server 127.0.0.1:8009; 
   }
   
   server {
        #server level
        cluster_limit_conn $binary_remote_addr zone=cluster_server_20:1m conn=20;
        listen 8186; 
        location / {
                proxy_pass http://back;
        }
   }

   server {
        listen 9186;     
        location / { 
                #location level
                cluster_limit_conn $binary_remote_addr zone=cluster_location_40:1m conn=40;
                proxy_pass http://back;
        }
   }
}

5. Test Example

1. Run a load test that saturates node 136 at roughly 4000 r/s, enough to fully load a single node.

[root@localhost clb]# ./wrk2 -t 1 -c 100  -d 100s -R 4000 http://192.168.40.136:80/
Running 2m test @ http://192.168.40.136:80/
  1 threads and 100 connections
  Thread calibration: mean lat.: 6.668ms, rate sampling interval: 23ms


  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     7.01ms    5.01ms 133.25ms   88.21%
    Req/Sec     4.09k     0.92k   11.23k    81.30%
  396002 requests in 1.67m, 171.08MB read
  Non-2xx or 3xx responses: 332097
Requests/sec:   3959.89
Transfer/sec:      1.71MB

As the log below shows, requests exceeding the node's concurrent connection limit are rejected with 503.

2023/06/15 14:15:29 [notice] 10812#0: *191721 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.136:80"
2023/06/15 14:15:29 [notice] 10812#0: *191720 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.136:80"
2023/06/15 14:15:29 [notice] 10812#0: *191723 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.136:80"
2023/06/15 14:15:29 [notice] 10812#0: *191616 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.136:80"
2023/06/15 14:15:29 [notice] 10812#0: *191722 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.136:80"
2023/06/15 14:15:29 [notice] 10812#0: *191718 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.136:80"
2023/06/15 14:15:29 [notice] 10810#0: *191712 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.136:80"
2023/06/15 14:15:29 [notice] 10810#0: *191715 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.136:80"
2023/06/15 14:15:29 [notice] 10810#0: *191717 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.136:80"
2023/06/15 14:15:29 [notice] 10810#0: *191716 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.136:80"
2023/06/15 14:15:29 [notice] 10810#0: *191710 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.136:80"
2023/06/15 14:15:29 [notice] 10810#0: *191708 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.136:80"
2023/06/15 14:15:29 [notice] 10810#0: *191702 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.136:80"
2023/06/15 14:15:29 [notice] 10810#0: *191709 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.136:80"
2023/06/15 14:15:29 [notice] 10810#0: *191707 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.136:80"

2. While node 136 is saturated, run a non-saturating test against node 157 (-R 5 means 5 r/s).

  • Node 157:
[root@localhost clb]# ./wrk2 -t 1 -c 10  -d 30s -R 5 http://192.168.40.157:8003/
Running 30s test @ http://192.168.40.157:8003/
  1 threads and 10 connections
  Thread calibration: mean lat.: 1.419ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.40ms  651.25us   4.10ms   75.56%
    Req/Sec     4.82     31.69   333.00     97.55%
  151 requests in 30.00s, 89.59KB read
  Non-2xx or 3xx responses: 80
Requests/sec:      5.03
Transfer/sec:      2.99KB
[root@localhost clb]# 

At this point, node 157's log also shows requests being limited, confirming that the connection count is limited cluster-wide.

2023/06/15 14:14:36 [info] 1700#0: *5785 gossip_app procs 1471666138 msg while gossip receive, udp client: 192.168.40.136, server: 238.255.253.254:5555
2023/06/15 14:14:36 [notice] 1700#0: *6098 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.157:8003"
2023/06/15 14:14:36 [notice] 1700#0: *6101 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.157:8003"
2023/06/15 14:14:36 [notice] 1699#0: *6104 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.157:8003"
2023/06/15 14:14:36 [notice] 1703#0: *6107 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.157:8003"
2023/06/15 14:14:36 [notice] 1703#0: *6110 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.157:8003"
2023/06/15 14:14:36 [notice] 1699#0: *6113 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.157:8003"
2023/06/15 14:14:36 [notice] 1703#0: *6116 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.157:8003"
2023/06/15 14:14:36 [notice] 1703#0: *6119 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.157:8003"
2023/06/15 14:14:36 [notice] 1703#0: *6122 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.157:8003"
2023/06/15 14:14:36 [notice] 1699#0: *6125 limiting connections by zone "cluster1", client: 192.168.40.144, server: localhost, request: "GET / HTTP/1.1", host: "192.168.40.157:8003"
2023/06/15 14:14:37 [info] 1700#0: *5785 node:node1 pid:10810 msg_type:1471666138 while gossip receive, udp client: 192.168.40.136, server: 238.255.253.254:5555
2023/06/15 14:14:37 [info] 1700#0: *5785 gossip_app procs 1471666138 msg while gossip receive, udp client: 192.168.40.136, server: 238.255.253.254:5555
2023/06/15 14:14:37 [info] 1700#0: *5785 node:node1 pid:10810 msg_type:1471666138 while gossip receive, udp client: 192.168.40.136, server: 238.255.253.254:5555