Fixing the nginx upstream [warn] "load balancing method redefined" warning

A colleague reported that adding consistent hashing by URL to an upstream triggered the warning "load balancing method redefined".

The configuration looked roughly like this:
upstream 2165 {
    keepalive 4096;
    check interval=10000 rise=2 fall=3 timeout=3000 type=http default_down=false;
    check_http_send 'GET /check_alive HTTP/1.0\r\nHost: check.sohuitc.cn\r\n\r\n';
    check_http_expect_alive http_2xx;
    server 1.1.1.1 max_fails=0;
    server 1.1.1.2 max_fails=0;
    hash $request_uri consistent;
}
A quick search turned up this answer:
https://ma.ttias.be/nginx-nginx-warn-load-balancing-method-redefined/
It says: "You can mix keepalive and least_conn, but you should define least_conn before keepalive." In other words, put the load-balancing method before the keepalive directive; since least_conn and hash are both load-balancing methods, the same applies here.

So why is that?
The official nginx documentation, https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive, states:
"When using load balancing methods other than the default round-robin method, it is necessary to activate them before the keepalive directive."
That is, keepalive must come after any non-round-robin load-balancing method.

But why does nginx require this ordering?
Looking at the nginx source, the hash module performs this check:
http/modules/ngx_http_upstream_hash_module.c

    if (uscf->peer.init_upstream) {
        ngx_conf_log_error(NGX_LOG_WARN, cf, 0,
                           "load balancing method redefined");
    }

The keepalive directive, on the other hand, sets uscf->peer.init_upstream itself:
http/modules/ngx_http_upstream_keepalive_module.c
    kcf->original_init_upstream = uscf->peer.init_upstream
                                      ? uscf->peer.init_upstream
                                      : ngx_http_upstream_init_round_robin;

    uscf->peer.init_upstream = ngx_http_upstream_init_keepalive;
So if the keepalive directive comes first, then by the time the load-balancing method directive is parsed, init_upstream has already been set; nginx assumes another load-balancing method was already configured and logs the "load balancing method redefined" warning.

Incidentally, this whole thing was triggered by another small configuration issue, which is worth recording here as well.
Our configuration produced the error "enable hash balancing method support parameter backup": the upstream contains backup servers and also needed the hash method, which is what triggered the problem.
See https://forum.nginx.org/read.php?29,281365,281369#msg-281369:

Generally, hash methods don't support the "backup" parameter,
but for those who need backup when falling back to round robin,
there's a work around: put the "hash" directive after the
"server" directives in the "upstream" block.

So the hash directive had to be moved down after the server directives, which is exactly where it ran into the ordering conflict with the keepalive directive; an ordering that satisfies both constraints is sketched below.
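A minimal sketch of an ordering that satisfies both constraints (hash after the server directives for the backup workaround, keepalive after the load-balancing method); the backup server address here is made up for illustration:

upstream 2165 {
    server 1.1.1.1 max_fails=0;
    server 1.1.1.2 max_fails=0;
    server 1.1.1.3 backup;

    # non-round-robin method: after the server directives, but before keepalive
    hash $request_uri consistent;

    keepalive 4096;
}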
Note that in our tests on the 1.12 series, combining hash with backup servers left the backup servers ineffective, so watch out for that; the 1.25 series behaves correctly, presumably this has since been fixed.

Redmi AX6000: installing OpenWrt & KMS

Follow this guide: https://openwrt.org/toh/xiaomi/redmi_ax6000

This build does not include LuCI, so you have to install it yourself.

Install the LuCI web interface

opkg update
opkg install luci
opkg install luci-ssl
/etc/init.d/uhttpd restart

Install openconnect

opkg install luci-proto-openconnect

Install KMS

wget "https://dl.openwrt.ai/packages-23.05/aarch64_cortex-a53/kiddin9/vlmcsd_svn1113-21_aarch64_cortex-a53.ipk"

wget "https://dl.openwrt.ai/packages-23.05/aarch64_cortex-a53/kiddin9/luci-app-vlmcsd_git-24.217.56735-8015371_all.ipk"

opkg install luci-compat

opkg install vlmcsd_svn1113-21_aarch64_cortex-a53.ipk

opkg install luci-app-vlmcsd_git-24.217.56735-8015371_all.ipk
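
After installing, a quick sanity check is to confirm that something is listening on the default KMS port (vlmcsd defaults to TCP 1688; whether netstat accepts these flags depends on your BusyBox build):

netstat -lntp | grep 1688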

rpmbuild Python module dependencies on Rocky Linux 9 and Fedora 36

My rpmbuild SPEC happens to need ninja from Google's depot_tools.

Normally a plain pip install ninja would have been the end of it, but inside the rpm build environment it failed:

ModuleNotFoundError: No module named 'ninja'

This was odd. Digging around, I compared python -m site inside the rpmbuild environment and from a normal shell, and they differ.

sys.path inside rpmbuild:

sys.path = [
    '/root/rpmbuild/BUILD',
    '/usr/lib64/python39.zip',
    '/usr/lib64/python3.9',
    '/usr/lib64/python3.9/lib-dynload',
    '/usr/lib64/python3.9/site-packages',
    '/usr/lib/python3.9/site-packages',
]

sys.path from a normal shell:

sys.path = [
    '/root/nginx',
    '/usr/lib64/python39.zip',
    '/usr/lib64/python3.9',
    '/usr/lib64/python3.9/lib-dynload',
    '/usr/local/lib64/python3.9/site-packages',
    '/usr/lib64/python3.9/site-packages',
    '/usr/lib/python3.9/site-packages',
]

Looking at Python's site configuration shows that when an rpmbuild environment is detected, the /usr/local/... site-packages paths are dropped, so modules that pip installed there are invisible during the build.

The fix is simple enough: add a pip install ninja step to the rpmbuild spec itself, as sketched below.
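A minimal sketch, assuming the %build section is the right place for your spec (the surrounding build steps are omitted):

%build
# the author's fix: install ninja during the build so that the rpmbuild
# Python, which drops /usr/local from sys.path, can still import it
pip install ninja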

This is a change that appeared with Python 3.9 on Rocky Linux 9 and Python 3.10 on Fedora 36.

References:

https://fedoraproject.org/wiki/Changes/Making_sudo_pip_safe

https://hackmd.io/@python-maint/BkqScKJW5

https://bugzilla.redhat.com/show_bug.cgi?id=1937494

https://github.com/rhinstaller/anaconda/pull/3646

nginx 1.25.0 supports HTTP/3 and QUIC

The nginx 1.25.0 mainline release now supports HTTP/3 and QUIC.

Build tools needed: gcc, g++, cmake, go.

yum install -y gcc gcc-c++ cmake golang git

git clone https://boringssl.googlesource.com/boringssl

# this is the older method from the nginx docs; it still works

cd boringssl && mkdir build && cd build && cmake .. && make && cd ../../

# the newer method, which was a bit buggy for me

#cd boringssl && mkdir build && cmake -B build && make -C build && cd ../

Building the latest BoringSSL needs Go newer than 1.18.9 (my Go 1.16 failed; after upgrading it built fine), and CMake 3.10 or higher is required.

Now build nginx itself. Note that this differs from the preview builds on quic.nginx.org: there is no longer a separate quic_stream module.

cd nginx-1.25.0
./configure \
    --with-debug \
    --with-http_v3_module \
    --with-cc-opt="-I../boringssl/include" \
    --with-ld-opt="-L../boringssl/build/ssl \
                   -L../boringssl/build/crypto"
make && make install

Now the configuration. This also differs from the quic.nginx.org preview: there is no http3 directive any more; instead you write listen 443 quic.

http {
    log_format quic '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" "$http3"';

    access_log logs/access.log quic;

    server {
        # for better compatibility it's recommended
        # to use the same port for quic and https
        listen 443 quic reuseport;
        listen 443 ssl http2 reuseport backlog=8192;

        ssl_certificate     certs/example.com.crt;
        ssl_certificate_key certs/example.com.key;

        location / {
            # required for browsers to direct them to quic port
            add_header Alt-Svc 'h3=":443"; ma=86400';
        }
    }
}

A few notes on the configuration:

listen 443 quic reuseport;  # enables the HTTP/3 (QUIC) listener; reuseport only needs to be set on the default virtual server for the port

add_header Alt-Svc 'h3=":443"; ma=86400';  # advertises HTTP/3 to clients; without this header they will never switch to H3
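
A quick way to check, assuming you have a curl build with HTTP/3 support (the hostname is a placeholder):

curl -IL --http3 https://example.com/

Requests served over HTTP/3 should then show "h3" in the $http3 variable of the quic log format above.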

References:

https://nginx.org/en/docs/quic.html

https://boringssl.googlesource.com/boringssl/+/HEAD/BUILDING.md

Tuning Python code with line-profiler

This post is a short introduction to line-profiler:

https://pypi.org/project/line-profiler/

pip install line_profiler

For Python 2, use 3.1.0, the last version that still supports it:

pip2 install line_profiler==3.1.0

Then add the following decorator above each function you want to profile:

@profile

Then run:

kernprof -l -v your_python_scripts.py

This prints per-line execution times and percentages, which makes it straightforward to see where the performance problems in the code actually are; a small example follows.
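
A minimal, self-contained example (the file name and function are made up for illustration):

# slow_demo.py
@profile                      # the profile decorator is injected by kernprof -l at run time
def build_squares(n):
    result = []
    for i in range(n):        # the per-line report will show most of the time spent here
        result.append(i * i)
    return result

if __name__ == "__main__":
    build_squares(1000000)

Running kernprof -l -v slow_demo.py then prints Hits, Time, Per Hit and % Time for every line of build_squares.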

python2 pip installation and upgrade problems

Some ancient code is still based on Python 2, and installing modules for it now produces errors:

# pip install requests

You are using pip version 7.1.0, however version 23.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting requests
/usr/lib/python2.6/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning
/usr/lib/python2.6/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
InsecurePlatformWarning

The message suggests SSL negotiation is failing; presumably pip's SSL support is simply too old, so the next step was to upgrade pip itself, which also failed:

# pip install --upgrade pip

You are using pip version 7.1.0, however version 23.1.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting pip
Using cached https://files.pythonhosted.org/packages/fa/ee/74ff76da0ab649eec7581233daeb43d8aa35383d8f75317b2ab3b80c922f/pip-23.1.2.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 20, in <module>
File "/tmp/pip-build-p1lQDp/pip/setup.py", line 7
def read(rel_path: str) -> str:
^
SyntaxError: invalid syntax

----------------------------------------

Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-p1lQDp/pip

In fact pip dropped online installation and self-upgrade support for Python 2 years ago; it has to be upgraded manually:

https://bootstrap.pypa.io/pip/2.7/get-pip.py

https://bootstrap.pypa.io/pip/2.6/get-pip.py

These are the manual pip bootstrap scripts for Python 2.6 and 2.7 respectively; at the time of writing they install pip 20.3.4. Download the matching script and run it with the corresponding Python to upgrade pip automatically; installing other modules then works fine, as shown below.
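
For the Python 2.7 case, for example (the installed version is pinned by the bootstrap script itself):

wget https://bootstrap.pypa.io/pip/2.7/get-pip.py
python2.7 get-pip.py
python2.7 -m pip --version    # should now report pip 20.3.4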

SecureCRT 7.2.6 is incompatible with macOS 13 Ventura

I was running SecureCRT version 7.2.6, and after upgrading to macOS Ventura it crashed immediately on launch.

The crash report shows: Library not loaded: /System/Library/Frameworks/Python.framework/Versions/2.5/Python

So a Python dependency is missing; starting with this macOS release the bundled Python 2 has been removed.

The fix is fairly simple: install a Python 2.7 build:

https://www.python.org/ftp/python/2.7/python-2.7-macosx10.3.dmg

After installation, open a new terminal:

% which python

/Library/Frameworks/Python.framework/Versions/2.7/bin/python

% cd /Library/Frameworks/Python.framework/Versions/

% sudo ln -s 2.7 2.5

This creates a symlink to fool the system; Python 2.5 and 2.7 are close enough that it makes no practical difference here.

Reopen SecureCRT and everything works again.

nginx range requests to the origin, and slice range requests to the origin

range request in nginx reverse proxy

Whether nginx honours Range requests in proxy_cache mode depends on whether the origin returned an Accept-Ranges header.

If the origin did not explicitly advertise range support, nginx will not answer any Range request even when it has the whole file cached; it returns the entire file instead.

This behaviour can be changed with proxy_force_ranges on;:

Syntax:  proxy_force_ranges on | off;
Default: proxy_force_ranges off;
Context: http, server, location

This directive appeared in version 1.7.7.

Enables byte-range support for both cached and uncached responses from the proxied server regardless of the “Accept-Ranges” field in these responses.
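
A minimal sketch of enabling it (the upstream address and cache zone name are hypothetical):

location / {
    proxy_pass         http://127.0.0.1:8080;
    proxy_cache        my_cache;
    # answer Range requests even when the origin never sent Accept-Ranges
    proxy_force_ranges on;
}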

nginx also ships a dedicated module, ngx_http_slice_module, which supports fetching from the origin in slices.

For example, the following configuration fetches from the origin in 1 MB chunks and stores the cache as separate 1 MB slices:

slice              1m;
proxy_cache_key    $host$uri$slice_range;
proxy_set_header   Range $slice_range;
proxy_http_version 1.1;
proxy_cache_valid  200 206 300m;
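
For context, these directives would normally sit inside a caching location along the following lines (the cache zone name and upstream address are hypothetical; proxy_cache_path and the server block both live in the http context):

proxy_cache_path /data/cache keys_zone=slice_cache:10m;

server {
    location / {
        slice              1m;
        proxy_cache        slice_cache;
        proxy_cache_key    $host$uri$slice_range;
        proxy_set_header   Range $slice_range;
        proxy_http_version 1.1;
        proxy_cache_valid  200 206 300m;
        proxy_pass         http://127.0.0.1:8080;
    }
}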

In testing, even with proxy_force_ranges on and slice configured, nginx still does not use slice range requests against an origin that does not explicitly return Accept-Ranges, so it adapts well to such origins.

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_force_ranges

https://forum.nginx.org/read.php?2,281946,281946#msg-281946

Recommended way to write proxy_pass

    location /v {
        proxy_pass https://gs.x.sohu.com;
    }

Take this simple example. For convenience the domain name is usually written directly after proxy_pass, but if that name ever stops resolving (the machine room behind it was decommissioned, a third-party origin domain was removed, and so on), nginx will fail to restart.

The recommended form is:

    location /v {
        set $new_host "gs.x.sohu.com";
        proxy_pass https://$new_host;
    }
The difference between the two:
1. With a literal domain after proxy_pass, the name is resolved once and only once when nginx starts; if resolution fails, nginx will not start.
2. With a variable after proxy_pass, nginx uses the resolver and follows its configuration (including the validity time); the name is resolved only when a request arrives, and a resolution failure does not take down the whole service (note this requires a resolver directive; see the sketch below).
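
A minimal sketch of the variable form together with the resolver it relies on (the resolver address and timings here are examples, not a recommendation):

    location /v {
        # required when proxy_pass uses a variable; valid= caps how long answers are cached
        resolver         114.114.114.114 valid=300s;
        resolver_timeout 5s;

        set $new_host "gs.x.sohu.com";
        proxy_pass https://$new_host;
    }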