This PR moves some environment variables to `apisix/cli/environ.lua`,
and some auxiliary code to `apisix/cli/util.lua`, which reduces the size
of `bin/apisix`.
Support TLS connections when communicating with the etcd cluster. We added a configuration item to customize the certificate verification. Whether a TLS connection is set up depends on the endpoints' scheme; for instance, when the endpoints are:
```
etcd:
  host:
    - "https://127.0.0.1:2379"
    - "https://127.0.0.1:3379"
```
APISIX will originate a TLS connection automatically, and the Server Name Indication extension will be set to the endpoint host (`127.0.0.1` in the above case). Note that by default APISIX verifies the certificate; disable the verification explicitly in the configuration if you want to bypass it:
```
etcd:
  tls:
    verify: false
```
* for test.
* CLI: load the Lua module after updating the `package.path`.
* CI: use a patch to make the CI run normally.
* bugfix: add sudo.
* chore: print the location of `apisix`.
* revert an unrelated change.
```
curl http://****/apisix/admin/plugin_metadata/http-logger -d '
{
    "log_format": {
        "host": "$host",
        "@timestamp": "$time_iso8601",
        "client_ip": "$remote_addr"
    }
}'
```
After enabling the http-logger plugin, we will get message bodies like:
{"host":"localhost","@timestamp":"2020-09-23T18:29:07-04:00","client_ip":"127.0.0.1","route_id":"1"}
{"host":"localhost","@timestamp":"2020-09-23T18:29:07-04:00","client_ip":"127.0.0.1","route_id":"1"}
ewma is a different load-balancing implementation that generates a weight for every backend IP based on the latest server response time; basically, it tries to dispatch more requests to the backends that reply faster, assuming that they are less loaded.
fix #1996
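A minimal sketch of selecting the ewma balancer on an upstream via the Admin API; the upstream id and node addresses are illustrative placeholders:
```
curl http://****/apisix/admin/upstreams/1 -d '
{
    "type": "ewma",
    "nodes": {
        "127.0.0.1:1980": 1,
        "127.0.0.1:1981": 1
    }
}'
```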
* perf: no longer generate unnecessary nginx conf for better performance.
* benchmark: sync nginx.conf for fake-apisix.
> Is this PR backward compatible?
Two plugins (proxy-cache and proxy-mirror) are now disabled by default; if users want to enable them, they need to modify conf/config.yaml manually.
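A sketch of what that manual change could look like in conf/config.yaml; the comment placeholder stands in for the rest of the enabled plugin list, which must be kept:
```
plugins:                  # keep the rest of the enabled plugin list here
  # - ... other plugins ...
  - proxy-cache           # add back to enable
  - proxy-mirror          # add back to enable
```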
* optimize: use lru to avoid resolving IP addresses repeatedly.
Cache the global rules in `ctx`.
* optimize: use a longer time interval for etcd and for flushing the access log.
* optimize: return the upstream node directly if the count is 1.
* optimize: avoid caching useless variables.
* yousali: <log> Optimize the buffer size and flush time.
1. buffer=4096 is better because writes of more than PIPE_BUF bytes may be non-atomic.
2. flush=1: since the log buffer is lowered, the flush time should also be lowered.
* yousali: <fix>
Hi, I also ran a test:
```
buffer=4096   Requests/sec: 16079.75
buffer=8192   Requests/sec: 16389.52
buffer=16384  Requests/sec: 16395.30
buffer=32768  Requests/sec: 16459.71
```
I think a log buffer size of 8192 or 16384 would be appropriate.
On the other hand, a flush time of 3 seconds is still relatively long, and whether it is 1 or 3 seconds doesn't particularly affect QPS.
So I also agree with `buffer=16384 flush=1;`.
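For reference, a sketch of how the agreed values map onto the Nginx access_log directive; the log path and format name below are placeholders, not the exact generated nginx.conf:
```
access_log logs/access.log main buffer=16384 flush=1;
```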