* break down dynconfig.ConfigBackends into smaller pieces
* merge weight and scaling dynamic checks and updates
* fix equality of backends
* set ip/port/weight when dynamically removing endpoints
* dynamically remove endpoint even if dynamic scaling is turned off
* add v(2) logs
- Under high-load situations (hundreds of services, certs, and ingress rules), we have seen infrequent but consistent cases of haproxy not reflecting the on-disk config file. One theory is a race condition in which new haproxy processes point at an ever-changing config file. This commit gives each haproxy process its own config file that never changes after being written. Not only should this remove the risk of loading a partially written file, it also helps debug the issue because we retain a snapshot of the config used by each haproxy process.
- Config files are left on disk up to a max threshold, configurable via environment variable, to allow debugging issues.
- Not supported with reloadStrategy=multibinder, since multibinder-haproxy-wrapper currently only supports receiving a USR2 signal to reload a fixed config file.
When a pod starts terminating or its readiness check fails, leave the pod as a server
in the load balancer, but set its weight to 0 so that HAProxy stops sending it ordinary
traffic. Traffic using persistence, however, is still directed to the terminating pod,
allowing persistent requests to keep flowing until the pod shuts down. Updates that only
change the draining state are made through the stats socket rather than forcing a full
reload of HAProxy.