If you need to support multiple virtual hosts for a container, you can separate each entry with commas. For example, foo.bar.com,baz.bar.com,bar.com and each host will be set up the same.

You can also use wildcards at the beginning and the end of the host name, like *.bar.com or foo.bar.*. Or even a regular expression, which can be very useful in conjunction with a wildcard DNS service like nip.io or sslip.io: using ~^foo\.bar\..*\.nip\.io will match foo.bar.127.0.0.1.nip.io, foo.bar.10.0.2.2.nip.io and all other given IPs. More information about this topic can be found in the nginx documentation about server_names.
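For example, the following command (a sketch reusing the hostnames above with the plain nginx image) makes a single container answer for all three hostnames:

docker run -d -e VIRTUAL_HOST=foo.bar.com,baz.bar.com,bar.com nginx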
To set the default host for nginx, use the env var DEFAULT_HOST=foo.bar.com, for example:
docker run -d -p 80:80 -e DEFAULT_HOST=foo.bar.com -v /var/run/docker.sock:/tmp/docker.sock:ro nginxproxy/nginx-proxy
nginx-proxy will then redirect all requests to a container where VIRTUAL_HOST is set to DEFAULT_HOST, if they don't match any (other) VIRTUAL_HOST. Using the example above, requests without a matching VIRTUAL_HOST will be redirected to a plain nginx instance after running the following command:
docker run -d -e VIRTUAL_HOST=foo.bar.com nginx
When your container exposes only one port, nginx-proxy will default to this port, otherwise it will default to port 80. If you need to specify a different port, you can set the VIRTUAL_PORT env var to select it. This variable cannot be set to more than one port.
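For example, to proxy a hypothetical container that listens on port 8080 (image name illustrative):

docker run -d -e VIRTUAL_HOST=foo.bar.com -e VIRTUAL_PORT=8080 my-app-image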
For each host defined in VIRTUAL_HOST, the associated virtual port is retrieved by order of precedence:

1. From the VIRTUAL_PORT environment variable
2. From the container's exposed port, if there is only one
3. From the default port 80, when none of the above apply

If your container exposes more than one service on different ports and those services need to be proxied, you'll need to use the VIRTUAL_HOST_MULTIPORTS environment variable. This variable takes virtual host, path, port and dest definitions in YAML (or JSON) form, and completely overrides the VIRTUAL_HOST, VIRTUAL_PORT, VIRTUAL_PROTO, VIRTUAL_PATH and VIRTUAL_DEST environment variables on this container.
The YAML syntax should be easier to write in Docker Compose files, while the JSON syntax can be used for CLI invocation.
The expected format is the following:
hostname:
path:
port: int
proto: string
dest: string
For each hostname entry, path, port, proto and dest are optional and are assigned default values when missing:

- path = "/"
- port = default port
- proto = "http"
- dest = ""

The following example uses a hypothetical container running services over HTTP on ports 80, 8000 and 9000:
services:
multiport-container:
image: somerepo/somecontainer
container_name: multiport-container
environment:
VIRTUAL_HOST_MULTIPORTS: |-
www.example.org:
service1.example.org:
"/":
port: 8000
service2.example.org:
"/":
port: 9000
# There is no path dict specified for www.example.org, so it gets the default values:
# www.example.org:
# "/":
# port: 80 (default port)
# dest: ""
# JSON equivalent:
# VIRTUAL_HOST_MULTIPORTS: |-
# {
# "www.example.org": {},
# "service1.example.org": { "/": { "port": 8000, "dest": "" } },
# "service2.example.org": { "/": { "port": 9000, "dest": "" } }
# }
This would result in the following proxy config:

- www.example.org -> multiport-container:80 over HTTP
- service1.example.org -> multiport-container:8000 over HTTP
- service2.example.org -> multiport-container:9000 over HTTP
The following example uses a hypothetical container running services over HTTP on ports 80 and 8000 and over HTTPS on port 9443:
services:
multiport-container:
image: somerepo/somecontainer
container_name: multiport-container
environment:
VIRTUAL_HOST_MULTIPORTS: |-
www.example.org:
"/":
"/service1":
port: 8000
dest: "/"
"/service2":
port: 9443
proto: "https"
dest: "/"
# port and dest are not specified on the / path, so this path is routed to the
# default port with the default dest value (empty string) and default proto (http)
# JSON equivalent:
# VIRTUAL_HOST_MULTIPORTS: |-
# {
# "www.example.org": {
# "/": {},
# "/service1": { "port": 8000, "dest": "/" },
# "/service2": { "port": 9443, "proto": "https", "dest": "/" }
# }
# }
This would result in the following proxy config:

- www.example.org -> multiport-container:80 over HTTP
- www.example.org/service1 -> multiport-container:8000 over HTTP
- www.example.org/service2 -> multiport-container:9443 over HTTPS
You can have multiple containers proxied by the same VIRTUAL_HOST by adding a VIRTUAL_PATH environment variable containing the absolute path to where the container should be mounted. For example, with VIRTUAL_HOST=foo.example.com and VIRTUAL_PATH=/api/v2/service, requests to http://foo.example.com/api/v2/service will be routed to the container. If you wish to have a container serve the root while other containers serve other paths, give the root container a VIRTUAL_PATH of /. Unmatched paths will be served by the container at / or will return the default nginx error page if no container has been assigned /.

It is also possible to specify multiple paths with regex locations like VIRTUAL_PATH=~^/(app1|alternative1)/. For further details see the nginx documentation on location blocks. This is not compatible with VIRTUAL_DEST.
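A minimal compose sketch of this setup (image names are illustrative), with one container serving the root and another mounted under a sub-path:

services:
  root-app:
    image: my-root-app
    environment:
      VIRTUAL_HOST: foo.example.com
      VIRTUAL_PATH: /
  api-app:
    image: my-api-app
    environment:
      VIRTUAL_HOST: foo.example.com
      VIRTUAL_PATH: /api/v2/service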
The full request URI will be forwarded to the serving container in the X-Original-URI header.
[!NOTE] Your application needs to be able to generate links starting with VIRTUAL_PATH. This can be achieved by it being natively on this path or having an option to prepend this path. The application does not need to expect this path in the request.
The VIRTUAL_DEST environment variable can be used to rewrite the VIRTUAL_PATH part of the requested URL before the request is passed to the proxied application. The default value is empty (off). Make sure that your settings won't result in the slash missing or being doubled; both of these cases can cause trouble. If the application runs natively on this sub-path or has a setting to do so, VIRTUAL_DEST should not be set or should be empty. If the requests are expected to not contain a sub-path while the generated links contain the sub-path, VIRTUAL_DEST=/ should be used.
$ docker run -d -e VIRTUAL_HOST=example.tld -e VIRTUAL_PATH=/app1/ -e VIRTUAL_DEST=/ --name app1 app
In this example, the incoming request http://example.tld/app1/foo will be proxied as http://app1/foo instead of http://app1/app1/foo.
The same options as from the per-VIRTUAL_HOST location configuration are available on a VIRTUAL_PATH basis. The only difference is that the filename gets an additional block HASH=$(echo -n $VIRTUAL_PATH | sha1sum | awk '{ print $1 }'). This is the sha1-hash of the VIRTUAL_PATH (no trailing newline), used for filename sanitization purposes.

The filename used is ${VIRTUAL_HOST}_${PATH_HASH}_location, or when VIRTUAL_HOST is a regex, ${VIRTUAL_HOST_HASH}_${PATH_HASH}_location.

The filename for the previous example would be example.tld_8610f6c344b4096614eab6e09d58885349f42faf_location.
The DEFAULT_ROOT environment variable of the nginx-proxy container can be used to customize the error page returned when no matching path is found. Furthermore, it is possible to use anything which is compatible with the return statement of nginx.

Exception: if this is set to the string none, no default location / directive will be generated. This makes it possible for you to provide your own location / directive in your /etc/nginx/vhost.d/VIRTUAL_HOST or /etc/nginx/vhost.d/default files.

If unspecified, DEFAULT_ROOT defaults to 404.
Examples (YAML syntax):

- DEFAULT_ROOT: "none" prevents nginx-proxy from generating a default location / directive.
- DEFAULT_ROOT: "418" returns a 418 error page instead of the normal 404 one.
- DEFAULT_ROOT: "301 https://github.com/nginx-proxy/nginx-proxy/blob/main/README.md" redirects the client to this documentation.

Nginx variables such as $scheme, $host, and $request_uri can be used. However, care must be taken to make sure the $ signs are escaped properly. For example, if you want to use 301 $scheme://$host/myapp1$request_uri you should use:

- Shell: DEFAULT_ROOT='301 $scheme://$host/myapp1$request_uri'
- Docker Compose: DEFAULT_ROOT: 301 $$scheme://$$host/myapp1$$request_uri
If you want to use nginx-proxy with different external ports than the default ones of 80 for HTTP traffic and 443 for HTTPS traffic, you'll have to use the environment variable(s) HTTP_PORT and/or HTTPS_PORT in addition to the changes to the Docker port mapping. If you change the HTTPS port, the redirect for HTTPS traffic will also be configured to redirect to the custom port. Typical usage, here with the custom ports 1080 and 10443:
docker run -d -p 1080:1080 -p 10443:10443 -e HTTP_PORT=1080 -e HTTPS_PORT=10443 -v /var/run/docker.sock:/tmp/docker.sock:ro nginxproxy/nginx-proxy
With the addition of overlay networking in Docker 1.9, your nginx-proxy container may need to connect to backend containers on multiple networks. By default, if you don't pass the --net flag when your nginx-proxy container is created, it will only be attached to the default bridge network. This means that it will not be able to connect to containers on networks other than bridge.
If you want your nginx-proxy container to be attached to a different network, you must pass the --net=my-network option in your docker create or docker run command. At the time of this writing, only a single network can be specified at container creation time. To attach to other networks, you can use the docker network connect command after your container is created:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro \
--name my-nginx-proxy --net my-network nginxproxy/nginx-proxy
docker network connect my-other-network my-nginx-proxy
In this example, the my-nginx-proxy container will be connected to my-network and my-other-network and will be able to proxy to other containers attached to those networks.
nginx-proxy is compatible with containers using Docker's host networking, both with the proxy connected to one or more bridge networks (default or user created) or running in host network mode itself.

Proxied containers running in host network mode must use the VIRTUAL_PORT environment variable, as this is the only way for nginx-proxy to get the correct port (or a port at all) for those containers.
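For example, a sketch of a host-networked backend listening on port 8080 (image name illustrative):

docker run -d --net=host -e VIRTUAL_HOST=foo.bar.com -e VIRTUAL_PORT=8080 my-app-image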
If you allow traffic from the public internet to access your nginx-proxy container, you may want to restrict some containers to the internal network only, so they cannot be accessed from the public internet. On containers that should be restricted to the internal network, you should set the environment variable NETWORK_ACCESS=internal. By default, the internal network is defined as 127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16. To change the list of networks considered internal, mount a file on the nginx-proxy container at /etc/nginx/network_internal.conf with these contents, edited to suit your needs:
# These networks are considered "internal"
allow 127.0.0.0/8;
allow 10.0.0.0/8;
allow 192.168.0.0/16;
allow 172.16.0.0/12;
# Traffic from all other networks will be rejected
deny all;
When internal-only access is enabled, external clients will be denied with an HTTP 403 Forbidden response.
[!NOTE] If there is a load-balancer / reverse proxy in front of nginx-proxy that hides the client IP (example: AWS Application/Elastic Load Balancer), you will need to use the nginx realip module (already installed) to extract the client's IP from the HTTP request headers. Please see the nginx realip module configuration for more details. This configuration can be added to a new config file and mounted in /etc/nginx/conf.d/.
If you would like the reverse proxy to connect to your backend using HTTPS instead of HTTP, set VIRTUAL_PROTO=https on the backend container.
[!NOTE] If you use VIRTUAL_PROTO=https and your backend container exposes port 80 and 443, nginx-proxy will use HTTPS on port 80. This is almost certainly not what you want, so you should also include VIRTUAL_PORT=443.
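Putting the two together, a sketch of an HTTPS backend (image name illustrative):

docker run -d -e VIRTUAL_HOST=foo.bar.com -e VIRTUAL_PROTO=https -e VIRTUAL_PORT=443 my-app-image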
If you would like to connect to a uWSGI backend, set VIRTUAL_PROTO=uwsgi on the backend container. Your backend container should then listen on a port rather than a socket and expose that port.

If you would like to connect to a FastCGI backend, set VIRTUAL_PROTO=fastcgi on the backend container. Your backend container should then listen on a port rather than a socket and expose that port.

If you use FastCGI, you can set VIRTUAL_ROOT=xxx for your root directory.
If you have multiple containers with the same VIRTUAL_HOST and VIRTUAL_PATH settings, nginx will spread the load across all of them. To change the load balancing algorithm from nginx's default (round-robin), set the com.github.nginx-proxy.nginx-proxy.loadbalance label on one or more of your application containers to the desired load balancing directive. See the ngx_http_upstream_module documentation for available directives.
[!NOTE]

- Don't forget the terminating semicolon (;).
- If you are using Docker Compose, remember to escape any dollar sign ($ becomes $$).
Docker Compose example:
services:
nginx-proxy:
image: nginxproxy/nginx-proxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
environment:
HTTPS_METHOD: nohttps
myapp:
image: jwilder/whoami
expose:
- "8000"
environment:
VIRTUAL_HOST: myapp.example
VIRTUAL_PORT: "8000"
labels:
com.github.nginx-proxy.nginx-proxy.loadbalance: "hash $$remote_addr;"
deploy:
replicas: 4
By default, nginx-proxy will enable HTTP keep-alive between itself and backend server(s) and set the maximum number of idle connections to twice the number of servers listed in the corresponding upstream{} block, per nginx recommendation. To manually set the maximum number of idle connections or disable HTTP keep-alive entirely, use the com.github.nginx-proxy.nginx-proxy.keepalive label on the server's container (setting it to disabled will disable HTTP keep-alive). See the nginx keepalive documentation and the Docker label documentation for details.
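For example, a compose sketch setting the label on a backend container (the value 16 is illustrative; disabled would turn keep-alive off):

labels:
  com.github.nginx-proxy.nginx-proxy.keepalive: "16"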
In order to be able to secure your virtual host, you have to create a file named after its equivalent VIRTUAL_HOST variable (or, if using a regex VIRTUAL_HOST, after the sha1 hash of the regex) in the directory:

/etc/nginx/htpasswd/{$VIRTUAL_HOST}
docker run -d -p 80:80 -p 443:443 \
-v /path/to/htpasswd:/etc/nginx/htpasswd \
-v /path/to/certs:/etc/nginx/certs \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
nginxproxy/nginx-proxy
If you want to define basic authentication for a VIRTUAL_PATH, you have to create a file named /etc/nginx/htpasswd/${VIRTUAL_HOST}_${VIRTUAL_PATH_SHA1} (where $VIRTUAL_PATH_SHA1 is the SHA1 hash of the virtual path; you can use any online SHA1 generator to calculate it).

You'll need apache2-utils on the machine where you plan to create the htpasswd file. Follow these instructions.
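For example, to create an htpasswd file for the host foo.bar.com with user myuser (apache2-utils provides the htpasswd command; names are illustrative):

htpasswd -c /path/to/htpasswd/foo.bar.com myuser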
The default nginx access log format is
$host $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$upstream_addr"
If you want to use a custom access log format, you can set LOG_FORMAT=xxx on the proxy container. With docker compose, take care to escape the $ character with $$ to avoid variable interpolation. Example: $remote_addr becomes $$remote_addr.
If you want access logs in JSON format, you can set LOG_JSON=true. This will correctly set the escape character to json and the log format to:
{
"time_local": "$time_iso8601",
"client_ip": "$http_x_forwarded_for",
"remote_addr": "$remote_addr",
"request": "$request",
"status": "$status",
"body_bytes_sent": "$body_bytes_sent",
"request_time": "$request_time",
"upstream_response_time": "$upstream_response_time",
"upstream_addr": "$upstream_addr",
"http_referrer": "$http_referer",
"http_user_agent": "$http_user_agent",
"request_id": "$request_id"
}
If you want to manually set nginx log_format's escape, set the LOG_FORMAT_ESCAPE variable to a value supported by nginx.
To disable nginx access logs entirely, set the DISABLE_ACCESS_LOGS environment variable to any value.

To remove colors from the container log output, set the NO_COLOR environment variable to any value other than an empty string on the nginx-proxy container.
docker run --detach \
--publish 80:80 \
--env NO_COLOR=1 \
--volume /var/run/docker.sock:/tmp/docker.sock:ro \
nginxproxy/nginx-proxy
SSL is supported for single host, wildcard and SAN certificates, using naming conventions for certificates or by optionally specifying a cert name as an environment variable.
To enable SSL:
docker run -d -p 80:80 -p 443:443 -v /path/to/certs:/etc/nginx/certs -v /var/run/docker.sock:/tmp/docker.sock:ro nginxproxy/nginx-proxy
The contents of /path/to/certs should contain the certificates and private keys for any virtual hosts in use. The certificate and keys should be named after the virtual host with a .crt and .key extension. For example, a container with VIRTUAL_HOST=foo.bar.com should have a foo.bar.com.crt and foo.bar.com.key file in the certs directory.
If you are running the container in a virtualized environment (Hyper-V, VirtualBox, etc.), /path/to/certs must exist in that environment or be made accessible to that environment. By default, Docker is not able to mount directories on the host machine to containers running in a virtual machine.
acme-companion is a lightweight companion container for the nginx-proxy. It allows the automated creation/renewal of SSL certificates using the ACME protocol.
By default, nginx-proxy generates location blocks to handle the ACME HTTP Challenge. This behavior can be changed with the environment variable ACME_HTTP_CHALLENGE_LOCATION. It accepts these values:

- true: default behavior, handle the ACME HTTP Challenge in all cases.
- false: do not handle the ACME HTTP Challenge at all.
- legacy: legacy behavior for compatibility with older (<= 2.3) versions of acme-companion, only handle the ACME HTTP Challenge when there is a certificate for the domain and HTTPS_METHOD=redirect.

RFC7919 groups with key lengths of 2048, 3072, and 4096 bits are provided by nginx-proxy. The ENV DHPARAM_BITS can be set to 2048 or 3072 to change from the default 4096-bit key. The DH key file will be located in the container at /etc/nginx/dhparam/dhparam.pem. Mounting a different dhparam.pem file at that location will override the RFC7919 key.
To use custom dhparam.pem files per-virtual-host, the files should be named after the virtual host with a dhparam suffix and .pem extension. For example, a container with VIRTUAL_HOST=foo.bar.com should have a foo.bar.com.dhparam.pem file in the /etc/nginx/certs directory.
[!WARNING] The default generated dhparam.pem key is 4096 bits for A+ security. Some older clients (like Java 6 and 7) do not support DH keys with over 1024 bits. In order to support these clients, you must provide your own dhparam.pem.
In the separate container setup, no pre-generated key will be available and neither the nginxproxy/docker-gen image, nor the official nginx image will provide one. If you still want A+ security in a separate container setup, you should mount an RFC7919 DH key file to the nginx container at /etc/nginx/dhparam/dhparam.pem.
Set the DHPARAM_SKIP environment variable to true to disable using the default Diffie-Hellman parameters. The default value is false.
docker run -e DHPARAM_SKIP=true ....
Wildcard certificates and keys should be named after the parent domain name with a .crt and .key extension. For example:

- VIRTUAL_HOST=foo.bar.com would use cert name bar.com.crt and bar.com.key if foo.bar.com.crt and foo.bar.com.key are not available
- VIRTUAL_HOST=sub.foo.bar.com would use cert name foo.bar.com.crt and foo.bar.com.key if sub.foo.bar.com.crt and sub.foo.bar.com.key are not available, but won't use bar.com.crt and bar.com.key.

If your certificate(s) supports multiple domain names, you can start a container with CERT_NAME=<name> to identify the certificate to be used. For example, a certificate for *.foo.com and *.bar.com could be named shared.crt and shared.key. A container running with VIRTUAL_HOST=foo.bar.com and CERT_NAME=shared will then use this shared cert.
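For example, continuing with the shared certificate above (image name illustrative):

docker run -d -e VIRTUAL_HOST=foo.bar.com -e CERT_NAME=shared my-app-image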
To enable OCSP Stapling for a domain, nginx-proxy looks for a PEM certificate containing the trusted CA certificate chain at /etc/nginx/certs/<domain>.chain.pem, where <domain> is the domain name in the VIRTUAL_HOST directive. The format of this file is a concatenation of the public PEM CA certificates starting with the intermediate CA most near the SSL certificate, down to the root CA. This is often referred to as the "SSL Certificate Chain". If found, this filename is passed to the NGINX ssl_trusted_certificate directive and OCSP Stapling is enabled.
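Such a chain file can be assembled by concatenating the CA certificates in that order, for example (certificate file names illustrative):

cat intermediate-ca.pem root-ca.pem > /path/to/certs/foo.bar.com.chain.pem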
The default SSL cipher configuration is based on the Mozilla intermediate profile version 5.0 which should provide compatibility with clients back to Firefox 27, Android 4.4.2, Chrome 31, Edge, IE 11 on Windows 7, Java 8u31, OpenSSL 1.0.1, Opera 20, and Safari 9. Note that the DES-based TLS ciphers were removed for security. The configuration also enables HSTS, PFS, OCSP stapling and SSL session caches. Currently TLS 1.2 and 1.3 are supported.
If you don't require backward compatibility, you can use the Mozilla modern profile instead by including the environment variable SSL_POLICY=Mozilla-Modern on the nginx-proxy container or on your container. This profile is compatible with clients back to Firefox 63, Android 10.0, Chrome 70, Edge 75, Java 11, OpenSSL 1.1.1, Opera 57, and Safari 12.1.
[!NOTE] This profile is not compatible with any version of Internet Explorer.
Complete list of policies available through the SSL_POLICY environment variable, including the AWS ELB Security Policies and AWS Classic ELB security policies:
- Mozilla-Modern
- Mozilla-Intermediate
- Mozilla-Old (this policy should use a 1024-bit DH key for compatibility but this container provides a 4096-bit key. The Diffie-Hellman Groups section details different methods of bypassing this, either globally or per virtual-host.)
- AWS-TLS13-1-3-2021-06
- AWS-TLS13-1-2-2021-06
- AWS-TLS13-1-2-Res-2021-06
- AWS-TLS13-1-2-Ext1-2021-06
- AWS-TLS13-1-2-Ext2-2021-06
- AWS-TLS13-1-1-2021-06
- AWS-TLS13-1-0-2021-06
- AWS-FS-1-2-Res-2020-10
- AWS-FS-1-2-Res-2019-08
- AWS-FS-1-2-2019-08
- AWS-FS-1-1-2019-08
- AWS-FS-2018-06
- AWS-TLS-1-2-Ext-2018-06
- AWS-TLS-1-2-2017-01
- AWS-TLS-1-1-2017-01
- AWS-2016-08
- AWS-2015-05
- AWS-2015-03
- AWS-2015-02
[!NOTE] The filenames of extra configuration files affect the order in which configuration is applied. nginx reads configuration from the /etc/nginx/conf.d directory in alphabetical order. Note that the configuration managed by nginx-proxy is placed at /etc/nginx/conf.d/default.conf.
To add settings on a per-VIRTUAL_HOST basis, add your configuration file under /etc/nginx/vhost.d. Unlike in the proxy-wide case, which allows multiple config files with any name ending in .conf, the per-VIRTUAL_HOST file must be named exactly after the VIRTUAL_HOST, or if VIRTUAL_HOST is a regex, after the sha1 hash of the regex.

In order to allow virtual hosts to be dynamically configured as backends are added and removed, it makes the most sense to mount an external directory as /etc/nginx/vhost.d as opposed to using derived images or mounting individual configuration files.
For example, if you have a virtual host named app.example.com, you could provide a custom configuration for that host as follows:
create your virtual host config file:
# content of the custom-vhost-config.conf file
client_max_body_size 100m;
mount it to /etc/nginx/vhost.d/app.example.com:
Docker CLI
docker run --detach \
--name nginx-proxy \
--publish 80:80 \
--publish 443:443 \
--volume /var/run/docker.sock:/tmp/docker.sock:ro \
--volume /path/to/custom-vhost-config.conf:/etc/nginx/vhost.d/app.example.com:ro \
nginxproxy/nginx-proxy
If you are using multiple hostnames for a single container (e.g. VIRTUAL_HOST=example.com,www.example.com), the virtual host configuration file must exist for each hostname:
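Here is a sketch following the previous example, mounting the same config file once per hostname:

docker run --detach \
--name nginx-proxy \
--publish 80:80 \
--publish 443:443 \
--volume /var/run/docker.sock:/tmp/docker.sock:ro \
--volume /path/to/custom-vhost-config.conf:/etc/nginx/vhost.d/example.com:ro \
--volume /path/to/custom-vhost-config.conf:/etc/nginx/vhost.d/www.example.com:ro \
nginxproxy/nginx-proxy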
If you want most of your virtual hosts to use a default single configuration and then override on a few specific ones, add those settings to the /etc/nginx/vhost.d/default file. This file will be used on any virtual host which does not have a per-VIRTUAL_HOST file associated with it.
To add settings to the "location" block on a per-VIRTUAL_HOST basis, add your configuration file under /etc/nginx/vhost.d just like in the per-VIRTUAL_HOST section above, except with the suffix _location (as in that section, if your VIRTUAL_HOST is a regex, use the sha1 hash of the regex instead, with the suffix _location appended).
For example, if you have a virtual host named app.example.com and you have configured a proxy_cache my-cache in another custom file, you could tell it to use a proxy cache as follows:
create your virtual host location config file:
# content of the custom-vhost-location-config.conf file
proxy_cache my-cache;
proxy_cache_valid 200 302 60m;
proxy_cache_valid 404 1m;
mount it to /etc/nginx/vhost.d/app.example.com_location:
If you are using multiple hostnames for a single container (e.g. VIRTUAL_HOST=example.com,www.example.com), the virtual host location configuration file must exist for each hostname:
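Here is a sketch mounting the same location config file once per hostname:

docker run --detach \
--name nginx-proxy \
--publish 80:80 \
--publish 443:443 \
--volume /var/run/docker.sock:/tmp/docker.sock:ro \
--volume /path/to/custom-vhost-location-config.conf:/etc/nginx/vhost.d/example.com_location:ro \
--volume /path/to/custom-vhost-location-config.conf:/etc/nginx/vhost.d/www.example.com_location:ro \
nginxproxy/nginx-proxy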
If you want most of your virtual hosts to use a default single location block configuration and then override on a few specific ones, add those settings to the /etc/nginx/vhost.d/default_location file. This file will be used on any virtual host which does not have a per-VIRTUAL_HOST location file associated with it.
The ${VIRTUAL_HOST}_${PATH_HASH}_location, ${VIRTUAL_HOST}_location, and default_location files documented above make it possible to augment the generated location block(s) in a virtual host. In some circumstances, you may need to completely override the location block for a particular combination of virtual host and path. To do this, create a file whose name follows this pattern:

/etc/nginx/vhost.d/${VIRTUAL_HOST}_${PATH_HASH}_location_override

where ${VIRTUAL_HOST} is the name of the virtual host (the VIRTUAL_HOST environment variable), or the sha1 hash of VIRTUAL_HOST when it's a regex, and ${PATH_HASH} is the SHA-1 hash of the path, as described above.

For convenience, the _${PATH_HASH} part can be omitted if the path is /:

/etc/nginx/vhost.d/${VIRTUAL_HOST}_location_override

When an override file exists, the location block that is normally created by nginx-proxy is not generated. Instead, the override file is included via the nginx include directive.

You are responsible for providing a suitable location block in your override file as required for your service. By default, nginx-proxy uses the VIRTUAL_HOST name as the upstream name for your application's Docker container; see here for details. As an example, if your container has a VIRTUAL_HOST value of app.example.com, then to override the location block for / you would create a file named /etc/nginx/vhost.d/app.example.com_location_override that contains something like this:
location / {
proxy_pass http://app.example.com;
}
The per-virtual-host server_tokens directive can be configured by passing an appropriate value to the SERVER_TOKENS environment variable. Please see the nginx http_core module configuration for more details.
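For example, to hide the nginx version for a single virtual host (image name illustrative; the value is passed to the nginx server_tokens directive):

docker run -d -e VIRTUAL_HOST=foo.bar.com -e SERVER_TOKENS=off my-app-image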
To override the default error page displayed on 50x errors, mount your custom HTML error page inside the container at /usr/share/nginx/html/errors/50x.html:
docker run --detach \
--name nginx-proxy \
--publish 80:80 \
--volume /var/run/docker.sock:/tmp/docker.sock:ro \
--volume /path/to/error.html:/usr/share/nginx/html/errors/50x.html:ro \
nginxproxy/nginx-proxy
[!NOTE] This will not replace your own services' error pages.
If you want to proxy non-HTTP traffic, you can use nginx's stream module. Write a configuration file and mount it inside /etc/nginx/toplevel.conf.d.
# stream.conf
stream {
upstream stream_backend {
server backend1.example.com:12345;
server backend2.example.com:12345;
server backend3.example.com:12346;
# ...
}
server {
listen 12345;
#TCP traffic will be forwarded to the "stream_backend" upstream group
proxy_pass stream_backend;
}
server {
listen 12346;
#TCP traffic will be forwarded to the specified server
proxy_pass backend.example.com:12346;
}
upstream dns_servers {
server 192.168.136.130:53;
server 192.168.136.131:53;
# ...
}
server {
listen 53 udp;
#UDP traffic will be forwarded to the "dns_servers" upstream group
proxy_pass dns_servers;
}
# ...
}
docker run --detach \
--name nginx-proxy \
--publish 80:80 \
--publish 12345:12345 \
--publish 12346:12346 \
--publish 53:53/udp \
--volume /var/run/docker.sock:/tmp/docker.sock:ro \
--volume ./stream.conf:/etc/nginx/toplevel.conf.d/stream.conf:ro \
nginxproxy/nginx-proxy
[!NOTE] TCP and UDP streams are not core features of nginx-proxy, so the above is provided as an example only, without any guarantee.
By default, the nginx configuration upstream blocks will use each block's corresponding hostname as a predictable name. However, this can cause issues in some setups (see this issue). In those cases you might want to switch to SHA1 names for the upstream blocks by setting the SHA1_UPSTREAM_NAME environment variable to true on the nginx-proxy container.
[!NOTE] Using regular expressions in VIRTUAL_HOST will always result in a corresponding upstream block with an SHA1 name.
nginx-proxy can also be run as two separate containers using the nginxproxy/docker-gen image and the official nginx image.
You may want to do this to prevent having the docker socket bound to a publicly exposed container service.
You can demo this pattern with docker compose:
docker compose --file docker-compose-separate-containers.yml up
curl -H "Host: whoami.example" localhost
Example output:
I'm 5b129ab83266
To run nginx proxy as a separate container, you'll need to have nginx.tmpl on your host system.
First start nginx with a volume:
docker run -d -p 80:80 --name nginx -v /tmp/nginx:/etc/nginx/conf.d -t nginx
Then start the docker-gen container with the shared volume and template:
docker run --volumes-from nginx \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
-v $(pwd):/etc/docker-gen/templates \
-t nginxproxy/docker-gen -notify-sighup nginx -watch /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
Finally, start your containers with VIRTUAL_HOST environment variables.
docker run -e VIRTUAL_HOST=foo.bar.com ...
services:
nginx-proxy:
image: nginxproxy/nginx-proxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
whoami:
image: jwilder/whoami
expose:
- "8000"
environment:
- VIRTUAL_HOST=whoami.example
- VIRTUAL_PORT=8000
docker compose up
curl -H "Host: whoami.example" localhost
Example output:
I'm 5b129ab83266
If you can't access your VIRTUAL_HOST, inspect the generated nginx configuration:
docker exec <nginx-proxy-instance> nginx -T
Pay attention to the upstream definition blocks, which should look like this:
# foo.example.com
upstream foo.example.com {
## Can be connected with "my_network" network
# Exposed ports: [{ <exposed_port1> tcp } { <exposed_port2> tcp } ...]
# Default virtual port: <exposed_port|80>
# VIRTUAL_PORT: <VIRTUAL_PORT>
# foo
server 172.18.0.9:<Port>;
# Fallback entry
server 127.0.0.1 down;
}
The effective Port is retrieved by order of precedence:

1. From the VIRTUAL_PORT environment variable
2. From the container's exposed port, if there is only one
3. From the default port 80, when none of the above apply

The debug endpoint can be enabled:

- globally by setting the DEBUG_ENDPOINT environment variable to true on the nginx-proxy container.
- per container by setting the com.github.nginx-proxy.nginx-proxy.debug-endpoint label to true on a proxied container.

Enabling it will expose the endpoint at <your.domain.tld>/nginx-proxy-debug.
Querying the debug endpoint will show the global config, along with the virtual host and per path configs in JSON format.
services:
nginx-proxy:
image: nginxproxy/nginx-proxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
environment:
DEBUG_ENDPOINT: "true"
test:
image: nginx
environment:
VIRTUAL_HOST: test.nginx-proxy.tld
(on the CLI, using jq to format the output of curl is recommended)
curl -s -H "Host: test.nginx-proxy.tld" localhost/nginx-proxy-debug | jq
{
"global": {
"acme_http_challenge": "true",
"default_cert_ok": false,
"default_host": null,
"default_root_response": "404",
"enable_access_log": true,
"enable_debug_endpoint": "true",
"enable_http2": "true",
"enable_http3": "false",
"enable_http_on_missing_cert": "true",
"enable_ipv6": false,
"enable_json_logs": false,
"external_http_port": "80",
"external_https_port": "443",
"hsts": "max-age=31536000",
"https_method": "redirect",
"log_format": null,
"log_format_escape": null,
"nginx_proxy_version": "1.7.0",
"resolvers": "127.0.0.11",
"sha1_upstream_name": false,
"ssl_policy": "Mozilla-Intermediate",
"trust_downstream_proxy": true
},
"request": {
"host": "test.nginx-proxy.tld",
"http2": "",
"http3": "",
"https": "",
"ssl_cipher": "",
"ssl_protocol": ""
},
"vhost": {
"acme_http_challenge_enabled": true,
"acme_http_challenge_legacy": false,
"cert": "",
"cert_ok": false,
"default": false,
"enable_debug_endpoint": true,
"hostname": "test.nginx-proxy.tld",
"hsts": "max-age=31536000",
"http2_enabled": true,
"http3_enabled": false,
"https_method": "noredirect",
"is_regexp": false,
"paths": {
"/": {
"dest": "",
"keepalive": "disabled",
"network_tag": "external",
"ports": {
"legacy": [
{
"Name": "wip-test-1"
}
]
},
"proto": "http",
"upstream": "test.nginx-proxy.tld"
}
},
"server_tokens": "",
"ssl_policy": "",
"upstream_name": "test.nginx-proxy.tld",
"vhost_root": "/var/www/public"
}
}
[!WARNING] Please be aware that the debug endpoint works by rendering the JSON response straight into the nginx configuration in plaintext. nginx has an upper limit on the size of the configuration files it can parse, so only activate it when needed, and preferably on a per-container basis if your setup has a large number of virtual hosts.
Before submitting pull requests or issues, please check GitHub to make sure an existing issue or pull request is not already open.
To run tests, you just need to run the command below:
make test
This command runs tests on two variants of the nginx-proxy docker image: Debian and Alpine.
You can run the tests for each of these images with their respective commands:
make test-debian
make test-alpine
You can learn more about how the test suite works and how to write new tests in the test/README.md file.