Nginx – Does nginx allow an upstream server to respond to and close a request before it has finished

nginx, node.js, streaming

I have an image upload service that nginx proxies requests to. Everything works great. Sometimes, though, the server already has the image the user is uploading, so I want to respond early and close the connection.

After reading the headers and checking with the server, I call Node's response.end([data][, encoding][, callback]).
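
For context, here is a minimal sketch of the kind of upstream handler described above (alreadyHaveImage() is a hypothetical stand-in for the duplicate check, and the port matches the upstream block below):

const http = require('http');

// Hypothetical stand-in for "the server already has the image",
// e.g. comparing a hash the client sends in a request header.
function alreadyHaveImage(req) {
  return req.headers['x-image-hash'] !== undefined;
}

http.createServer((req, res) => {
  if (alreadyHaveImage(req)) {
    // Respond early, before the request body has been consumed.
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end('{"status":"duplicate"}');
    return;
  }
  // Normal path: consume the upload, then acknowledge it.
  req.resume();
  req.on('end', () => res.end('{"status":"stored"}'));
}).listen(1337);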

Nginx barfs and returns a blank response:

[error] 3831#0: *12879 readv() failed (104: Connection reset by peer) while reading upstream

My guess is that nginx assumes something bad happened in the upstream server and immediately drops the client connection without forwarding the upstream server's response.

Does anyone know how to properly respond to and close the client's connection when nginx is the proxy? I know this is possible to do; see: sending the response before the request was in.

Here is the nginx conf file:

worker_processes 8; # the number of processors
worker_rlimit_nofile 128; # each connection needs 2 file handles

events {
  worker_connections 128; # two connections per end-user connection (proxy)
  multi_accept on;
  use kqueue;
}

http {
  sendfile on;
  tcp_nopush on; # attempt to send HTTP response head in one packet
  tcp_nodelay off; # keep Nagle's algorithm: wait until we have a full packet's worth of data before sending
  keepalive_timeout 65s;

  include nginx.mime.types;
  default_type application/octet-stream;

  error_log /usr/local/var/log/nginx/error.log;
  log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

  gzip off;

}

upstream upload_service {
  server 127.0.0.1:1337 fail_timeout=0;
  keepalive 64;
}

location /api/upload_service/ {
  # setup proxy to UpNode
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_set_header Host $http_host;
  proxy_set_header X-NginX-Proxy true;
  proxy_set_header Connection "";
  proxy_pass http://upload_service;

  # The timeout is set only between two successive read operations
  proxy_read_timeout 500s;
  # timeout for reading the client request body, only between two successive read operations
  client_body_timeout 30s;
  # maximum allowed size of the client request body, as given in the "Content-Length" request header
  client_max_body_size 64M;
}

Best Answer

You don't mention what your clients are; however, this sounds like something you would achieve with an Expect header. In essence, the client sets an "Expect" header with a "100-continue" expectation and then waits for a 100 Continue response from the server before sending its request body.

If the server does not want to receive the body, it can respond with a final status, and the client does not send the body.
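
For example, a Node.js client could do this as follows (a sketch only; the host, path, and file name are assumptions):

const http = require('http');
const fs = require('fs');

const imageBuffer = fs.readFileSync('photo.png'); // hypothetical file

const req = http.request({
  host: 'localhost',               // assumption: the nginx front end
  path: '/api/upload_service/',
  method: 'POST',
  headers: {
    'Expect': '100-continue',
    'Content-Type': 'image/png',
    'Content-Length': imageBuffer.length,
  },
});

// Fires when the server answers "100 Continue": safe to send the body now.
req.on('continue', () => {
  req.end(imageBuffer);
});

// The final response; if it arrives without a preceding 100 Continue,
// the server declined the body (e.g. it already has the image).
req.on('response', (res) => {
  console.log('final status:', res.statusCode);
  res.resume();
  if (!req.writableEnded) req.end(); // nothing left to send when the body was refused
});

req.on('error', (err) => console.error('request error:', err.message));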

This process is defined in RFC 2616, section 8.2.3.
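
On the Node side, this maps onto the 'checkContinue' event of http.Server. Below is a sketch under the assumption that the Expect header actually reaches the upstream through the proxy; alreadyHaveImage() is the same hypothetical duplicate check as in the sketch in the question:

const http = require('http');

// Hypothetical duplicate check, e.g. against a hash sent in a request header.
function alreadyHaveImage(req) {
  return req.headers['x-image-hash'] !== undefined;
}

// Ordinary upload handling: consume the body, then acknowledge it.
function handleUpload(req, res) {
  req.resume();
  req.on('end', () => {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end('{"status":"stored"}');
  });
}

const server = http.createServer(handleUpload);

// Emitted instead of 'request' when a request carries "Expect: 100-continue".
server.on('checkContinue', (req, res) => {
  if (alreadyHaveImage(req)) {
    // Send a final status instead of 100 Continue; the client never sends the body.
    res.writeHead(409, { 'Content-Type': 'application/json' });
    res.end('{"status":"duplicate"}');
  } else {
    // Tell the client to proceed, then handle it like any other upload.
    res.writeContinue();
    handleUpload(req, res);
  }
});

server.listen(1337);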