How to fix the "upstream timed out (110: Connection timed out)" error in Nginx

When you come across the "upstream timed out (110: Connection timed out)" error in your Nginx log:

[error] upstream timed out (110: Connection timed out) while reading response header from upstream, 
client: xxx.xxx.xxx.xxx, server: howtounix.info, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080", host: "howtounix.info", referrer: "requested_url"

This means that the upstream server behind nginx (here, whatever is listening on 127.0.0.1:8080) takes more than 60 seconds to respond. To work around it, change the value of the proxy_read_timeout directive in nginx.conf. This directive defines how long nginx will wait for a response from the proxied server (the timeout is counted between two successive read operations, not for the whole response); the default is 60 seconds. Change it to 300 seconds:

server {
    listen       80;
    server_name  howtounix.info;
 
    location / {
        ...
        proxy_read_timeout 300;  # wait up to 300s for a read from the upstream
        ...
    }
    ...
}

This should fix the problem.
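
Note that nginx only picks up the change after its configuration is re-read. A quick way to check the syntax and apply the new value (standard nginx command-line options; depending on your system you may need root, or your init script's wrapper such as service nginx reload):

nginx -t          # test the configuration for syntax errors
nginx -s reload   # signal the master process to re-read the configuration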

  1. beer
    2012-10-09 16:52:08
    you crazy?
    5 minutes timeout o_O
  1. Sergey
    2012-10-09 16:53:31
    Why not?
  1. yoba
    2012-10-09 16:54:17
    Nice advice, thanks; now my Apaches are always in "server reached MaxClients".
  1. Sergey
    2012-10-09 16:58:43
    Then you probably have a problem on the application side, and you need to fix that instead of tuning timeouts.
    You only need to touch timeouts if you know that your application needs it (e.g. uploading huge files or launching heavy processing scripts/parsers, etc.).
  1. 2012-11-26 01:25:23
    Why would you tie up nginx with a client request for 5 min!!  You're just asking for an easy way to DDoS your site.  Set sane timeouts.  If your server doesn't respond in 5 seconds, you have a much bigger issue.
  1. unixowl
    2012-11-26 05:19:43
    The previous comment answers your question.
  1. recarv
    2013-04-27 18:01:18
    Or make it a location-specific timeout if you know that one URL needs longer to process, but don't put that in the root location block (see the sketch after this thread).
    
    For example, we have a reporting URL that takes a long time, so we set a different timeout for that URL specifically; that way the DDoS exposure is limited (an attacker would have to know to hit that URL).
  1. 2013-06-12 22:51:28
    Call me crazy, but the DDoS talk is unwarranted. I can send 10k connections to your server in 30s just as easily as I can send them in 300s.
    
    If you're worried about DoS, you should be looking at a throttling setting that makes sense.
  1. Guest
    2014-08-04 19:09:53
    This "solution" is almost as awesome as the one I saw saying the way to fix PHP errors is to turn off error logging.
  1. Trex
    2014-09-22 16:32:51
    Actually, this solution was useful to me, as I'm using Nginx only as an HTTP load balancer in front of a bunch of internal corporate J2EE interactive reporting apps that can take quite some time to process their requests.
  1. Yuan Yuan
    2014-10-03 10:18:04
    Thank you all above; you said exactly what I wanted to ask.
  1. Ivan
    2014-11-04 22:10:43
    Does not work for me.
  1. Jimbo
    2014-11-23 13:19:05
    Does not work for me either.
  1. boksi
    2014-12-16 07:54:50
    Does not work for me too.
  1. garlunk
    2014-12-22 16:05:29
    I frankly can't believe that the first search result from Google and the default answer on the "howtounix" site is to raise the timeout to 5 minutes. How cocky does a developer have to be to think they can tie up a browser connection for five minutes, let alone tie up the resources on the proxy server itself? If you need any more than five seconds, you need to spend some time learning Ajax or WebSockets. Please take this ridiculous advice down!!!
    
  1. Sergey
    2014-12-22 17:03:13
    And how would Ajax help here? It will still need to make the same kind of HTTP request, and it will still die by timeout if you don't set it up as described in the article. The solution offered here is for specific use cases, and it has a right to live.
  1. Christian Rauchenwald
    2015-01-02 15:46:43
    I can only agree. Setting the timeout to 5 minutes cannot be the solution.
  1. Poorva
    2015-01-09 14:59:50
    Setting the timeout to 5 minutes is not the right solution! Agreed.
  1. ronanm
    2015-01-12 02:41:56
    Fixed my problem, thanks. Got a large config download to a RubyMotion iPad app client that was occasionally taking longer than 60 seconds. Upped it to 90 seconds. Sorted (for now, of course). Thanks.
  1. Jan
    2015-02-10 04:33:51
    Why not set the timeout to several hours? I'm sure any customers / site visitors will really be happy about that, and very patient to wait in front of an empty screen.
    
    Really, why bother with finding issues on the application side when increasing timeout is all you need?
    
    <sarcasm mode off>
  1. Sergey
    2015-02-10 06:52:55
    Because there may be no issue, just some heavy CSV import script in the backend that can take more time to run than usual? Really, you should think wider.
  1. Mathiau
    2015-02-24 02:45:53
    Yeah, my thoughts exactly; there are massive queries and reports that get pulled once in a blue moon where more than 5 seconds is needed.
    
    Not every company has endless money to throw at faster and faster hardware all day long and hire teams of developers to redo old slow systems.
  1. Autists galore
    2015-03-04 17:10:07
    Problem: Script times out
    Solution: Increase allowed timeout value
    
    -
    Just because you are developing some rinky-dink website that only needs half a second to pull 2 queries from a database and display them, on massively overpowered hardware, doesn't mean that people aren't developing massively complex scripts for single-usage runs, where this is the correct answer for the given problem.
    
    There isn't always an underlying problem to 'find'.
    
    In my case, it's to test fully loaded performance of low-power hardware.
  1. 2015-05-06 15:26:57
    Love how the people ("web developers", I assume) who say this is the wrong solution assume that there are people waiting in front of a browser window for 5 minutes for the request to complete.
    
    There are perfectly legitimate uses for this; in our case, a backend worker for an API that takes more than 60s to complete and that can't be called asynchronously, since it has to act on the response.
  1. Mike M
    2015-05-29 14:10:45
    Agreed with the previous comment... everybody here blasting this solution as if it were the devil is making assumptions about what the use case is, exactly.
    
    I have a client that needs to transfer a 10Mb file once per day from an internal server to my server for data processing, via Nginx & PHP. The upload itself was taking over 60 seconds with their poor connection speed, and the entire operation is a background task that will never affect a website user.
    
    This solution was perfect for me, and I dare any one of you naysaying chumps to tell me why that's wrong.
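
A minimal sketch of the location-specific approach recarv describes above, assuming a hypothetical /reports URL and the 127.0.0.1:8080 upstream from the article (both are illustrative, not taken from recarv's setup):

server {
    listen       80;
    server_name  howtounix.info;

    # All ordinary pages keep the default 60-second proxy_read_timeout.
    location / {
        proxy_pass http://127.0.0.1:8080;
    }

    # Only the known-slow reporting URL is allowed to take up to 5 minutes.
    location /reports {
        proxy_pass http://127.0.0.1:8080;
        proxy_read_timeout 300;
    }
}

This keeps the relaxed timeout off the root location, so every other request still fails fast.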
