How to fix the "upstream timed out (110: Connection timed out)" error in Nginx

When you come across the "upstream timed out (110: Connection timed out)" error in your Nginx error log:

[error] upstream timed out (110: Connection timed out) while reading response header from upstream, 
client:, server:, request: "GET / HTTP/1.1", upstream: "", host: "", referrer: "requested_url"

This means your upstream server took more than 60 seconds to respond. To work around it, change the value of the proxy_read_timeout directive in nginx.conf. This directive sets how long Nginx will wait between two successive read operations from the upstream, not for the whole response; by default it is 60 seconds. Change it to 300 seconds:

server {
    listen       80;

    location / {
        # proxy_pass target is illustrative; use your own backend address
        proxy_pass http://127.0.0.1:8080;
        proxy_read_timeout 300;
    }
}

This should fix the problem.
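If Nginx talks to the backend over FastCGI (e.g. php-fpm) rather than proxy_pass, the equivalent directive is fastcgi_read_timeout. A sketch, with an illustrative socket path:

```nginx
server {
    listen 80;

    location ~ \.php$ {
        # socket path is illustrative; match your php-fpm pool config
        fastcgi_pass unix:/run/php-fpm.sock;
        include fastcgi_params;
        fastcgi_read_timeout 300;
    }
}
```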

  1. beer
    2012-10-09 16:52:08
    you crazy?
    5 minute timeout o_O
  1. Sergey
    2012-10-09 16:53:31
    Why not?
  1. yoba
    2012-10-09 16:54:17
    nice advice thanks, now my apaches are always in "server reached MaxClients"
  1. Sergey
    2012-10-09 16:58:43
    Then you probably have a problem on application side and you need to fix this instead of tuning timeouts.
    You only need to touch timeouts if you know that your application needs it (e.g. uploading huge files or launching heavy processing scripts/parsers, etc.)
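For reference, proxy_read_timeout is one of three related proxy timeout directives, each defaulting to 60 seconds; a sketch (backend address is illustrative):

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;  # illustrative backend
    proxy_connect_timeout 60s;  # establishing a connection to the upstream
    proxy_send_timeout    60s;  # between two successive writes to the upstream
    proxy_read_timeout    60s;  # between two successive reads from the upstream
}
```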
  1. 2012-11-26 01:25:23
    Why would you tie up nginx with a client request for 5 min!!  You're just asking for an easy way to DDoS your site.  Set sane timeouts.  If your server doesn't respond in 5 seconds, you have a much bigger issue.
  1. unixowl
    2012-11-26 05:19:43
    The previous comment answers your question
  1. recarv
    2013-04-27 18:01:18
    Or make the timeout location-specific if you know that one URL needs longer to process, but don't put it in the root location block.
    For example, we have a reporting URL that takes a long time, so we set a different timeout for that URL specifically. That way the DDoS exposure is limited (the attacker would have to know to hit that URL).
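A sketch of this per-location approach (the /reports path and backend address are hypothetical):

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
        # the default 60s read timeout applies here
    }

    location /reports {
        proxy_pass http://127.0.0.1:8080;
        proxy_read_timeout 300;  # only the slow endpoint gets the long timeout
    }
}
```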
  1. 2013-06-12 22:51:28
    Call me crazy, but the ddos talk is unwarranted.  I can send 10k connections to your server in 30s just as easily as I can send them in 300s.
    If you're worried about DDoS, you should be looking at a throttling setting that makes sense.
  1. Guest
    2014-08-04 19:09:53
    this "solution" is almost as awesome as the one I saw saying the way to fix PHP errors is to turn off error logging
  1. Trex
    2014-09-22 16:32:51
    Actually this solution was useful to me as I'm using Nginx only as an http load balancer in front of a bunch of internal corporate J2EE interactive reporting apps that can take quite some time to process their requests.
  1. Yuan Yuan
    2014-10-03 10:18:04
    Thank you, everyone above; you all said exactly what I wanted to ask
  1. Ivan
    2014-11-04 22:10:43
    Does not work for me.
  1. Jimbo
    2014-11-23 13:19:05
    Does not work for me either.
  1. boksi
    2014-12-16 07:54:50
    Does not work for me either.
  1. garlunk
    2014-12-22 16:05:29
    I frankly can't believe the first search result from Google and default answer on the "howtounix" site is to raise the timeout to 5 minutes.  How cocky does a developer have to be to think they can tie up a browser connection for five minutes, let alone tying up the resources on the proxy server itself?  If you need any more than five seconds, you need to spend some time learning ajax or websockets.  Please take this ridiculous advice down!!!
  1. Sergey
    2014-12-22 17:03:13
    And how would ajax help here? It will still need to make the same kind of HTTP request, and it will still die by timeout if you don't set it up as described in the article. The solution offered here is for specific use cases, and it has a right to exist.
  1. Christian Rauchenwald
    2015-01-02 15:46:43
    I can only agree. Setting the timeout to 5 minutes cannot be the solution.
  1. Poorva
    2015-01-09 14:59:50
    Setting the timeout to 5 minutes is not the right solution! Agreed.
  1. ronanm
    2015-01-12 02:41:56
    Fixed my problem, thanks. Got a large config download to a RubyMotion iPad app client that was occasionally taking longer than 60 secs. Upped it to 90 secs. Sorted (for now, of course). Thanks.
  1. Jan
    2015-02-10 04:33:51
    Why not set the timeout to several hours? I'm sure any customers / site visitors will really be happy about that, and will patiently wait in front of an empty screen.
    Really, why bother with finding issues on the application side when increasing timeout is all you need?
    <sarcasm mode off>
  1. Sergey
    2015-02-10 06:52:55
    Because there may be no issue, but some heavy CSV import script in the backend which can take more time to run than usual? Really, you should think wider.
  1. Mathiau
    2015-02-24 02:45:53
    Yeah, my thoughts exactly: there are times, with massive queries and reports that get pulled once in a blue moon, when more than 5 seconds is needed.
    Not every company has endless money to throw at faster and faster hardware all day long and hire teams of developers to redo old slow systems.
  1. Autists galore
    2015-03-04 17:10:07
    Problem: Script times out
    Solution: Increase allowed timeout value
    Just because you are developing some rinkydink website that only needs a half a second to pull 2 queries from a database and display it, on massively overpowered hardware... doesn't mean that people aren't developing massively complex scripts for single usage runs, where this is the correct answer for the given problem.
    There isn't always an underlying problem to 'find'.
    In my case to test fully loaded performance of low power hardware.
  1. 2015-05-06 15:26:57
    Love how the people ("web developers", I assume) who say this is the wrong solution assume that someone is waiting in front of a browser window for 5 minutes for the request to complete.
    There are perfectly legitimate uses for this; in our case, a backend worker for an API that takes more than 60s to complete and that can't be called asynchronously since it has to act on the response.
  1. Mike M
    2015-05-29 14:10:45
    Agreed with the previous comment... everybody here blasting this solution as if it were the devil are making assumptions about what the use case is, exactly.
    I have a client that needs to transfer a 10 MB file once per day from an internal server to my server for data processing, via Nginx & PHP. The upload itself was taking over 60 seconds with their poor connection speed, and the entire operation is a background task that will never affect a website user.
    This solution was perfect for me, and I dare any one of you naysaying chumps tell me why that's wrong.
  1. Viking Robin
    2015-08-20 03:36:26
    I fixed this error with this solution; the upload request takes too much time.
    But I have to say, the DDoS talk is not unwarranted. I had set limit_conn and limit_rate to protect the server, but with a looooong timeout an attacker can do the job with fewer POST requests; it won't even trigger the limit.
    Now I have to handle a DDoS problem about once per week.
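A sketch of the kind of limits mentioned here (zone name and values are illustrative); note that limit_conn caps concurrent connections per IP but does not shorten how long each one may be held open:

```nginx
http {
    limit_conn_zone $binary_remote_addr zone=peraddr:10m;

    server {
        listen 80;

        location /upload {
            limit_conn peraddr 5;     # at most 5 concurrent connections per IP
            limit_rate 100k;          # throttle each response to 100 KB/s
            client_body_timeout 15s;  # drop slow request bodies early
            proxy_pass http://127.0.0.1:8080;
            proxy_read_timeout 300;   # long timeout only where it is needed
        }
    }
}
```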
  1. V Subedi
    2015-09-03 21:04:24
    It's just buying more time and slowing things down. It's not a fix. 
  1. zar
    2015-09-16 20:35:34
    I'm using an enterprise grade application, so i need 5 minutes.  Òó
  1. PrestaShop User
    2015-10-18 22:47:48
    This was a big help. Thank you VERY much.
    One of the PrestaShop modules (copying data to another site) needs, in my experience, 330 seconds to complete successfully.
  1. Patrick Duc
    2016-02-04 15:52:33
    This is a necessary solution in some cases. For instance, when huge files (1 GB+) are to be downloaded and the hardware is not the most efficient at dealing with disk IO.
    I fully agree with the comment saying that not all HTTP requests are performed by humans waiting in front of a browser. Please take into account that there are web services that have to deal with requests of very long duration.
  1. Razee
    2016-11-23 10:37:11
    The people who say a 5 minute timeout is never the answer assume too much. It is almost always a bad idea if you're talking about an internet-facing website that is open and possibly discoverable to the world. Almost.
    Let's talk about an intranet setup, where Nginx is in front of some reporting servers, or servers that do heavy image processing and editing and return the results/status. You could spend time writing more asynchronous programs, websockets, and everything that's good. But I'd rather spend that time on the internet-facing servers.
  1. Rumesh
    2017-01-13 05:27:17
    This will solve the above issue.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
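The two directives above matter when proxying to an upstream block with keepalive connections; a sketch (upstream address is illustrative):

```nginx
upstream backend {
    server 127.0.0.1:8080;  # illustrative backend
    keepalive 16;           # number of idle keepalive connections to cache
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # keepalive requires HTTP/1.1
        proxy_set_header Connection "";  # clear the Connection header
    }
}
```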
  1. nikk wong
    2016-06-15 08:54:01
    this is the stupidest article i've ever read
  1. Georgi
    2016-07-19 09:22:33
    The fix is working :) But it is not fixing the application, just this error. What is causing this error is really another question, and how to fix that is another question too. Every question raises further questions: what is the app, what is the hardware, what is the request... what, who, where. That's why this answer is correct for this question. Of course, the OP should have added at the end that this is not a fix for the real problem, and the real fix depends on the real situation.
  1. Shyam
    2016-07-21 13:49:34
    I got this error (upstream timed out (110: Connection timed out) while reading response header from upstream) with a Meteor application. It needs to establish socket connections, and the Meteor architecture downloads static content/images/JS files and data into the client's memory on the first request to the server. This needs more than 60 seconds, and this solution helped me a lot; it is even suggested on most of the other sites. Thanks
