Saturday, July 31, 2010

Re: nginx.conf issue?

If I remove any one of the above five "if" blocks, this error no longer
appears. Maybe it's an nginx issue?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,115070,115324#msg-115324


_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Maxmind GeoIP City Module

Igor,

Sorry for resurrecting this old thread, but I've been working on
deploying Nginx, and I noticed that the GeoIP City module is missing a
couple of variables that are set by the equivalent mod_geoip module for
Apache.

The variables in question (as set by mod_geoip) are:

GEOIP_ADDR
GEOIP_REGION_NAME
GEOIP_DMA_CODE
GEOIP_AREA_CODE

I'm not certain what information you'd need to add these to the module,
but I'd love to see the GeoIP module be as inclusive as its Apache
equivalent.

Let me know if there's any information you're lacking and I'll do my
best to dig it up.
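(For reference, once such variables exist they could be passed straight to a backend. The sketch below is purely illustrative, and the $geoip_* variable names in it are hypothetical, simply mirroring the Apache names rather than anything the module currently provides:)

[code]
location ~ \.php$ {
    fastcgi_pass   127.0.0.1:9000;
    # hypothetical variables mirroring mod_geoip's names -- not provided by the module today
    fastcgi_param  GEOIP_ADDR         $geoip_addr;
    fastcgi_param  GEOIP_REGION_NAME  $geoip_region_name;
    fastcgi_param  GEOIP_DMA_CODE     $geoip_dma_code;
    fastcgi_param  GEOIP_AREA_CODE    $geoip_area_code;
}
[/code]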

Thanks!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,3812,115209#msg-115209


_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: DB Relay - NGiNX based open source project

Hi,

FreeTDS (note: I am the original author of FreeTDS) implements the
APIs used by Sybase and Microsoft, which means dblib, ctlib, and
ODBC. ctlib has support for asynchronous communication but does not
expose the socket file descriptor to the outside world; since
it's open source that could be made to happen, it's just not in the
API at present.

I'm interested in your thinking on using pthreads to work around the
blocking API; I've been using a coprocess communicating over unix
domain sockets to do the same thing (pending conversion to a
non-blocking module, of course).

Server-side JavaScript is one idea; the V8 stuff looks very
interesting. The primary use was for intranet users where the
database credentials were supplied by the users themselves, so the
credentials problem was not an issue in that context.

Brian

2010/7/31 agentzh <agentzh@gmail.com>:
> On Fri, Jul 23, 2010 at 6:18 AM, Piotr Sikora <piotr.sikora@frickle.com> wrote:
>> It's great that you're also supporting MS SQL Server (via FreeTDS),
>
> Hmm, it'll be interesting to implement an ngx_freetds or ngx_sqlserver
> module using FreeTDS and the infrastructure used by both ngx_drizzle
> and ngx_postgres. I myself don't know how well the non-blocking
> support of FreeTDS is. Hopefully it does not suck as Oracle's OCI, or
> we'll have to use internal pthreads to work around those APIs that
> cannot be made non-blocking :)
>
> And as others have already pointed out, I'm scared to see db passwords
> and raw SQL query strings appear in JavaScript code on DB Relay's home
> page. Hopefully that piece of JS is intended to run on server side
> only (via the ngx_js module or V8 or something else) ;)
>
> Cheers,
> -agentzh
>
> P.S. I'm one of the authors of ngx_drizzle and ngx_postgres :)
>
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://nginx.org/mailman/listinfo/nginx
>

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Friday, July 30, 2010

Re: DB Relay - NGiNX based open source project

On Fri, Jul 23, 2010 at 6:18 AM, Piotr Sikora <piotr.sikora@frickle.com> wrote:
> It's great that you're also supporting MS SQL Server (via FreeTDS),

Hmm, it'll be interesting to implement an ngx_freetds or ngx_sqlserver
module using FreeTDS and the infrastructure used by both ngx_drizzle
and ngx_postgres. I myself don't know how good the non-blocking
support in FreeTDS is. Hopefully it does not suck like Oracle's OCI, or
we'll have to use internal pthreads to work around those APIs that
cannot be made non-blocking :)

And as others have already pointed out, I'm scared to see db passwords
and raw SQL query strings appear in JavaScript code on DB Relay's home
page. Hopefully that piece of JS is intended to run on the server side
only (via the ngx_js module or V8 or something else) ;)

Cheers,
-agentzh

P.S. I'm one of the authors of ngx_drizzle and ngx_postgres :)

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

nginx.conf issue?

Hi, all.
If I add more than four "if" blocks in the location section, then when I run
the command "killall nginx" or "/dir/nginx -s stop" to stop the nginx process,
I find the following error in the /var/log/messages
file: [code]Jul 31 10:49:14 localhost kernel: nginx[2314]:
segfault at 0000000000000000 rip 00002b0d618c8ce7 rsp 00007fffbcace590
error 4[/code]


My OS is CentOS 5.5. An example of the "location" config is below:
[code]
location ~ .*\.(php|php5)?$
{
    if ($args ~ "mosConfig_[a-zA-Z_]{1,21}(=|\%3D)") {
        set $args "";
        rewrite ^.*$ http://$host/index.php last;
        return 403;
    }
    if ($args ~ "base64_encode.*\(.*\)") {
        set $args "";
        rewrite ^.*$ http://$host/index.php last;
        return 403;
    }
    if ($args ~* "(\<|%3C).*script.*(\>|%3E)") {
        set $args "";
        rewrite ^.*$ http://$host/index.php last;
        return 403;
    }
    if ($args ~ "GLOBALS(=|\[|\%[0-9A-Z]{0,2})") {
        set $args "";
        rewrite ^.*$ http://$host/index.php last;
        return 403;
    }
    if ($args ~ "_REQUEST(=|\[|\%[0-9A-Z]{0,2})") {
        set $args "";
        rewrite ^.*$ http://$host/index.php last;
        return 403;
    }
    .
    .
    .
}
[/code]
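(As a side note while the crash is being looked at, one workaround is to stay under the threshold by folding the checks into a single regular expression. This is only a rough sketch and not an exact equivalent, since the original mixes case-sensitive and case-insensitive matches:)

[code]
location ~ .*\.(php|php5)?$
{
    if ($args ~* "mosConfig_[a-zA-Z_]{1,21}(=|\%3D)|base64_encode.*\(.*\)|(\<|%3C).*script.*(\>|%3E)|GLOBALS(=|\[|\%[0-9A-Z]{0,2})|_REQUEST(=|\[|\%[0-9A-Z]{0,2})") {
        set $args "";
        rewrite ^.*$ http://$host/index.php last;
    }
    .
    .
    .
}
[/code]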

Kind Regards.
filebackup
07/31/2010

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,115070,115070#msg-115070


_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Help Required: Host name modification.

Hello!

On Tue, Jul 27, 2010 at 04:58:49PM +0530, Sougata Pal. wrote:

> I have tried the following config, but it is generating error while
> restarting the server.
>
> *Error: *[emerg]: unknown "new_host" variable
>
> server {
> listen 80;
> server_name ~(?P<new_host>.*)\.xyz\.com;

Named captures in regular expressions are supported in 0.8.25+.
You are probably using an older version.
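
On a recent enough version, the capture can then be used like any other variable; a minimal sketch (the "backend" upstream below is made up for illustration):

server {
    listen       80;
    server_name  ~(?P<new_host>.*)\.xyz\.com;

    location / {
        # $new_host holds whatever matched before .xyz.com
        proxy_pass        http://backend;
        proxy_set_header  Host  $new_host.xyz.com;
    }
}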

Maxim Dounin

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: [NGINX] proxy_cache_use_stale updating and X-Accel-Expires=0

2010/7/30 Maxim Dounin <mdounin@mdounin.ru>
>
> Hello!
>
> On Wed, Jul 21, 2010 at 11:06:26AM +0200, Jérôme Loyet wrote:
>
> > I have a strange situation. I don't really know if it's a bug or a feature.
> >
> > I'm using nginx as a cached reverse proxy to apache/mod_php.
> >
> > I have the following (simplified) conf:
> >
> > proxy_cache_use_stale updating error timeout invalid_header http_500
> > http_502 http_503 http_504;
>
> [...]
>
> > Some pages returns the X-Accel-Expires=0 header to exclude a page from
> > the cache. It works great. But sometimes, nginx returns those page
> > with a 404 response without fetching the page to the backend server.
> > Those pages are marked as UPDATING in logs.
>
> Directive proxy_cache_use_stale set to "updating" instructs nginx
> to return stale cached response if one request to the same uri
> is already goes to backend.
>
> > How a page excluded from the cache can be wrongly (404 instead of 200)
> > served by the cache with an UPDATING status ?
>
> Looks like you happen to have stale document with 404 status in
> cache (e.g. cached during debugging).  It's not expunged from
> cache as it's active (i.e. frequently requested), and not updated
> in cache as you return X-Accel-Expires=0 in responses.
>
> But it's in cache, and once you get more than one simulteneous
> request to this document - nginx returns stale response from cache
> as it was said to.

The page always returns X-Accel-Expires=0; shouldn't it then never be stored in
the cache in the first place?

>
> Maxim Dounin
>
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://nginx.org/mailman/listinfo/nginx

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Question about redirect?

Hello!

On Thu, Jul 22, 2010 at 12:08:16PM +0200, Slask Sklask wrote:

> Firstly I'm a newbie on both mailing lists AND nginx so please be nice to me
> :D
>
> I use nginx as a reverse proxy and it works flawlessly.
> But now I want to put basic authentication on certain ip addresses.
> IE.
>
> If I come from 10.0.0.0/24 network it should go through as usual.
> But if I come from 10.100.0.0/24 it should have basic authentication.
> Is there any way to accomplish this?


location / {
    satisfy any;

    allow 10.0.0.0/24;
    deny all;

    auth_basic "closed site";
    auth_basic_user_file conf/htpasswd;
}

Most relevant link is:

http://wiki.nginx.org/NginxHttpCoreModule#satisfy_any

The "satisfy_any" directive was replaced by "satisfy" in 0.6.25.
Somebody has to fix the wiki.

Maxim Dounin

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: [NGINX] proxy_cache_use_stale updating and X-Accel-Expires=0

Hello!

On Wed, Jul 21, 2010 at 11:06:26AM +0200, Jérôme Loyet wrote:

> I have a strange situation. I don't really know if it's a bug or a feature.
>
> I'm using nginx as a cached reverse proxy to apache/mod_php.
>
> I have the following (simplified) conf:
>
> proxy_cache_use_stale updating error timeout invalid_header http_500
> http_502 http_503 http_504;

[...]

> Some pages return the X-Accel-Expires=0 header to exclude them from
> the cache. It works great. But sometimes nginx returns those pages
> with a 404 response without fetching the page from the backend server.
> Those pages are marked as UPDATING in the logs.

The proxy_cache_use_stale directive set to "updating" instructs nginx
to return a stale cached response if another request to the same URI
is already going to the backend.

> How can a page excluded from the cache be wrongly served (404 instead
> of 200) from the cache with an UPDATING status?

Looks like you happen to have a stale document with 404 status in the
cache (e.g. cached during debugging). It's not expunged from the
cache as it's active (i.e. frequently requested), and not updated
in the cache since you return X-Accel-Expires=0 in responses.

But it's in the cache, and once you get more than one simultaneous
request to this document, nginx returns the stale response from the
cache, as it was told to.

Maxim Dounin

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Nginx Active & Writing Connections exponential growth.

Hello!

On Thu, Jul 29, 2010 at 02:10:23PM +0200, Antonio Jerez Guillén wrote:

> Hi there!
> First of all, I am a new user of the nginx mailing list so there is a short
> introduction of myself. My name is Antonio, and I have been working for a
> long time as Web Operations Engineer at Uptodown.com, VisualizeUs.com and
> other minor projects.
>
> We are running nginx in Uptodown.com for static content, without problems,
> and in VisualizeUs.com, since 2 years ago, for static and dynamic content
> without problems . But it is in visualizeUs where we are having some
> problems, since we updated nginx to nginx/0.7.67, under debian Lenny 32 bits
> (aws medium instance), with php-fcgi
>
> There is the munin output for nginx connections.
> http://i29.tinypic.com/21y781.png

1. Are you sure you are measuring connections here? The numbers look
odd: one can't have 280.57 active connections (which is shown on
the first graph as "max"); it should be an integer.

2. Do these connections correspond to real client and/or backend
connections (i.e. compare with netstat data)? Most likely not, but
the numbers are suspiciously low. (A stub_status sketch for
cross-checking is below.)

3. What does nginx -V show? Do you have any third-party modules /
patches? If yes, are you able to reproduce the problem without
them?

4. Are you able to reproduce the problem with 0.8.*?
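
(For cross-checking point 2, the usual source for such graphs is the stub_status counters; a minimal sketch, assuming nginx was built with --with-http_stub_status_module:)

location /nginx_status {
    stub_status  on;
    access_log   off;
    allow        127.0.0.1;
    deny         all;
}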

> We noticed, an exponential growth for connections, better apreciated through
> the graphic.

Looks like a linear one, not exponential, but either way it looks like
a socket leak.

I don't remember any socket leaks in the 0.7.* branch that
aren't fixed in 0.7.67. But things changed a lot in the 0.8.* branch,
and this particular problem may not even appear with the new code.

> There is our nginx configuration:
>
> http://dpaste.com/hold/223096/

This doesn't contain the full nginx config, so it's mostly useless.

[...]

> As active connections grow, response time sligthly increases, so a nginx
> reboot is needed to get back to optimal response time.Nginx reboots are
> apreciated in munin graphic when it returns to normal values, in other case,
> exponential grows continues.

Could you please clarify what you mean by "slightly increases"?
With epoll, an extra 1k idle connections shouldn't be noticeable.

Maxim Dounin

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: subrequest failover

Hello!

On Wed, Jul 28, 2010 at 05:36:11PM -0700, James Lyons wrote:

> I am trying to use nginx as reverse proxy to perform failover from 1
> critical service to another.
>
> When processing a request, it sends a request (http GET) to a backend
> (custom http server) using SubRequest methods. If it is for some
reason down, and there is no response, I'd like it to fail over to a
> second server. Is there a way to do this in conf settings or do I
> have to make code modifications to accomplish this?

Subrequests in nginx aren't really different from ordinary
requests; they are handled in the same way, with location
matching and so on. Setting up failover correctly in the config (either
with proxy_next_upstream or via error_page) will do the trick for
both normal requests and subrequests.
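
A minimal sketch of the proxy_next_upstream variant (host names are made up):

upstream backends {
    server  primary.example.com:8080;
    server  backup.example.com:8080  backup;
}

location /service/ {
    proxy_pass           http://backends;
    # on a connection error or timeout, retry the request on the next server
    proxy_next_upstream  error timeout;
}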

Maxim Dounin

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: proxy_hide_header hiding header from ALL locations

Hello!

On Wed, Jul 28, 2010 at 02:47:28PM -0700, W. Andrew Loe III wrote:

> It looks like Content-Type is special cased:
>
> Changes with nginx 0.3.58 14 Aug 2006
>
> *) Feature: the "error_page" directive supports the variables.
>
> *) Change: now the procfs interface instead of sysctl is used on Linux.
>
> *) Change: now the "Content-Type" header line is inherited from first
> response when the "X-Accel-Redirect" was used.
>
> How can I get this behavior for ETag and Last-Modified as well?

Aha, now it's clear that you want to inherit some headers from the
original response when using X-Accel-Redirect. Here is the solution:

location ... {
    set $x $upstream_http_...;
    add_header Something $x;

    proxy_pass ...
    ...
}

Note that you have to use an extra "set" to preserve the
$upstream_http_... variable from the original response.
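
Applied to the ETag case from the original question, a sketch might look like this (location name and backend are made up):

location /downloads/ {
    # preserve the header from the original response before the redirected request overwrites it
    set         $etag  $upstream_http_etag;
    add_header  ETag   $etag;

    proxy_pass  http://backend;
}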

Maxim Dounin

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Questions about Internal named location

On Fri, Jul 30, 2010 at 09:26:42PM +0200, Rob Schultz wrote:

> > This all started when I tried to setup one nginx server that has
> > multiple php sites under one url that needs a alias for the other sites.
> > Something like this:
> >
> > root /var/www/somesite;
> >
> > location / {
> > index.html index.php;
> > }
> > location /wordpress {
> > alias /var/www/wordpress;
> > try_files $uri $uri/ @wordpress;
> > }
> > ....
>
> from the wiki http://wiki.nginx.org/Wordpress
> might try this format
> location /wordpress {
> try_files $uri $uri/ /wordpress/index.php?q=$uri&args;
> }
>
> and then have your normal php location block for fastcgi settings.

It's better to set a script name and a query string directly in
fastcgi_param to avoid surplus copy operations:

location /wordpress {
    root /var/www;
    try_files $uri $uri/ @wordpress;
}

location @wordpress {
    ...
    fastcgi_param SCRIPT_FILENAME /var/www/wordpress/index.php;
    fastcgi_param QUERY_STRING q=$uri&$args;
}
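
Just to spell out the elided part: the named location also needs the usual fastcgi plumbing, roughly along these lines (the backend address is made up):

location @wordpress {
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_param  SCRIPT_FILENAME  /var/www/wordpress/index.php;
    fastcgi_param  QUERY_STRING     q=$uri&$args;
    # ... plus whatever other fastcgi_param settings the site normally uses
}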


--
Igor Sysoev
http://sysoev.ru/en/

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Questions about Internal named location

Hi, 

On Jul 30, 2010, at 8:31 PM, heimdull wrote:


> index.php is displayed in the browser correctly so my question is why
> does the @www location not correctly handle the php file?

It's because index.php is a real file, so it never gets passed off to @www. try_files tests for real files on the filesystem and returns that URL if the test is successful; if it is unsuccessful, the request is passed off to @www.

> This all started when I tried to setup one nginx server that has
> multiple php sites under one url that needs a alias for the other sites.
> Something like this:
>
> root /var/www/somesite;
>
> location / {
>    index.html index.php;
> }
> location /wordpress {
>    alias /var/www/wordpress;
>    try_files $uri $uri/ @wordpress;
> }
> ....

might try this format
location /wordpress {
  try_files $uri $uri/ /wordpress/index.php?q=$uri&args;
}

and then have your normal php location block for fastcgi settings. 

Questions about Internal named location

I have a problem I can't find an answer to, and I'm wondering if I'm
doing the named internal location all wrong. Here is an example of my
config:

location / {
try_files $uri $uri/ @www;
}

location @www {
include fastcgi_params;

fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass myphp;
}

When I go to /index.php, index.php is treated as a binary file
and the browser just wants to download it. The file that is
downloaded is correct HTML, so I know that the request hit myphp before it
was sent to my browser.

If I add this location:

location ~ \.php$ {
fastcgi_pass myphp;
fastcgi_param SCRIPT_FILENAME
$document_root$fastcgi_script_name;
}

index.php is displayed in the browser correctly so my question is why
does the @www location not correctly handle the php file?

This all started when I tried to set up one nginx server that hosts
multiple PHP sites under one URL, which needs an alias for the other sites.
Something like this:

root /var/www/somesite;

location / {
index.html index.php;
}
location /wordpress {
alias /var/www/wordpress;
try_files $uri $uri/ @wordpress;
}
....

Can anyone shed some light on how the try_files/named location works?
I'm using Nginx 0.8.36 btw

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,114926,114926#msg-114926


_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Performance question regarding regex

Thank you again for your quick answer.

Igor Sysoev Wrote:
-------------------------------------------------------
> On Thu, Jul 29, 2010 at 07:44:54PM -0400, panni wrote:
>
> > Hey, thank you for your answer.
> >
> > try_files only seems to hit when a root directive is set in the config.
> > try_files only works on /home/www$uri files, not on /www/html$uri
> > files.
> > Is this intended? How can I make it work when a file called
> > /www/html/asdf.php is requested by /asdf.php?
>
> root /;
> try_files home/www$uri www/html$uri @ksphp;
>
> but probably it's better to use:
>
> location = /asdf.php {
> fastcgi_pass ...
> fastcgi_param SCRIPT_FILENAME /www/html$uri;
> ....
> }

Is try_files over three possible locations really that slow, or why are you
suggesting the exact-match location for the literal string?

> > location / {
> > root /home/www;
> > try_files $uri "/www/html$uri" @ksphp;
> >
> > if ($request_filename ~* \.(js|css)$) {
> > return 404;
> > }
> >
> > index index.php;
> > expires 30d;
> > }
>
> Do not use "if ($request_filename":
>
> location / {
> root /home/www;
> try_files $uri @ksphp;
>
> index index.php;
> expires 30d;
> }
>
> location ~* \.(js|css)$ {
> return 404;
> }

Thank you. But why would you do this rather than an if clause? If it's
faster, or my way is a no-go, do you have some sort of compendium of dos
and don'ts for nginx?

>
> --
> Igor Sysoev
> http://sysoev.ru/en/
>
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://nginx.org/mailman/listinfo/nginx

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,109438,114822#msg-114822


_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Trouble with rewrite module and uri_escape (and patch)

On Fri, Jul 30, 2010 at 07:01:42PM +0900, Daisuke Murase (typester) wrote:

> Hi,
>
> I have nginx/0.7.67 and a following rewrite setting:
>
> rewrite ^/entry/(.*) /entry?title=$1;
>
> When request uri contains %3b (escaped ;), it's decoded like following:
>
> Request: /entry/abc%3bdef
> Result: /entry?title=abc;def
>
> But %26 (escaped &) is not decoded:
>
> Request: /entry/abc%26def
> Result: /entry?title=abc%26def
>
> At HTTP URI spec, ; should be treated as query separator equal to & IMHO
>
> I wrote quick fix for this:
> http://gist.github.com/500097
>
> I don't know this is correct way, but I want to fix this problem.
> Review this please.

You are right, thank you.


--
Igor Sysoev
http://sysoev.ru/en/

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Trouble with rewrite module and uri_escape (and patch)

Hi,

I have nginx/0.7.67 and the following rewrite setting:

rewrite ^/entry/(.*) /entry?title=$1;

When the request URI contains %3b (an escaped ;), it is decoded as follows:

Request: /entry/abc%3bdef
Result: /entry?title=abc;def

But %26 (escaped &) is not decoded:

Request: /entry/abc%26def
Result: /entry?title=abc%26def

Per the HTTP URI spec, ; should be treated as a query separator just like &, IMHO.

I wrote quick fix for this:
http://gist.github.com/500097

I don't know if this is the correct way, but I want to fix this problem.
Please review it.

Regards,

--
Daisuke Murase (typester)

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Performance question regarding regex

On Thu, Jul 29, 2010 at 07:44:54PM -0400, panni wrote:

> Hey, thank you for your answer.
>
> try_files only seems to hit when a root directive is set in the config.
> try_files only works on /home/www$uri files, not on /www/html$uri
> files.
> Is this intended? How can I make it work when a file called
> /www/html/asdf.php is requested by /asdf.php?

root /;
try_files home/www$uri www/html$uri @ksphp;

but probably it's better to use:

location = /asdf.php {
fastcgi_pass ...
fastcgi_param SCRIPT_FILENAME /www/html$uri;
....
}

> location / {
> root /home/www;
> try_files $uri "/www/html$uri" @ksphp;
>
> if ($request_filename ~* \.(js|css)$) {
> return 404;
> }
>
> index index.php;
> expires 30d;
> }

Do not use "if ($request_filename":

location / {
root /home/www;
try_files $uri @ksphp;

index index.php;
expires 30d;
}

location ~* \.(js|css)$ {
return 404;
}


--
Igor Sysoev
http://sysoev.ru/en/

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Thursday, July 29, 2010

Re: Does nginx support connect method?

On Thu, Jul 29, 2010 at 8:21 PM, <nginxlist@serverphorums.com> wrote:
> Could you include connect method in nginx to give us an alternative to squid?
>
> Squid can be notorious for its CPU usage.
>
> There may be a Klondike bar in it for you. :-)

Check out Apache Traffic Server, which has a forward-proxy mode and
certainly seems to be a lot more modern in design than squid:
http://trafficserver.apache.org/docs/v2/admin/explicit.htm

--
RPM

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: hightraffic php site problems

On Thu, Jul 29, 2010 at 8:57 PM, Juergen Gotteswinter <jg@internetx.de> wrote:
> found the problem... our customer created a cronjob which dumped the mysql
> db every 2 hours. after mysqldump starts, a few secs later nginx was in
> trouble.

Try InnoDB with mysqldump --single-transaction
It would NOT lock tables.
--
Ren Xiaolei

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Does nginx support connect method?

Could you include the CONNECT method in nginx to give us an alternative to squid?

Squid can be notorious for its CPU usage.

There may be a Klondike bar in it for you. :-)

---
posted at http://www.serverphorums.com
http://www.serverphorums.com/read.php?5,9758,177875#msg-177875

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Performance question regarding regex

Hey, thank you for your answer.

try_files only seems to hit when a root directive is set in the config.
try_files only works on /home/www$uri files, not on /www/html$uri
files.
Is this intended? How can I make it work when a file called
/www/html/asdf.php is requested by /asdf.php?

[code]
location / {
root /home/www;
try_files $uri "/www/html$uri" @ksphp;

if ($request_filename ~* \.(js|css)$) {
return 404;
}

index index.php;
expires 30d;
}
[/code]

Thank you!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,109438,114603#msg-114603


_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: php-fpm and chroot on linux.

2010/7/29 Piotr Karbowski <jabberuser@gmail.com>:
> Hi nginx-en ml.
>
> This is not exactly nginx issue, but lets say this is nginx related thing.
>
> php-fpm and chroot - It is working for me good, I haven't tested performance
> yet but there is two things what I need fix before I can really use it.

There should not be a relevant performance gap between using and not using a chroot.

>
> First issue: php in chroot can't resolve names. Even puting /etc/resolv.conf
> in chroot isn't solution.

Try also putting /etc/hosts and /etc/nsswitch.conf into the chroot. Is
it any better?

>
> 2nd issue: mysqld socket. If I want connect to 'localhost' it using
> unix:///var/run/mysqld.socket which is really smart and I like it (I want
> use socket if I can.). it not work, because in chrooted env there is no
> mysqld socket. There is any better way than mount -o bind 'run' dir into
> each chroot's root?

If you have only one chrooted env, you can set up mysqld to write its
unix socket inside the chroot (e.g.
/path/to/chroot/var/run/mysql.socket). Then the socket will be
available both from inside and from outside the chroot.

If you have more than one chrooted env, then you can't use a unix
socket; you'll have to use TCP on localhost.

>
> Sorry for posting it on nginx list but this is best place to ask in my
> opinion.

You should continue this conversation on
http://groups.google.com/group/highload-php-en.

>
> -- Piotr.
>
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://nginx.org/mailman/listinfo/nginx
>

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

php-fpm and chroot on linux.

Hi nginx-en ml.

This is not exactly an nginx issue, but let's say it is an nginx-related thing.

php-fpm and chroot: it is working well for me. I haven't tested
performance yet, but there are two things I need to fix before I can
really use it.

First issue: PHP in the chroot can't resolve names. Even putting
/etc/resolv.conf in the chroot isn't a solution.

Second issue: the mysqld socket. If I connect to 'localhost' it uses
unix:///var/run/mysqld.socket, which is really smart and I like it (I
want to use the socket if I can). It does not work, though, because in the
chrooted env there is no mysqld socket. Is there any better way than to
mount -o bind the 'run' dir into each chroot's root?

Sorry for posting this on the nginx list, but it is the best place to ask, in my
opinion.

-- Piotr.

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Proxy uploading

Great! Thanks for the help.

On 29 Jul 2010, at 15:14, Igor Sysoev <igor@sysoev.ru> wrote:

> On Thu, Jul 29, 2010 at 03:11:13PM +0200, Tim Child wrote:
>
>>> These directives hide/pass header from upstream to client:
>>
>> I don't need as far as I know to send the Cookie and Referer to the upstream server, but the content-length is needed for the file I think.
>>
>>>
>>>> proxy_hide_header Referer;
>>>> proxy_hide_header Cookie;
>>>> proxy_pass_header Content-Length;
>
> If you do not want to pass Cookie and Referer to the upstream, then you
> should use:
>
> proxy_set_header Cookie "";
> proxy_set_header Referer "";
>
>
> --
> Igor Sysoev
> http://sysoev.ru/en/
>
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://nginx.org/mailman/listinfo/nginx

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Equivalent of Apache's SetEnv Variable

Fabulous. Thank you. I will try it and report back asap.

On Thu, Jul 29, 2010 at 12:21 PM, Igor Sysoev <igor@sysoev.ru> wrote:
> On Thu, Jul 29, 2010 at 11:18:07AM -0500, Raina Gustafson wrote:
>
>> Issue:
>> I'd like to configure Magento to run in multi-domain mode.
>> I've been successful doing this via Apache in the past.
>> It seems that Nginx should be equally capable, but I haven't succeeded.
>>
>> Server Specs:
>> Nginx (latest)
>> PHP 5.3.3
>> PHP-FPM enabled
>> Magento (latest)
>>
>> What I Know:
>> Apache relies on the SetEnv variable in the virtual host definition or
>> a similar instruction in an .htaccess file to achieve this
>> functionality. The specifics are here:
>> http://www.magentocommerce.com/wiki/multi-store_set_up/multiple-website-setup.
>>
>> What I Don't Know:
>> Does Nginx have an equivalent to SetEnv?
>> Can Nginx be configured to imitate this configuration through rewrites
>> or some other method?
>
> Probably, you need
>
>     server {
>
>         location / {
>             fastcgi_pass   ...
>             fastcgi_param  MAGE_RUN_CODE  base;
>             fastcgi_param  MAGE_RUN_TYPE  website;
>             ...
>         }
>     }
>
>
> --
> Igor Sysoev
> http://sysoev.ru/en/
>
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://nginx.org/mailman/listinfo/nginx
>

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Equivalent of Apache's SetEnv Variable

On Thu, Jul 29, 2010 at 11:18:07AM -0500, Raina Gustafson wrote:

> Issue:
> I'd like to configure Magento to run in multi-domain mode.
> I've been successful doing this via Apache in the past.
> It seems that Nginx should be equally capable, but I haven't succeeded.
>
> Server Specs:
> Nginx (latest)
> PHP 5.3.3
> PHP-FPM enabled
> Magento (latest)
>
> What I Know:
> Apache relies on the SetEnv variable in the virtual host definition or
> a similar instruction in an .htaccess file to achieve this
> functionality. The specifics are here:
> http://www.magentocommerce.com/wiki/multi-store_set_up/multiple-website-setup.
>
> What I Don't Know:
> Does Nginx have an equivalent to SetEnv?
> Can Nginx be configured to imitate this configuration through rewrites
> or some other method?

Probably, you need

server {

location / {
fastcgi_pass ...
fastcgi_param MAGE_RUN_CODE base;
fastcgi_param MAGE_RUN_TYPE website;
...
}
}
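
For the multi-domain case, the same idea extends to one server block per domain, each setting its own codes; a sketch along the lines of the example above (domains and store codes are made up):

server {
    server_name  site-one.example.com;

    location / {
        fastcgi_pass   ...
        fastcgi_param  MAGE_RUN_CODE  site_one;
        fastcgi_param  MAGE_RUN_TYPE  website;
        ...
    }
}

server {
    server_name  site-two.example.com;

    location / {
        fastcgi_pass   ...
        fastcgi_param  MAGE_RUN_CODE  site_two;
        fastcgi_param  MAGE_RUN_TYPE  website;
        ...
    }
}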


--
Igor Sysoev
http://sysoev.ru/en/

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Equivalent of Apache's SetEnv Variable

http://wiki.nginx.org/NginxHttpFcgiModule#fastcgi_param

On 29/07/2010 17:18, Raina Gustafson wrote:
> Issue:
> I'd like to configure Magento to run in multi-domain mode.
> I've been successful doing this via Apache in the past.
> It seems that Nginx should be equally capable, but I haven't succeeded.
>
> Server Specs:
> Nginx (latest)
> PHP 5.3.3
> PHP-FPM enabled
> Magento (latest)
>
> What I Know:
> Apache relies on the SetEnv variable in the virtual host definition or
> a similar instruction in an .htaccess file to achieve this
> functionality. The specifics are here:
> http://www.magentocommerce.com/wiki/multi-store_set_up/multiple-website-setup.
>
> What I Don't Know:
> Does Nginx have an equivalent to SetEnv?
> Can Nginx be configured to imitate this configuration through rewrites
> or some other method?
>
> Larger Community to Benefit:
> There are a number of people in the Nginx forum, Magento forum, etc.
> asking about this. It would be stellar if someone could provide us
> with a definitive answer - even if that answer is 'it can't be done'
> or 'proxy Nginx to Apache'. If anyone supplies an answer via the
> mailing list, I will make sure that the answer is shared in the forums
> and elsewhere for maximum benefit.
>
> Thanks so much!
>
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://nginx.org/mailman/listinfo/nginx
>


_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Equivalent of Apache's SetEnv Variable

Issue:
I'd like to configure Magento to run in multi-domain mode.
I've been successful doing this via Apache in the past.
It seems that Nginx should be equally capable, but I haven't succeeded.

Server Specs:
Nginx (latest)
PHP 5.3.3
PHP-FPM enabled
Magento (latest)

What I Know:
Apache relies on the SetEnv variable in the virtual host definition or
a similar instruction in an .htaccess file to achieve this
functionality. The specifics are here:
http://www.magentocommerce.com/wiki/multi-store_set_up/multiple-website-setup.

What I Don't Know:
Does Nginx have an equivalent to SetEnv?
Can Nginx be configured to imitate this configuration through rewrites
or some other method?

Larger Community to Benefit:
There are a number of people in the Nginx forum, Magento forum, etc.
asking about this. It would be stellar if someone could provide us
with a definitive answer - even if that answer is 'it can't be done'
or 'proxy Nginx to Apache'. If anyone supplies an answer via the
mailing list, I will make sure that the answer is shared in the forums
and elsewhere for maximum benefit.

Thanks so much!

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Multiple requests on balancer

Thank you for the quick reply.

Below is the working config file I'm using to test the issue against.
I also checked the config found at http://brainspl.at/nginx.conf.txt and
it still behaves the same.

Nginx details:
nginx version: nginx/0.8.34
built by gcc 4.1.2 20080704 (Red Hat 4.1.2-46)
TLS SNI support disabled
configure arguments: --with-http_ssl_module --prefix=/usr/nginx

Ideas?

-----------

worker_processes 1;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
worker_connections 1024;
}

http {
include mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
sendfile on;
#tcp_nopush on;

keepalive_timeout 65;
gzip on;

# mongrel servers
upstream mongrels {
server 172.17.13.15:8000;
server 172.17.13.15:8001;
server 172.17.13.15:8002;
server 172.17.13.15:8003;
server 172.17.13.15:8004;
}

server {
listen 443 default ssl;
client_max_body_size 50M;

ssl_certificate cert.pem;
ssl_certificate_key cert.key;
ssl_session_timeout 5m;
ssl_protocols SSLv2 SSLv3 TLSv1;

location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X_FORWARDED_PROTO https;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_max_temp_file_size 0;

proxy_pass http://mongrels;
}

error_page 500 502 503 504 /500.html;
location = /50x.html {
root html;
}
}
}


On Thu, Jul 29, 2010 at 4:13 PM, Piotr Sikora <piotr.sikora@frickle.com> wrote:
> Hi,
>
>> However, I noticed how each request to Nginx is routed to every Mongrel:
>> is this
>> desired behavior or I, possibly, did something wrong with
>> configuration (couldn't
>> figured out yet). Any ideas to make Nginx route request only to single
>> Mongrel
>> instance?
>
> You did something wrong.
>
> Best regards,
> Piotr Sikora < piotr.sikora@frickle.com >
>
>
>
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://nginx.org/mailman/listinfo/nginx
>

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Multiple requests on balancer

Hi,

> However, I noticed how each request to Nginx is routed to every Mongrel:
> is this
> desired behavior or I, possibly, did something wrong with
> configuration (couldn't
> figured out yet). Any ideas to make Nginx route request only to single
> Mongrel
> instance?

You did something wrong.

Best regards,
Piotr Sikora < piotr.sikora@frickle.com >


_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Multiple requests on balancer

Hi guys,

I recently configured Nginx with Rails/Mongrel (mostly following advice
from the wiki), where Nginx does the balancing/proxying across
5 Mongrels. It works perfectly :)

However, I noticed that each request to Nginx is routed to every Mongrel: is this
the desired behavior, or did I possibly do something wrong with the
configuration (I couldn't figure it out yet)?
Any ideas on how to make Nginx route each request to only a single Mongrel
instance?

Thanks.

Best,
Sanel

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Proxy uploading

On Thu, Jul 29, 2010 at 03:11:13PM +0200, Tim Child wrote:

> > These directives hide/pass header from upstream to client:
>
> I don't need as far as I know to send the Cookie and Referer to the upstream server, but the content-length is needed for the file I think.
>
> >
> >> proxy_hide_header Referer;
> >> proxy_hide_header Cookie;
> >> proxy_pass_header Content-Length;

If you do not want to pass Cookie and Referer to the upstream, then you
should use:

proxy_set_header Cookie "";
proxy_set_header Referer "";
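
Folded into the location from the config earlier in the thread, that would look roughly like this:

location /API/import {
    client_max_body_size  1000M;

    proxy_pass            http://upstreamserver.com:8080/API/import;
    proxy_set_header      Authorization  "Basic base64string";

    # blank out the request headers instead of hiding response headers
    proxy_set_header      Cookie   "";
    proxy_set_header      Referer  "";
}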


--
Igor Sysoev
http://sysoev.ru/en/

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Proxy uploading

On 29 Jul 2010, at 14:49, Igor Sysoev wrote:

> On Thu, Jul 29, 2010 at 11:45:50AM +0200, Tim Child wrote:
>
>> Hi,
>>
>> Currently using Nginx 0.7.65-1 on Ubuntu 10.04. I need to have my application upload files to another backend, so I thought I could use proxy pass, and have a certain URL be proxied to another machine (proxy_pass http://upstreamserver.com:8080/API/import) . The URL would be http//portalvm/API/upload/ that I upload to.
>>
>> Otherwise proxy to Apache running on 127.0.0.1:8000.
>>
>> What is happening is that it is indeed proxying but I am getting a 404 error from the upstreamserver.com even though the URL that I am using looks correct. The logs (error.log in Debug mode) are saying:
>>
>> http proxy header:
>> "POST /API/import/raw?transferid=unique321&name=p158cr86gf1i34t099c1gkn1vh51.tmp HTTP/1.0^M
>> Authorization: Basic base64string^M
>> Host: p upstreamserver.com:8080^M
>
> I see strange host name: "p upstreamserver.com:8080".

Actually that is my bad, as I didn't want to expose the server I was using to the whole world.

Host: upstreamserver.com:8080

>
>>
>> location ~ ^/(favicon.ico|robots.txt|sitemap.xml)$ {
>> alias /opt/media/$1;
>> expires 30d;
>> }
>
> It's better to write this as three locations:
>
> location = /favicon.ico {
> root /opt/media;
> expires 30d;
> }
>
> location = /robots.txt {
> root /opt/media;
> expires 30d;
> }
>
> location = /sitemap.xml {
> root /opt/media;
> expires 30d;
> }

Will do.


>
>> location /sitemedia {
>> alias /opt/media/;
>> expires 30d;
>> }
>> location /API/import {
>> error_log /var/log/nginx/error.log debug;
>> client_max_body_size 1000M;
>> proxy_pass http://upstreamserver.com:8080/API/import;
>
> These directives hide/pass header from upstream to client:

As far as I know I don't need to send the Cookie and Referer to the upstream server, but the Content-Length is needed for the file, I think.

>
>> proxy_hide_header Referer;
>> proxy_hide_header Cookie;
>> proxy_pass_header Content-Length;
>
>> proxy_set_header Authorization "Basic base64string";
>> }
>> location / {
>> proxy_pass http://portalvm;
>> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>> add_header X-Handled-By $upstream_addr;
>> }
>
> I do not understand, what you want to proxy:
> "/API/upload/" or "/API/import/" ?
>

Again my bad, I meant /API/import/

I have actually got it working now that I removed an "INDEX" header that was being sent.

Thanks,

Tim.


_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: hightraffic php site problems

Found the problem... our customer created a cronjob which dumped the
mysql db every 2 hours. A few seconds after mysqldump starts, nginx
gets into trouble.

Doh. Fail :)

On 07/29/2010 02:05 PM, Phillip Oldham wrote:
> Can you provide information on your PHP set-up? Eg. how many
> processes/FCGI children are running, etc?
>
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://nginx.org/mailman/listinfo/nginx
>

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Proxy uploading

On Thu, Jul 29, 2010 at 11:45:50AM +0200, Tim Child wrote:

> Hi,
>
> Currently using Nginx 0.7.65-1 on Ubuntu 10.04. I need to have my application upload files to another backend, so I thought I could use proxy pass, and have a certain URL be proxied to another machine (proxy_pass http://upstreamserver.com:8080/API/import) . The URL would be http//portalvm/API/upload/ that I upload to.
>
> Otherwise proxy to Apache running on 127.0.0.1:8000.
>
> What is happening is that it is indeed proxying but I am getting a 404 error from the upstreamserver.com even though the URL that I am using looks correct. The logs (error.log in Debug mode) are saying:
>
> http proxy header:
> "POST /API/import/raw?transferid=unique321&name=p158cr86gf1i34t099c1gkn1vh51.tmp HTTP/1.0^M
> Authorization: Basic base64string^M
> Host: p upstreamserver.com:8080^M

I see a strange host name: "p upstreamserver.com:8080".

> Connection: close^M
> User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8^M
> Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8^M
> Accept-Language: en,en-us;q=0.7,sv;q=0.3^M
> Accept-Encoding: gzip,deflate^M
> Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7^M
> Content-Type: application/octet-stream^M
> RunAs: username^M
> INDEX: 0^M
> Referer: http://portalvm/my/uploadpage/^M
> Content-Length: 2809716^M
> Cookie: sessionid=09520e8dfd27d9ee86781b928ed20689^M
> Pragma: no-cache^M
> Cache-Control: no-cache^M
> ^M
> "
>
>
> Then there are a lot of logs streaming the file to the upstream server such as:
>
> http upstream request: "/API/import/raw?transferid=unique321&name=p158csbronq6718k215au1ctp1ulo1.tmp"
> 2010/07/28 17:57:39 [debug] 3950#0: *4 http upstream send request handler
> 2010/07/28 17:57:39 [debug] 3950#0: *4 http upstream send request
> 2010/07/28 17:57:39 [debug] 3950#0: *4 read: 9, 0000000000C9B640, 8192, 0
> 2010/07/28 17:57:39 [debug] 3950#0: *4 chain writer buf fl:0 s:725
> 2010/07/28 17:57:39 [debug] 3950#0: *4 chain writer buf fl:0 s:345
> 2010/07/28 17:57:39 [debug] 3950#0: *4 chain writer buf fl:0 s:8192
> 2010/07/28 17:57:39 [debug] 3950#0: *4 chain writer in: 0000000000C9AD88
> 2010/07/28 17:57:39 [debug] 3950#0: *4 writev: 9262
> 2010/07/28 17:57:39 [debug] 3950#0: *4 chain writer out: 0000000000000000
>
> Then at the end::
>
> 2010/07/28 17:58:09 [debug] 3950#0: *4 http upstream process header
> 2010/07/28 17:58:09 [debug] 3950#0: *4 malloc: 0000000000CECF20:4096
> 2010/07/28 17:58:09 [debug] 3950#0: *4 recv: fd:15 260 of 4096
> 2010/07/28 17:58:09 [debug] 3950#0: *4 http proxy status 404 "404 Not Found"
> 2010/07/28 17:58:09 [debug] 3950#0: *4 http proxy header: "X-Powered-By: Servlet/2.5"
> 2010/07/28 17:58:09 [debug] 3950#0: *4 http proxy header: "Server: Sun GlassFish Enterprise Server v2.1.1"
> 2010/07/28 17:58:09 [debug] 3950#0: *4 http proxy header: "Content-Type: text/plain"
> 2010/07/28 17:58:09 [debug] 3950#0: *4 http proxy header: "Content-Length: 57"
> 2010/07/28 17:58:09 [debug] 3950#0: *4 http proxy header: "Date: Wed, 28 Jul 2010 15:58:09 GMT"
> 2010/07/28 17:58:09 [debug] 3950#0: *4 http proxy header: "Connection: close"
> 2010/07/28 17:58:09 [debug] 3950#0: *4 http proxy header done
> 2010/07/28 17:58:09 [debug] 3950#0: *4 malloc: 0000000000CEDF30:4096
> 2010/07/28 17:58:09 [debug] 3950#0: *4 HTTP/1.1 404 Not Found^M
> Server: nginx/0.7.65^M
> Date: Wed, 28 Jul 2010 15:58:09 GMT^M
> Content-Type: text/plain^M
> Transfer-Encoding: chunked^M
> Connection: keep-alive^M
> X-Powered-By: Servlet/2.5^M
> Content-Encoding: gzip^M
>
>
>
> What I can't understand is the headers look correct, and so does the HTTP upstream request URL. In fact if I try and use the same headers and URL in a util that lets me post to the server it creates an empty file.
>
> Any idea on why I am getting a 404?
>
> Thanks,
>
> Tim.
>
>
>
>
> In my nginx.conf I have this (base64string - obviously changed):
>
> http {
> include /etc/nginx/mime.types;
> gzip on;
> gzip_comp_level 2;
> gzip_proxied any;
> gzip_disable msie6;
> gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
>
> upstream portalvm {
> server 127.0.0.1:8000;
> }
>
> server {
> listen 80;
>
> location ~ ^/(favicon.ico|robots.txt|sitemap.xml)$ {
> alias /opt/media/$1;
> expires 30d;
> }

It's better to write this as three locations:

location = /favicon.ico {
root /opt/media;
expires 30d;
}

location = /robots.txt {
root /opt/media;
expires 30d;
}

location = /sitemap.xml {
root /opt/media;
expires 30d;
}

> location /sitemedia {
> alias /opt/media/;
> expires 30d;
> }
> location /API/import {
> error_log /var/log/nginx/error.log debug;
> client_max_body_size 1000M;
> proxy_pass http://upstreamserver.com:8080/API/import;

These directives hide/pass headers from the upstream to the client:

> proxy_hide_header Referer;
> proxy_hide_header Cookie;
> proxy_pass_header Content-Length;

> proxy_set_header Authorization "Basic base64string";
> }
> location / {
> proxy_pass http://portalvm;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> add_header X-Handled-By $upstream_addr;
> }

I do not understand what you want to proxy:
"/API/upload/" or "/API/import/"?


--
Igor Sysoev
http://sysoev.ru/en/

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Nginx Active & Writing Connections exponential growth.

Hi there!
First of all, I am a new user of the nginx mailing list, so here is a short introduction of myself. My name is Antonio, and I have been working for a long time as a Web Operations Engineer at Uptodown.com, VisualizeUs.com and other minor projects.

We have been running nginx on Uptodown.com for static content without problems, and on VisualizeUs.com, for the last two years, for static and dynamic content, also without problems. But it is on VisualizeUs where we are now having some issues, since we updated nginx to nginx/0.7.67, under Debian Lenny 32 bits (AWS medium instance), with php-fcgi.

Here is the munin output for nginx connections:
http://i29.tinypic.com/21y781.png

We noticed an exponential growth in connections, best appreciated in the graph.

Here is our nginx configuration:

http://dpaste.com/hold/223096/

And our ulimit for www-data:

maxfds ..fs.file-max = 200000
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 20480
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) unlimited
real-time priority              (-r) 1
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

As active connections grow, response time slightly increases, so an nginx restart is needed to get back to optimal response times. Nginx restarts can be seen in the munin graph where it drops back to normal values; otherwise, the exponential growth continues.

We have been tweaking system limits without noticeable differences. We have also been working on the php-cgi configuration, finding that the exponential growth of connections still exists, but at a slower rate.

Anything that points us in the right direction will be appreciated.

Thanks in advance.

Best Regards.
Antonio.

Re: hightraffic php site problems

Can you provide information on your PHP set-up? Eg. how many
processes/FCGI children are running, etc?

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: hightraffic php site problems

Hi,

> No, these are seconds. "fastcgi_connect_timeout 90ms" is in milliseconds.

Indeed, I guess that "ngx_conf_set_msec_slot" confused me ;)
Sorry for the noise.

Best regards,
Piotr Sikora < piotr.sikora@frickle.com >

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Proxy uploading

Hi,

Currently using Nginx 0.7.65-1 on Ubuntu 10.04. I need to have my application upload files to another backend, so I thought I could use proxy_pass and have a certain URL be proxied to another machine (proxy_pass http://upstreamserver.com:8080/API/import). The URL that I upload to would be http://portalvm/API/upload/.

Otherwise proxy to Apache running on 127.0.0.1:8000.

What is happening is that it is indeed proxying but I am getting a 404 error from the upstreamserver.com even though the URL that I am using looks correct. The logs (error.log in Debug mode) are saying:

http proxy header:
"POST /API/import/raw?transferid=unique321&name=p158cr86gf1i34t099c1gkn1vh51.tmp HTTP/1.0^M
Authorization: Basic base64string^M
Host: p upstreamserver.com:8080^M
Connection: close^M
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8^M
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8^M
Accept-Language: en,en-us;q=0.7,sv;q=0.3^M
Accept-Encoding: gzip,deflate^M
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7^M
Content-Type: application/octet-stream^M
RunAs: username^M
INDEX: 0^M
Referer: http://portalvm/my/uploadpage/^M
Content-Length: 2809716^M
Cookie: sessionid=09520e8dfd27d9ee86781b928ed20689^M
Pragma: no-cache^M
Cache-Control: no-cache^M
^M
"


Then there are a lot of logs streaming the file to the upstream server such as:

http upstream request: "/API/import/raw?transferid=unique321&name=p158csbronq6718k215au1ctp1ulo1.tmp"
2010/07/28 17:57:39 [debug] 3950#0: *4 http upstream send request handler
2010/07/28 17:57:39 [debug] 3950#0: *4 http upstream send request
2010/07/28 17:57:39 [debug] 3950#0: *4 read: 9, 0000000000C9B640, 8192, 0
2010/07/28 17:57:39 [debug] 3950#0: *4 chain writer buf fl:0 s:725
2010/07/28 17:57:39 [debug] 3950#0: *4 chain writer buf fl:0 s:345
2010/07/28 17:57:39 [debug] 3950#0: *4 chain writer buf fl:0 s:8192
2010/07/28 17:57:39 [debug] 3950#0: *4 chain writer in: 0000000000C9AD88
2010/07/28 17:57:39 [debug] 3950#0: *4 writev: 9262
2010/07/28 17:57:39 [debug] 3950#0: *4 chain writer out: 0000000000000000

Then at the end::

2010/07/28 17:58:09 [debug] 3950#0: *4 http upstream process header
2010/07/28 17:58:09 [debug] 3950#0: *4 malloc: 0000000000CECF20:4096
2010/07/28 17:58:09 [debug] 3950#0: *4 recv: fd:15 260 of 4096
2010/07/28 17:58:09 [debug] 3950#0: *4 http proxy status 404 "404 Not Found"
2010/07/28 17:58:09 [debug] 3950#0: *4 http proxy header: "X-Powered-By: Servlet/2.5"
2010/07/28 17:58:09 [debug] 3950#0: *4 http proxy header: "Server: Sun GlassFish Enterprise Server v2.1.1"
2010/07/28 17:58:09 [debug] 3950#0: *4 http proxy header: "Content-Type: text/plain"
2010/07/28 17:58:09 [debug] 3950#0: *4 http proxy header: "Content-Length: 57"
2010/07/28 17:58:09 [debug] 3950#0: *4 http proxy header: "Date: Wed, 28 Jul 2010 15:58:09 GMT"
2010/07/28 17:58:09 [debug] 3950#0: *4 http proxy header: "Connection: close"
2010/07/28 17:58:09 [debug] 3950#0: *4 http proxy header done
2010/07/28 17:58:09 [debug] 3950#0: *4 malloc: 0000000000CEDF30:4096
2010/07/28 17:58:09 [debug] 3950#0: *4 HTTP/1.1 404 Not Found^M
Server: nginx/0.7.65^M
Date: Wed, 28 Jul 2010 15:58:09 GMT^M
Content-Type: text/plain^M
Transfer-Encoding: chunked^M
Connection: keep-alive^M
X-Powered-By: Servlet/2.5^M
Content-Encoding: gzip^M

What I can't understand is that the headers look correct, and so does the HTTP upstream request URL. In fact, if I use the same headers and URL in a utility that lets me POST to the server, it creates an empty file.

Any idea on why I am getting a 404?

Thanks,

Tim.


In my nginx.conf I have this (base64string - obviously changed):

http {
include /etc/nginx/mime.types;
gzip on;
gzip_comp_level 2;
gzip_proxied any;
gzip_disable msie6;
gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

upstream portalvm {
server 127.0.0.1:8000;
}

server {
listen 80;

location ~ ^/(favicon.ico|robots.txt|sitemap.xml)$ {
alias /opt/media/$1;
expires 30d;
}
location /sitemedia {
alias /opt/media/;
expires 30d;
}
location /API/import {
error_log /var/log/nginx/error.log debug;
client_max_body_size 1000M;
proxy_pass http://upstreamserver.com:8080/API/import;
proxy_hide_header Referer;
proxy_hide_header Cookie;
proxy_pass_header Content-Length;
proxy_set_header Authorization "Basic base64string";
}
location / {
proxy_pass http://portalvm;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header X-Handled-By $upstream_addr;
}

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: hightraffic php site problems

On Thu, Jul 29, 2010 at 11:37:58AM +0200, Piotr Sikora wrote:

> Hi,
>
> > fastcgi_connect_timeout 90;
> > fastcgi_send_timeout 90;
> > fastcgi_read_timeout 90;
>
> Those values are in milliseconds, so you should probably increase them ;)

No, these are seconds. "fastcgi_connect_timeout 90ms" is in milliseconds.
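
If I read the time parsing right, a bare number defaults to seconds and a suffix overrides the unit, e.g.:

fastcgi_connect_timeout  90;     # 90 seconds (no suffix)
fastcgi_connect_timeout  90s;    # 90 seconds, explicit suffix
fastcgi_connect_timeout  90ms;   # 90 milliseconds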


--
Igor Sysoev
http://sysoev.ru/en/

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: hightraffic php site problems

Hi,

> fastcgi_connect_timeout 90;
> fastcgi_send_timeout 90;
> fastcgi_read_timeout 90;

Those values are in milliseconds, so you should probably increase them ;)

Best regards,
Piotr Sikora < piotr.sikora@frickle.com >

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

hightraffic php site problems

Hi,

I've got some problems with a highly loaded PHP site running nginx + php
(without fpm).

I get the following error message from time to time:

162503979 upstream timed out (110: Connection timed out) while reading
response header from upstream,


While this occurs, nginx stops delivering PHP content and serves the
error document instead.

Here's a part of my nginx conf:

user webuser1 nginx;
worker_processes 4;
worker_rlimit_nofile 65000;

#error_log /var/log/nginx/error.log debug;
error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;

pid /var/run/nginx.pid;

events {
worker_connections 8192;
use epoll;
}

http {
include /etc/nginx/mime.types;
default_type application/octet-stream;

log_format main '$remote_addr $remote_user [$time_local] '
'"$request" $status $bytes_sent '
'"$http_referer" "$http_user_agent" '
'"$gzip_ratio"';

client_body_buffer_size 1024k;
client_header_buffer_size 128k;
large_client_header_buffers 16 16k;

access_log off;
server_tokens off;

sendfile on;
tcp_nopush on;
tcp_nodelay on;

client_body_timeout 15;
client_header_timeout 15;
keepalive_timeout 5 15;
send_timeout 15;

gzip on;
gzip_static on;
gzip_buffers 16 8k;
gzip_comp_level 5;
gzip_http_version 1.0;
gzip_min_length 0;
gzip_types text/plain text/css image/x-icon image/bmp;
gzip_vary on;


output_buffers 1 64k;
postpone_output 1460;
client_max_body_size 8m;
server_names_hash_bucket_size 256;

open_file_cache max=2000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
fastcgi_connect_timeout 90;
fastcgi_send_timeout 90;
fastcgi_read_timeout 90;
fastcgi_buffer_size 512k;
fastcgi_buffers 8 512k;
fastcgi_busy_buffers_size 512k;
fastcgi_temp_file_write_size 512k;
fastcgi_intercept_errors on;
fastcgi_ignore_client_abort on;

# Load config files from the /etc/nginx/conf.d directory
include /etc/nginx/conf.d/*.conf;

}


Any ideas how I could resolve this?

Thanks!

Juergen
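
For reference, the error quoted at the top of this message ("timed out ... while reading response header from upstream") is the one controlled by fastcgi_read_timeout. A minimal sketch of overriding it for just the PHP location, leaving the 90s defaults above untouched (the backend address, the location pattern and the 300s value are illustrative):

location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;       # illustrative PHP backend address
    fastcgi_read_timeout 300;          # overrides the 90s set at the http level
    include /etc/nginx/fastcgi_params; # plus the usual SCRIPT_FILENAME parameter
}

Whether a larger timeout actually helps depends on why PHP stops answering; if the workers are saturated, raising the limit only postpones the error.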

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: case insensitive

On Thu, Jul 29, 2010 at 11:45:23AM +0400, Boris Dolgov wrote:

> On Thu, Jul 29, 2010 at 10:47 AM, Igor Sysoev <igor@sysoev.ru> wrote:
> > On Wed, Jul 28, 2010 at 11:26:24PM -0700, Eire Angel wrote:
> >> I need to serve images to be case insensitive.
> >> too many of the external links were created using the incorrect case for me to
> >> just create redirects for them all
> >> I am trying to use :
> >> location ~* ^.+.(jpg|jpeg|gif|png)$ {
> >>  root              /var/www/my_app/current/public/images;
> >> access_log        off;
> >> expires           30d;
> >> }
> >>
> >> i have also tried using :
> >> location ~* ^/images/$ {}
> >> with the same results.  please help
> >
> > nginx does not support case insensitive file access on a case sensitive file
> > system. Theoretically this can be implemented, however, it requires
> > a lot of overhead: instead of a single open() syscall, nginx has to call
> > opendir()/readdir()/closedir() which involve about ten syscalls. Although
> > frequently requested files can be cached in open_file_cache.
> You can also try renaming all your images to lowercase and then use
> something like this:
> location ~* (?P<name>.*)\.(?P<ext>jpg|jpeg|gif|png)$
> {
> perl_set $new_name 'sub { my $r = shift; return lc($r->variable("name")); }';
> perl_set $new_ext 'sub { my $r = shift; return lc($r->variable("ext")); }';
> try_files $new_name.$new_ext /404error;
> }

A note: perl_set can be defined on http level only.
It can be simpler:

http {
    perl_set $lc_uri 'sub { my $r = shift; return lc($r->uri); }';

    server {
        location / {
            try_files $lc_uri =404;
        }
    }
}
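
A quick illustration of the lookup, assuming the files on disk have already been renamed to lowercase:

# request URI:       /Images/Photo.JPG
# $lc_uri becomes:   /images/photo.jpg
# try_files then serves $document_root/images/photo.jpg, or returns 404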


--
Igor Sysoev
http://sysoev.ru/en/

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: case insensitive

On Thu, Jul 29, 2010 at 10:47 AM, Igor Sysoev <igor@sysoev.ru> wrote:
> On Wed, Jul 28, 2010 at 11:26:24PM -0700, Eire Angel wrote:
>> I need to serve images to be case insensitive.
>> too many of the external links were created using the incorrect case for me to
>> just create redirects for them all
>> I am trying to use :
>> location ~* ^.+.(jpg|jpeg|gif|png)$ {
>>  root              /var/www/my_app/current/public/images;
>> access_log        off;
>> expires           30d;
>> }
>>
>> i have also tried using :
>> location ~* ^/images/$ {}
>> with the same results.  please help
>
> nginx does not support case insensitive file access on a case sensitive file
> system. Theoretically this can be implemented, however, it requires
> a lot of overhead: instead of a single open() syscall, nginx has to call
> opendir()/readdir()/closedir() which involve about ten syscalls. Although
> frequently requested files can be cached in open_file_cache.
You can also try renaming all your images to lowercase and then use
something like this:
location ~* (?P<name>.*)\.(?P<ext>jpg|jpeg|gif|png)$
{
perl_set $new_name 'sub { my $r = shift; return lc($r->variable("name")); }';
perl_set $new_ext 'sub { my $r = shift; return lc($r->variable("ext")); }';
try_files $new_name.$new_ext /404error;
}

--
Boris Dolgov.

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

2010年7月28日星期三

Re: case insensitive

On Wed, Jul 28, 2010 at 11:26:24PM -0700, Eire Angel wrote:

> I need to serve images to be case insensitive.
> too many of the external links were created using the incorrect case for me to
> just create redirects for them all
> I am trying to use :
> location ~* ^.+.(jpg|jpeg|gif|png)$ {
> root /var/www/my_app/current/public/images;
> access_log off;
> expires 30d;
> }
>
> i have also tried using :
> location ~* ^/images/$ {}
> with the same results. please help

nginx does not support case insensitive file access on a case sensitive file
system. Theoretically this can be implemented, however, it requires
a lot of overhead: instead of a single open() syscall, nginx has to call
opendir()/readdir()/closedir() which involve about ten syscalls. Although
frequently requested files can be cached in open_file_cache.


--
Igor Sysoev
http://sysoev.ru/en/

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

case insensitive

I need to serve images case-insensitively.
Too many of the external links were created using the incorrect case for me to just create redirects for them all.
I am trying to use :
location ~* ^.+.(jpg|jpeg|gif|png)$ {
 root              /var/www/my_app/current/public/images;               
access_log        off;
expires           30d;
}

I have also tried using:
location ~* ^/images/$ {}
with the same results. Please help.


Chris

Re: subrequest failover

29.07.2010, 04:36, "James Lyons" <james.lyons@gmail.com>:
> I am trying to use nginx as reverse proxy to perform failover from 1
> critical service to another.
>
> When processing a request, it sends a request (http GET) to a backend
> (custom http server) using SubRequest methods. If it is for some
> reason down, and there is no response, i'd like it to fail over to a
> second server. Is there a way to do this in conf settings or do I
> have to make code modifications to accomplish this?
>
> Any help appreciated.

upstream failover {
server 1.1.1.1;
server 2.2.2.2 backup;
}

server {
[...]
location / {
proxy_pass http://failover;
}
[...]
}
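
A slightly expanded sketch of the same approach, with the failure conditions spelled out (the addresses are from the reply above; the max_fails/fail_timeout values are illustrative):

upstream failover {
    server 1.1.1.1 max_fails=2 fail_timeout=10s;   # primary; illustrative thresholds
    server 2.2.2.2 backup;                         # used only while the primary is marked down
}

server {
    location / {
        proxy_next_upstream error timeout;   # which failures make nginx try the next server
        proxy_pass http://failover;
    }
}

proxy_next_upstream defaults to "error timeout" anyway, so the directive above only makes the behaviour explicit.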
--
br, Denis F. Latypoff.

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

subrequest failover

I am trying to use nginx as reverse proxy to perform failover from 1
critical service to another.

When processing a request, it sends a request (http GET) to a backend
(custom http server) using SubRequest methods. If it is for some
reason down, and there is no response, i'd like it to fail over to a
second server. Is there a way to do this in conf settings or do I
have to make code modifications to accomplish this?

Any help appreciated.

-James-

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: nginx as SSL terminating server

Quite simply: no.

You cannot stop the first nginx from buffering requests. You can (and
should!) stop it from buffering responses with the proxy_buffering
directive:

proxy_buffering off;
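
In context, a minimal sketch of where that directive sits in the SSL-terminating front end described in the quoted message below (ports and certificate paths are illustrative):

server {
    listen 443;
    ssl on;
    ssl_certificate     /etc/nginx/ssl/front.crt;   # illustrative paths
    ssl_certificate_key /etc/nginx/ssl/front.key;

    location / {
        proxy_buffering off;               # responses are streamed, not buffered
        proxy_pass http://127.0.0.1:8100;  # illustrative haproxy address
    }
}

Request bodies, on the other hand, are still read in full by this front end before being handed on; that is the part that cannot be switched off.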

I have a similar setup (minus the haproxy layer,
passenger_global_queue is good enough) and would also like to do this.
I've tried messing with the proxy buffer sizes but it doesn't seem to
make any significant difference with large uploads and opens up DoS
opportunities.

On Tue, Jul 27, 2010 at 12:11 PM, joshua <nginx-forum@nginx.us> wrote:
> We have the following setup:
>
> firewall --> single nginx instance (SSL termination) --> haproxy -->
> multiple nginx/unicorn instances (via unix socket)
>
> Is it recommendable to turn request buffering off at the first nginx?
> Ideally things like uploads would be buffered at the final nginx
> instances. The first one is only there to terminate SSL and pass
> requests on to haproxy.
>
> Thanks,
> Joshua Sierles
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,113599,113599#msg-113599
>
>
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://nginx.org/mailman/listinfo/nginx
>

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: proxy_hide_header hiding header from ALL locations

It looks like Content-Type is special cased:

Changes with nginx 0.3.58 14 Aug 2006

*) Feature: the "error_page" directive supports the variables.

*) Change: now the procfs interface instead of sysctl is used on Linux.

*) Change: now the "Content-Type" header line is inherited from first
response when the "X-Accel-Redirect" was used.

How can I get this behavior for ETag and Last-Modified as well?

On Wed, Jul 28, 2010 at 2:26 PM, W. Andrew Loe III <andrew@andrewloe.com> wrote:
> I have a two phase setup commonly implemented with S3.
>
> I want to set ETag and Last-Modified in my Rails application and have
> them take precedence over the S3 ETag and Last-Modified.
>
> location ^~ /AWSS3/ {
>  # Prevent Client headers from going to nginx.
>  proxy_pass_request_headers off;
>
>  # Prevent nginx from overwriting our headers.
>  proxy_hide_header "Content-Type";
>  proxy_hide_header "Last-Modified";
>  proxy_hide_header "ETag";
>  proxy_hide_header "Content-Disposition";
>
>  # Hide Amazon Headers
>  proxy_hide_header X-Amz-Id-2;
>  proxy_hide_header X-Amz-Request-Id;
>
>  # Have Amazon do the work buffering the request.
>  proxy_set_header Host 's3.amazonaws.com'; # the bucket is specified in the url
>
>  # Force Amazon to do the heavy lifting.
>  proxy_buffering off;
>
>  # Retry if Amazon freaks out
>  proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
>
>  # Ensure the requests are always gets.
>  proxy_method GET;
>  proxy_pass_request_body off;
>  proxy_set_header Content-Length "";
>
>  # Proxy to S3.
>  proxy_pass http://s3/;
>  proxy_hide_header ETag;
>  proxy_hide_header Last-Modified;
>
>  internal;
> }
>
> What I see on my client is a valid Content-Type and
> Content-Disposition set by my Rails application but ETag and
> Last-Modified are not set. If I remove those proxy_hide_header
> directives the Headers are present but they are the values S3 returns
> not the values I returned from Rails.
>

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

proxy_hide_header hiding header from ALL locations

I have a two phase setup commonly implemented with S3.

I want to set ETag and Last-Modified in my Rails application and have
them take precedence over the S3 ETag and Last-Modified.

location ^~ /AWSS3/ {
# Prevent Client headers from going to nginx.
proxy_pass_request_headers off;

# Prevent nginx from overwriting our headers.
proxy_hide_header "Content-Type";
proxy_hide_header "Last-Modified";
proxy_hide_header "ETag";
proxy_hide_header "Content-Disposition";

# Hide Amazon Headers
proxy_hide_header X-Amz-Id-2;
proxy_hide_header X-Amz-Request-Id;

# Have Amazon do the work buffering the request.
proxy_set_header Host 's3.amazonaws.com'; # the bucket is specified in the url

# Force Amazon to do the heavy lifting.
proxy_buffering off;

# Retry if Amazon freaks out
proxy_next_upstream error timeout http_500 http_502 http_503 http_504;

# Ensure the requests are always gets.
proxy_method GET;
proxy_pass_request_body off;
proxy_set_header Content-Length "";

# Proxy to S3.
proxy_pass http://s3/;
proxy_hide_header ETag;
proxy_hide_header Last-Modified;

internal;
}

What I see on my client is a valid Content-Type and
Content-Disposition set by my Rails application but ETag and
Last-Modified are not set. If I remove those proxy_hide_header
directives the Headers are present but they are the values S3 returns
not the values I returned from Rails.

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: book Nginx HTTP Server

On 28.07.2010, at 21:32, Nuno Magalhães wrote:

> Greetings,
>
> Has anyone read this? Any opinions? Does anyone know on what
> version(s) is it based? Does it cover nginx on windows?
>


FYI, if you also haven't heard about it: it's the book mentioned on
the frontpage of nginx.org (haven't visited that part of the site for
quite some time, so it came as quite a surprise)


It seems to be brand new.
So I'd expect it to cover 0.7.x +
It does look a bit pricey for 340 pages, but I'm thinking of asking my
boss to pay for it ;-)


Update: in the sample-chapter, there's a screenshot of a window with
"src/nginx-0.7.66" in the title.
;-)

Also, the sample-chapter actually looks very nice (from a quick
glance). Maybe I was a bit too quick with my verdict....

Rainer


_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

book Nginx HTTP Server

Greetings,

Has anyone read this? Any opinions? Does anyone know on what
version(s) is it based? Does it cover nginx on windows?

TIA,
Nuno

--
()  ascii-rubanda kampajno - kontraŭ html-a retpoŝto
/\  ascii ribbon campaign - against html e-mail

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

nginx's hashtable key hash function

This message assumes some understanding of nginx's hash table algorithms in ngx_hash.c.

I'm having some problems with nginx's server_names hash table. I've dug into the code and believe I have found the issue. I'm currently using 0.7.65 but a quick check in the 0.8 code does not appear to show any changes to this part of the code.

For Drupal Gardens (www.drupalgardens.com), we currently have about 15,000 domain names in our nginx conf file, and new domain names coming in all the time. We are unable to use a single virtual host for all sites, so we really do need to list that many domain names explicitly (and our hope is to list at least 10x more on a single nginx server). The vast majority of the names are somename.drupalgardens.com, though some customers set up their own custom somename.tld.

Until recently, when we had about 10-12,000 domains, our server_names_hash_max_size was up to 131072, with server_names_hash_bucket_size of 128 (we're using 64-bit EC2 instances so I think the cache line size is 64 bytes, but see below for why that probably isn't relevant). Obviously 131k is a lot more than 12k, but the documentation says you should only need the max size to be about the same as the number of domain names. Somewhere around 13,000 domains we got the error message "could not build the server_names_hash, increase max_size or bucket_size" again, so we bumped the max size up to 196608. That's working for now but it just begs the question: why does it have to be so large?

So, I dug into the code, and this is what I found:

* Each hash table entry consumes space in a bucket. The space required is the length of the domain name, rounded up to sizeof(void *) (8 on a 64-bit machine), with some overhead to store the domain's actual length as well. Since ".drupalgardens.com" is 18 characters, all entries consume at least 24 bytes in a bucket, and most consume 32 bytes or more.

* With a hash bucket size of 64 or 128, a bucket is full after 4 or 5 entries hash to it.

* The hash algorithm isn't that great. The hash key algorithm, from ngx_hash.h, is:

#define ngx_hash(key, c)   ((ngx_uint_t) key * 31 + c)

In ngx_hash_init() in ngx_hash.c, I removed the #if 0 to enable logging of the hash keys. In a test environment with about 11k domains and max_size of 196608 (ngx_hash_init() chose 195609 as the actual hash size), I found that there were about 11k unique hash keys (which is good), but that the most popular keys had 4 collisions, and a decent number had 2-3 collisions. Despite the fact that 95% of the hash table entries were empty, one more collision on a "most popular key" would force us to increase the max_size again.

* I increased the max_size to 1,000,000 (ngx_hash_init() chose 999000 as the actual hash size) and found that there were *still* keys that had 4 collisions, despite the fact that now 99.5% of the hash table entries were empty.

* For ha-ha's, I then set max_size to 10,000,000... and broke the heuristic in ngx_hash_init() that controls the hash size search space:

    if (hinit->max_size > 10000 && hinit->max_size / nelts < 100) {
        start = hinit->max_size - 1000;
    }

With max_size of 10,000,000 and nelts of 11k, max_size/nelts is greater than 100, so start remained as set by the previous code:

    start = nelts / (bucket_size / (2 * sizeof(void *)));

On a 64-bit machine, that means that start = 11k / (128 / 16) =~ 1500. So ngx_hash_init() tried to build the server names hash with a size starting at 1500 and increasing by 1 until it got to somewhere in the 64k range, the first value lucky enough to avoid enough hash collisions to actually work. However, under these conditions it took nginx 15 seconds to start, since it tried to re-create the hash table so many times!

So, the question of course is: What is my best course of action? Options include:

1. Keep making the hash table bigger. I don't think that will really help since it is already 95% empty and too many collisions are still occurring.

2. Make the bucket size bigger. This hurts lookup performance for every hash key that has many collisions, but since most keys do not seem to have many, the impact shouldn't matter much that often. Of course, it also means a hash bucket will no longer fit into a single CPU cache line, but I think that level of optimization is way beyond what I need to be worrying about right now.

3. Rebuild nginx with a better hash algorithm. Of course, the hash algorithm has to run on every request, so it needs to be fast.

4. Change ngx_hash.c to have a different collision-handling strategy, e.g. increment the key on a collision.

It seems pretty clear that #2 is the most expedient choice for me. It also seems like ngx_hash_init()'s strategy of trying to find the minimum possible hash table size could be improved. Perhaps instead of start++ each time through the loop, you could binary search between start and max_size. Or just always create the hash table to have max_size entries.
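
For reference, option #2 is a single directive at the http level; a minimal sketch (256 is illustrative, the next power of two above the 128 already in use):

http {
    server_names_hash_max_size    196608;   # value already in use above
    server_names_hash_bucket_size 256;      # illustrative: next power of two after 128
}

Roughly speaking, doubling the bucket size doubles the number of names that can hash to the same key before the "could not build the server_names_hash" error reappears.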

I'm just curious for feedback from the nginx developers about whether my analysis is correct and whether nginx's server names hash table strategy can be improved.

Thanks,

Barry



--
Barry Jaspan
Senior Architect | Acquia
barry.jaspan@acquia.com | (c) 617.905.2208 | (w) 978.296.5231

"Get a free, hosted Drupal 7 site: http://www.drupalgardens.com"


nginx-0.8.47

Changes with nginx 0.8.47 28 Jul 2010

*) Bugfix: $request_time variable had invalid values for subrequests.

*) Bugfix: errors intercepted by error_page could be cached.

*) Bugfix: a cache manager process might get caught in an endless loop if
the max_size parameter was used; the bug had appeared in 0.8.46.


--
Igor Sysoev
http://sysoev.ru/en/

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx