Wednesday, September 30, 2009

Re: SSL session_id variable

Hi Igor,

Are there any plans to add some sort of distributed SSL session cache
(like distcache for apache)?

Thanks!

Regards,
Omar

2009/9/28 Igor Sysoev <is@rambler-co.ru>:
> On Sun, Sep 27, 2009 at 08:37:50PM +0200, Sen Haerens wrote:
>
>> Igor Sysoev wrote:
>> > The attached patch adds $ssl_session_id variable.
>>
>> Dear Igor,
>>
>> Thank you for providing this patch.
>> It's working great with Nginx 0.7.62. ;-)
>
> Here is the new more correct patch.
>
>
> --
> Igor Sysoev
> http://sysoev.ru/en/

Re: Nginx proxying to Apache: what about mod_rewrite?

Hi Héctor,

The app works ok, so Apache and Nginx are running.
It is just that I need to keep the index.php file to make it work :)
For example, I have

http://www.domain.com/index.php/blog/id/title.html


But it should work without index.php

http://www.domain.com/blog/id/title.html


Posted at Nginx Forum: http://forum.nginx.org/read.php?2,10271,10329#msg-10329

Re: Nginx proxying to Apache: what about mod_rewrite?

First check whether the nginx proxy to Apache is working properly. Try a simple HTML file with no rewrites/redirections. Does it work?
Your configuration seems OK; also check the Apache logs to see if something is wrong.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,10271,10315#msg-10315

Re: (bug?)Timeout when proxy-pass 0 byte file

# HG changeset patch
# User Maxim Dounin <mdounin@mdounin.ru>
# Date 1254336299 -14400
# Node ID 3a18992ceed641398ac911ad7230924ba2f28929
# Parent 7688992d2abb6759b0a91c4b0cf86802d27cbc4a
Cache: send correct special buffer for empty responses.

diff --git a/src/http/ngx_http_file_cache.c b/src/http/ngx_http_file_cache.c
--- a/src/http/ngx_http_file_cache.c
+++ b/src/http/ngx_http_file_cache.c
@@ -798,7 +798,7 @@ ngx_http_cache_send(ngx_http_request_t *

size = c->length - c->body_start;
if (size == 0) {
- return rc;
+ return ngx_http_send_special(r, NGX_HTTP_LAST);
}

b->file_pos = c->body_start;
Hello!

On Wed, Sep 30, 2009 at 12:42:52PM +0400, Maxim Dounin wrote:

> On Wed, Sep 30, 2009 at 11:26:59AM +0800, tOmasEn wrote:

[...]

> > On the initial request, when there isn't any cache, everything is OK. The
> > following request to the same URL will wait until timeout.
>
> Ok, so the problem is cache. Thanks, I'm able to reproduce it
> here. I'll take a look later today how to fix it.

Patch.

Maxim Dounin

>
> Maxim Dounin
>
> [...]
>
> >
> > On Tue, Sep 29, 2009 at 10:48 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
> >
> > > Hello!
> > >
> > > On Tue, Sep 29, 2009 at 10:08:51PM +0800, tOmasEn wrote:
> > >
> > > > I've been experiencing very slow page loads when using nginx as a frontend
> > > > (with proxy_pass) for a while.
> > > >
> > > > After some testing and debugging, I found that it always times out on the
> > > > response of a 0-byte file.
> > > >
> > > > So I think there might be a bug when nginx is running in proxy mode and
> > > > serving 0-byte files. The frontend considers that there should be more
> > > > data and waits until timeout, or something like this.
> > >
> > > Could you please provide nginx -V output and debug log?
> > >
> > > Maxim Dounin
> > >
> > > >
> > > > Btw. Nginx is great. Thanks
> > > >
> > > > tomasen
> > > >
> > > > --
> > > > Sent from my mobile device
> > > >
> > >
> > >
>

Re: Nginx proxying to Apache: what about mod_rewrite?

By the way, here is a bigger snippet of my nginx.conf file, showing how I'm dealing with static files.
I just need to connect nginx and Apache properly, as the static files are served via another domain...

server {
    listen 0.0.0.0:80;
    server_name static.domain.com;

    location / {
        root /home/web/static/;
        expires 40d;
        add_header Cache-Control public;
    }
}

server {
    listen 0.0.0.0:80;
    server_name domain.com www.domain.com;

    location / {
        # proxy to Apache
        proxy_pass http://127.0.0.1:8030;
    }
}


Thanks!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,10271,10303#msg-10303

Re: Nginx proxying to Apache: what about mod_rewrite?

To Gabriel:
Actually, I get an Apache 404 Error:
"The requested URL /es was not found on this server."

To Héctor:
I have also tried your suggestion, but then I can't get the nginx.conf file to work :)
Where do I have to write this code exactly?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,10271,10300#msg-10300

Re: Nginx proxying to Apache: what about mod_rewrite?

<quote who="illarra">

> RewriteRule ^(.*)$ index.php?r=$1?%{QUERY_STRING}
>
> The question is, where should I put the rewrite condition? in Nginx
> (adapted, of course) or Apache?

You could do either, but you may as well do it in nginx and save Apache the
effort. As a bonus, you can get nginx to serve static files before it passes
the rewritten request to Apache:

try_files $uri $uri/ /index.php?r=$request_uri;

Two birds. One stone. :-)
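To make the placement concrete, here is a sketch of how that directive might sit in the nginx configuration (the server name and backend address are borrowed from earlier in the thread; treat the layout as an assumption, not a tested config):

```nginx
server {
    listen 80;
    server_name domain.com www.domain.com;

    # Assumed docroot shared with Apache, so nginx can find the static files.
    root /home/www;

    location / {
        # Serve the file or directory if it exists on disk;
        # otherwise hand the rewritten request to /index.php.
        try_files $uri $uri/ /index.php?r=$request_uri;
    }

    location = /index.php {
        # Only rewritten requests reach Apache.
        proxy_pass http://127.0.0.1:8030;
    }
}
```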

- Jeff

--
linux.conf.au 2010: Wellington, NZ http://www.lca2010.org.nz/

"Maybe you should put some shorts on or something, if you want to keep
fighting evil today." - The Bowler, Mystery Men

Re: Nginx proxying to Apache: what about mod_rewrite?

Hi,

It's OK: since you're using proxy_pass in nginx, rewrite rules in Apache should work. I have a similar configuration and it works fine.
What happens when you try to access the domain? A 404? index.php?

Regards,

Héctor Paz

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,10271,10289#msg-10289

Re: X-Accel-Redirect

Most of that is right, but the file does not have to be on disk. If
you X-Accel-Redirect using an http:// link instead of a file path, that
will also work.

On Wed, Sep 30, 2009 at 1:59 AM, Jeff Waugh <jdub@bethesignal.org> wrote:
> <quote who="pepejose">
>
>> -- In the first case X-Accel-Redirect not working because the image is in
>> memory?  -- If I change the second case to:
>>
>> header("Content-type: image/jpeg"); header("X-Accel-Redirect: /$image");
>> readfile($image);
>>
>> how can I know if the header X-Accel-Redirect is working?
>
> The X-Accel-Redirect header basically says:
>
>  Dear nginx (or some other frontend),
>
>  I know you're talking to me with fastcgi or http (proxy), but now I've
>  figured out that I'm just going to be sending the client a file. You can
>  do this better than me (by talking directly to the kernel and disk, and
>  not sending the file over the fastcgi or proxy connection), so here's the
>  name of the file.
>
>  Love,
>
>  Backend Application
>
>
> So, what you want to do in your backend is this:
>
>  header("Content-type: image/jpeg");
>  if ( X_ACCEL_DIRECT ) { // this should be configurable in a productised app
>    header("X-Accel-Redirect: /$image");
>    exit;
>  }
>  readfile($image); // send the image if we're not using X-Accel-Redirect
>
>
> The initial path should probably be mildly more unique though, so you can
> deal with it separately in your nginx configuration. But not necessarily.
>
> Short answer: X-Accel-Redirect is helpful only when the file is on disk.
>
> :-)
>
> - Jeff
>
> --
> linux.conf.au 2010: Wellington, NZ                http://www.lca2010.org.nz/
>
>  "First-born children are less creative but more stable, while last-born
>         are more promiscuous, says US research." - BBC News, 2005
>
>

Re: Nginx proxying to Apache: what about mod_rewrite?

I would think it would work from either. In Apache, it would be in an
.htaccess file in the directory that you want the rewrite rule to
apply to.

On Wed, Sep 30, 2009 at 10:01 AM, illarra <nginx-forum@nginx.us> wrote:
> Hi!
>
> I have a website running on a Nginx + Apache configuration.
> Nginx serves the static content (in a different domain) and proxies directly to Apache, where all the PHP is executed.
>
> The problem is that my app uses mod_rewrite, and I can't find where I have to put the rules.
>
> This is a snippet of nginx.conf...
>
> server {
>    listen 0.0.0.0:80;
>    server_name domain.com;
>    location / {
>        proxy_pass http://127.0.0.1:8030;
>    }
> }
>
>
> And the Apache configuration...
>
>
> ServerName domain.com
> ServerAlias www.domain.com
> DocumentRoot /home/www
>
>    Options -Indexes IncludesNOEXEC FollowSymLinks -MultiViews
>    AllowOverride All
>    Order allow,deny
>    Allow from all
>
>
>
>
> The rewrite rule in Apache was this one, and it worked before I created the Nginx + Apache configuration.
>
> RewriteRule ^(.*)$ index.php?r=$1?%{QUERY_STRING}
>
>
> The question is, where should I put the rewrite condition? in Nginx (adapted, of course) or Apache?
>
> Thanks for having a look!
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,10271,10271#msg-10271
>
>
>

Nginx proxying to Apache: what about mod_rewrite?

Hi!

I have a website running on a Nginx + Apache configuration.
Nginx serves the static content (in a different domain) and proxies directly to Apache, where all the PHP is executed.

The problem is that my app uses mod_rewrite, and I can't find where I have to put the rules.

This is a snippet of nginx.conf...

server {
    listen 0.0.0.0:80;
    server_name domain.com;
    location / {
        proxy_pass http://127.0.0.1:8030;
    }
}


And the Apache configuration...


ServerName domain.com
ServerAlias www.domain.com
DocumentRoot /home/www

Options -Indexes IncludesNOEXEC FollowSymLinks -MultiViews
AllowOverride All
Order allow,deny
Allow from all


The rewrite rule in Apache was this one, and it worked before I created the Nginx + Apache configuration.

RewriteRule ^(.*)$ index.php?r=$1?%{QUERY_STRING}


The question is, where should I put the rewrite condition? in Nginx (adapted, of course) or Apache?

Thanks for having a look!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,10271,10271#msg-10271

Re: X-Accel-Redirect

thank you very much for helping!

X-Accel-Redirect header seems to work correctly ;)

I still have some things to fix but overall I'm very happy with the performance of nginx as frontend of apache.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,10152,10233#msg-10233

RE: how to Grab arguments in POST request?


I'm sorry!

I already changed it!

I found an argument in nginx, $request_body,
but it
> From: wmark+nginx@hurrikane.de
> Date: Wed, 30 Sep 2009 13:21:01 +0200
> Subject: Re: how to Grab arguments in POST request?
> To: nginx@sysoev.ru
>
>>> On Wed, 2009-09-30 at 15:23 +0800, dennis cao wrote:
>>>>
>>>> how to Grab arguments in POST request?
>>>>
>>> From: cliff@develix.com
>>>
>>> If you stop yelling, you might be able to sneak up on them.
>>>
> 2009/9/30 dennis cao <dennis__cao@hotmail.com>:
>> what does your mean?
>>
>
> 请填写你的电子邮件以纯/文本。
> 谢谢
> (Please write your emails in plain text. Thank you.)
>
> --
> W-Mark Kubacki
> http://mark.ossdl.de/
>

Re: X-Accel-Redirect

<quote who="Kiril Angov">

> X-Accel-Redirect is for sending files already on the hard drive, rather
> than dynamically generated ones.

It's designed for this, sure, but the way it's handled with nginx, you can
do any kind of Crazy Shit (tm) at the front end if you wish. :-)

location ^~ /protected {
    proxy_pass http://www.disney.com/; # <- Crazy Shit (tm)
}

;-)

- Jeff

--
linux.conf.au 2010: Wellington, NZ http://www.lca2010.org.nz/

"Stupidity is used to run 98% of the world's corporations, which tops
UNIX server usage by quite a bit." - George Lebl

Re: RPC over HTTPS

Thanks for the response. Just to be clear: I have got Outlook Web Access (OWA) working fine with nginx, except for the 'Outlook Anywhere' functionality (formerly called RPC over HTTP(S)).

I think this is all related to the fact that the methods used (RPC_IN_DATA and RPC_OUT_DATA) declare an artificially large Content-Length (1GB on the RPC_IN_DATA) to keep the connection open. Because the nginx proxy tries to pre-fetch this (which will never complete), the end user's connection just times out. I have tried turning various buffering switches off, but it still is not working. There is a really good explanation of the problem I have on an Apache bug which has been re-opened (see http://209.85.229.132/search?q=cache:ejPagX7DOF8J:issues.apache.org/bugzilla/show_bug.cgi%3Fid%3D40029+rpc_in_data+%221073741824+bytes%22&hl=en&gl=uk&strip=1), so nginx is not alone.

I don't understand whether this is something that will never work in nginx/reverse proxies or if it is just down to configuration. I would be really interested to know if anyone has got RPC over HTTPS working through nginx. My debug log just shows the connection timing out while it is waiting for 1GB of data that it won't get!
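For reference, "turning buffering switches off" would look something like this (a sketch only; the location and backend address are hypothetical):

```nginx
# Hypothetical location for the Outlook Anywhere / RPC virtual directory.
location /rpc {
    proxy_pass https://exchange.internal;  # assumed Exchange backend
    # Stream the response to the client instead of buffering it first.
    proxy_buffering off;
}
```

Note that proxy_buffering only affects the response direction; it does not change how nginx handles the artificially large request body.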

cates.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,3511,10195#msg-10195

Re: X-Accel-Redirect

X-Accel-Redirect is for sending files already on the hard drive,
rather than dynamically generated ones. The point is to do some logic
in a scripting language and then send a file to the user without them
knowing the real location on the server. Most of the time with:

location /protected {
    internal;
}
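Spelled out slightly more (a sketch; the alias path is an assumption), the nginx side of that pattern might be:

```nginx
# Only reachable via X-Accel-Redirect from the backend; direct
# client requests to /protected/... are answered with 404.
location /protected {
    internal;
    alias /srv/files/protected;  # assumed real location of the files on disk
}
```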

On Wed, Sep 30, 2009 at 1:07 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
> Hello!
>
> On Wed, Sep 30, 2009 at 06:59:26PM +1000, Jeff Waugh wrote:
>
>> <quote who="pepejose">
>>
>> > -- In the first case X-Accel-Redirect not working because the image is in
>> > memory?  -- If I change the second case to:
>> >
>> > header("Content-type: image/jpeg"); header("X-Accel-Redirect: /$image");
>> > readfile($image);
>> >
>> > how can I know if the header X-Accel-Redirect is working?
>>
>> The X-Accel-Redirect header basically says:
>>
>>   Dear nginx (or some other frontend),
>>
>>   I know you're talking to me with fastcgi or http (proxy), but now I've
>>   figured out that I'm just going to be sending the client a file. You can
>>   do this better than me (by talking directly to the kernel and disk, and
>>   not sending the file over the fastcgi or proxy connection), so here's the
>>   name of the file.
>>
>>   Love,
>>
>>   Backend Application
>>
>>
>> So, what you want to do in your backend is this:
>>
>>   header("Content-type: image/jpeg");
>>   if ( X_ACCEL_DIRECT ) { // this should be configurable in a productised app
>>     header("X-Accel-Redirect: /$image");
>>     exit;
>>   }
>>   readfile($image); // send the image if we're not using X-Accel-Redirect
>>
>>
>> The initial path should probably be mildly more unique though, so you can
>> deal with it separately in your nginx configuration. But not necessarily.
>>
>> Short answer: X-Accel-Redirect is helpful only when the file is on disk.
>
> Not really.  It may as well mean "Dear nginx, I'm really sorry but
> I have no file in question available here.  But I'm sure it should
> be available from this URI (that will map to different backend),
> try it."
>
> But sending file contents along with X-Accel-Redirect header is of
> course meaningless.
>
> Maxim Dounin
>
>>
>> :-)
>>
>> - Jeff
>>
>> --
>> linux.conf.au 2010: Wellington, NZ                http://www.lca2010.org.nz/
>>
>>   "First-born children are less creative but more stable, while last-born
>>          are more promiscuous, says US research." - BBC News, 2005
>>
>
>

Re: how to Grab arguments in POST request?

>> On Wed, 2009-09-30 at 15:23 +0800, dennis cao wrote:
>>>
>>> how to Grab arguments in POST request?
>>>
>> From: cliff@develix.com
>>
>> If you stop yelling, you might be able to sneak up on them.
>>
2009/9/30 dennis cao <dennis__cao@hotmail.com>:
> what does your mean?
>

请填写你的电子邮件以纯/文本。
谢谢
(Please write your emails in plain text. Thank you.)

--
W-Mark Kubacki
http://mark.ossdl.de/

Re: RPC over HTTPS

2009/9/30 cates <nginx-forum@nginx.us>:
> Did you ever get this working? I believe Apache and squid can both do this so I am not sure why nginx couldn't.
>
> Has anyone got RPC over HTTPS working with nginx as a reverse proxy??

What do you mean by RPC?

Sure, GET, PUT, and POST RESTful requests work, as does SOAP.
Or do you have the Microsoft RPC/port wrapping in mind, which has to
be set up with Exchange, e.g. for Outlook Web Access (OWA)? The latter
does not even work with Apache (for a workaround, see [1]).

--
Mark
http://mark.ossdl.de/

[1] http://mark.ossdl.de/2009/01/iis-reverse-proxy-for-apache/

Re: X-Accel-Redirect

Hello!

On Wed, Sep 30, 2009 at 06:59:26PM +1000, Jeff Waugh wrote:

> <quote who="pepejose">
>
> > -- In the first case X-Accel-Redirect not working because the image is in
> > memory? -- If I change the second case to:
> >
> > header("Content-type: image/jpeg"); header("X-Accel-Redirect: /$image");
> > readfile($image);
> >
> > how can I know if the header X-Accel-Redirect is working?
>
> The X-Accel-Redirect header basically says:
>
> Dear nginx (or some other frontend),
>
> I know you're talking to me with fastcgi or http (proxy), but now I've
> figured out that I'm just going to be sending the client a file. You can
> do this better than me (by talking directly to the kernel and disk, and
> not sending the file over the fastcgi or proxy connection), so here's the
> name of the file.
>
> Love,
>
> Backend Application
>
>
> So, what you want to do in your backend is this:
>
> header("Content-type: image/jpeg");
> if ( X_ACCEL_DIRECT ) { // this should be configurable in a productised app
> header("X-Accel-Redirect: /$image");
> exit;
> }
> readfile($image); // send the image if we're not using X-Accel-Redirect
>
>
> The initial path should probably be mildly more unique though, so you can
> deal with it separately in your nginx configuration. But not necessarily.
>
> Short answer: X-Accel-Redirect is helpful only when the file is on disk.

Not really. It may as well mean "Dear nginx, I'm really sorry but
I have no file in question available here. But I'm sure it should
be available from this URI (that will map to different backend),
try it."

But sending file contents along with X-Accel-Redirect header is of
course meaningless.

Maxim Dounin

>
> :-)
>
> - Jeff
>
> --
> linux.conf.au 2010: Wellington, NZ http://www.lca2010.org.nz/
>
> "First-born children are less creative but more stable, while last-born
> are more promiscuous, says US research." - BBC News, 2005
>

Re: X-Accel-Redirect

<quote who="pepejose">

> in short, how can I verify that X-Accel-Redirect is working?
>
> I looked at the headers with the plugin for firefox live http headers but
> I see no differences

Add a header to an exclusive chunk of your PHP code or nginx configuration.
For example:

if ( X_ACCEL_REDIRECT ) {
    header("X-Accel-Redirect: /$image"); // nginx will strip this
    header("X-For-The-Win: win"); // nginx won't strip this though
    exit;
}
readfile($image); // won't *ever* have the X-For-The-Win header

You could also do something on the nginx side, but this lets you know that
your app is at least doing the right thing.
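On the nginx side, one hypothetical marker would be an extra header in the location that serves the redirected files (the location name is an assumption):

```nginx
location /protected {
    internal;
    # If this header reaches the browser, nginx really did handle
    # the request via X-Accel-Redirect rather than the backend.
    add_header X-Accel-Used "yes";
}
```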

If you're proxy_pass-ing to the app, you can also just use curl or wget to
talk to it directly, to see which headers it's passing to nginx.

(Note that you can configure nginx to do *anything* based on the URL it gets
from the backend -> you could proxy_pass again, fastcgi_pass, whatever. But
it is most useful when you tell nginx to just serve up a static file.)

- Jeff

--
linux.conf.au 2010: Wellington, NZ http://www.lca2010.org.nz/

You know the end is nigh when modern art is relegated to the status of
"meme".

Re: X-Accel-Redirect

Jeff Waugh Wrote:
-------------------------------------------------------
> > -- In the first case X-Accel-Redirect not working because the image is in
> > memory? -- If I change the second case to:
> >
> > header("Content-type: image/jpeg"); header("X-Accel-Redirect: /$image");
> > readfile($image);
> >
> > how can I know if the header X-Accel-Redirect is working?
>
> The X-Accel-Redirect header basically says:
>
> Dear nginx (or some other frontend),
>
> I know you're talking to me with fastcgi or http (proxy), but now I've
> figured out that I'm just going to be sending the client a file. You can
> do this better than me (by talking directly to the kernel and disk, and
> not sending the file over the fastcgi or proxy connection), so here's the
> name of the file.
>
> Love,
>
> Backend Application
>
>
> So, what you want to do in your backend is this:
>
> header("Content-type: image/jpeg");
> if ( X_ACCEL_DIRECT ) { // this should be configurable in a productised app
> header("X-Accel-Redirect: /$image");
> exit;
> }
> readfile($image); // send the image if we're not using X-Accel-Redirect
>
>
> The initial path should probably be mildly more unique though, so you can
> deal with it separately in your nginx configuration. But not necessarily.
>
> Short answer: X-Accel-Redirect is helpful only when the file is on disk.
>
> :-)
>
> - Jeff
>

many thanks to both for responding

in short, how can I verify that X-Accel-Redirect is working?

I looked at the headers with the plugin for firefox live http headers but I see no differences

thanks!!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,10152,10170#msg-10170

Re: X-Accel-Redirect

<quote who="pepejose">

> -- In the first case X-Accel-Redirect not working because the image is in
> memory? -- If I change the second case to:
>
> header("Content-type: image/jpeg"); header("X-Accel-Redirect: /$image");
> readfile($image);
>
> how can I know if the header X-Accel-Redirect is working?

The X-Accel-Redirect header basically says:

Dear nginx (or some other frontend),

I know you're talking to me with fastcgi or http (proxy), but now I've
figured out that I'm just going to be sending the client a file. You can
do this better than me (by talking directly to the kernel and disk, and
not sending the file over the fastcgi or proxy connection), so here's the
name of the file.

Love,

Backend Application


So, what you want to do in your backend is this:

header("Content-type: image/jpeg");
if ( X_ACCEL_DIRECT ) { // this should be configurable in a productised app
    header("X-Accel-Redirect: /$image");
    exit;
}
readfile($image); // send the image if we're not using X-Accel-Redirect


The initial path should probably be mildly more unique though, so you can
deal with it separately in your nginx configuration. But not necessarily.

Short answer: X-Accel-Redirect is helpful only when the file is on disk.

:-)

- Jeff

--
linux.conf.au 2010: Wellington, NZ http://www.lca2010.org.nz/

"First-born children are less creative but more stable, while last-born
are more promiscuous, says US research." - BBC News, 2005

RE: how to Grab arguments in POST request?

what does your mean?

> Subject: Re: how to Grab arguments in POST request?
> From: cliff@develix.com
> To: nginx@sysoev.ru
> Date: Wed, 30 Sep 2009 01:24:36 -0700
>
> On Wed, 2009-09-30 at 15:23 +0800, dennis cao wrote:
> > Dear ALL:
> >
> > how to Grab arguments in POST request?
>
> If you stop yelling, you might be able to sneak up on them.
>
> Cliff
>
>



Re: (bug?)Timeout when proxy-pass 0 byte file

Hello!

On Wed, Sep 30, 2009 at 11:26:59AM +0800, tOmasEn wrote:

> here is my conf
>
> http{
> ...
> proxy_buffer_size 4k;
> proxy_buffers 1024 4k;
> proxy_temp_path /data/nginx/proxy_temp ;
> proxy_cache_path /data/nginx/proxy_cache levels=1:2
> keys_zone=cache1:1000m;
> ...
> server{
> ...
> location ~* \.(ico|css|js|gif|jp?g|png|xsl)$ {
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> proxy_set_header Host $http_host;
> proxy_redirect off;
> proxy_pass http://61.129.66.75:80;
> proxy_cache_key shtatic$request_uri;
> proxy_cache cache1;
> break;
> }
> ....
> }
>
> On the initial request, when there isn't any cache, everything is OK. The
> following request to the same URL will wait until timeout.

Ok, so the problem is cache. Thanks, I'm able to reproduce it
here. I'll take a look later today how to fix it.

Maxim Dounin

[...]

>
> On Tue, Sep 29, 2009 at 10:48 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
>
> > Hello!
> >
> > On Tue, Sep 29, 2009 at 10:08:51PM +0800, tOmasEn wrote:
> >
> > > I've been experiencing very slow page loads when using nginx as a frontend
> > > (with proxy_pass) for a while.
> > >
> > > After some testing and debugging, I found that it always times out on the
> > > response of a 0-byte file.
> > >
> > > So I think there might be a bug when nginx is running in proxy mode and
> > > serving 0-byte files. The frontend considers that there should be more
> > > data and waits until timeout, or something like this.
> >
> > Could you please provide nginx -V output and debug log?
> >
> > Maxim Dounin
> >
> > >
> > > Btw. Nginx is great. Thanks
> > >
> > > tomasen
> > >
> > > --
> > > Sent from my mobile device
> > >
> >
> >

Re: X-Accel-Redirect

X-Accel-Redirect also works with proxied URL locations, in addition to
files on disk. I don't know about in-memory items.

On Wed, Sep 30, 2009 at 12:44 AM, pepejose <nginx-forum@nginx.us> wrote:
> hi! (sorry for my english)
>
> that header only be used with files that are on the hard disk, true ?
>
> because I have a script to resize the size of some images with php,
>
> in the script i use... getimagesize($image) , then, depending on the size:
>
> 1) $thumb=ImageCreateTrueColor($new_x,$new_y)
> $im=@imagecreatefromjpeg($image); imagecopyresampled($thumb,$im,0,0,0,0,$new_x,$new_y,$old_x,$old_y); header("Content-type: image/jpeg"); imagejpeg($thumb,'',70);
>
> or
>
> 2) header("Content-type: image/jpeg"); readfile($image);
>
>
> -- In the first case X-Accel-Redirect not working because the image is in memory?
> -- If I change the second case to:
>
> header("Content-type: image/jpeg"); header("X-Accel-Redirect: /$image"); readfile($image);
>
> how can I know if the header X-Accel-Redirect is working?
>
> thanks!!!!
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,10152,10152#msg-10152
>
>
>

Re: how to Grab arguments in POST request?

On Wed, 2009-09-30 at 15:23 +0800, dennis cao wrote:
> Dear ALL:
>
> how to Grab arguments in POST request?

If you stop yelling, you might be able to sneak up on them.

Cliff

Re: RPC over HTTPS

Did you ever get this working? I believe Apache and squid can both do this so I am not sure why nginx couldn't.

Has anyone got RPC over HTTPS working with nginx as a reverse proxy??

Regs,
Cates.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,3511,10155#msg-10155

X-Accel-Redirect

hi! (sorry for my English)

that header can only be used with files that are on the hard disk, true?

because I have a script that resizes some images with PHP;

in the script I use getimagesize($image), then, depending on the size:

1) $thumb=ImageCreateTrueColor($new_x,$new_y)
$im=@imagecreatefromjpeg($image); imagecopyresampled($thumb,$im,0,0,0,0,$new_x,$new_y,$old_x,$old_y); header("Content-type: image/jpeg"); imagejpeg($thumb,'',70);

or

2) header("Content-type: image/jpeg"); readfile($image);


-- In the first case, is X-Accel-Redirect not working because the image is in memory?
-- If I change the second case to:

header("Content-type: image/jpeg"); header("X-Accel-Redirect: /$image"); readfile($image);

how can I know if the header X-Accel-Redirect is working?

thanks!!!!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,10152,10152#msg-10152

how to Grab arguments in POST request?

Dear ALL:

how to Grab arguments in POST request?




Weird nginx/php-fpm issue

Hello,
I got a weird issue with the new PHP 5.2.11 and the FPM patch. Until PHP
5.2.11, I was running PHP 5.2.10 with the FPM patch and nginx 0.8.17, and it
worked perfectly. Now I have upgraded to PHP 5.2.11 and, of course, added
the FPM patch, and now this entry:

location / {
    root /srv/www/cms;
    index index.php index.html index.htm;
    error_page 404 = /index.php;
}

doesn't work anymore. Remember, before, with the same setup but a
different version of PHP, it worked without any issue. I didn't change
anything: no php.ini configuration, no nginx configuration, nothing
besides the PHP and PHP-FPM upgrade.

Does anyone have an idea why this is happening?
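For comparison, the PHP requests themselves are usually handed to php-fpm in a separate block like this (a sketch; the listen address and paths are assumptions, not taken from the poster's config):

```nginx
location ~ \.php$ {
    root /srv/www/cms;
    fastcgi_pass 127.0.0.1:9000;   # assumed php-fpm listen address
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```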
--
Posted via http://www.ruby-forum.com/.

Tuesday, September 29, 2009

Re: "bus error" on Linux Sparc

On 30.09.2009, at 8:46, Igor Sysoev <is@rambler-co.ru> wrote:

> On Wed, Sep 30, 2009 at 12:12:26AM +0400, Igor Sysoev wrote:
>
>> On Tue, Sep 29, 2009 at 12:05:37PM +0400, Igor Sysoev wrote:
>>
>>> On Sat, Sep 26, 2009 at 08:38:36PM -0400, marcusramberg wrote:
>>>
>>>> Hey
>>>>
>>>> Did you get anywhere with this issue? I am experiencing it as
>>>> well with a new web node I'm trying to set up for iusethis.com,
>>>> on a Sun T1000 running Debian.
>>>
>>> The bug has happened some time before the "bus error" occurs.
>>> It's not easy to find the cause by gdb back trace in this case.
>>> If anyone can give me access to Sparc Debian box where this error
>>> can
>>> be reproduced I will fix it much more quickly.
Is there a way to reproduce this type of error running just an
emulator? If there is, I can gladly set up a continuous testing
cycle based on the test set by Maxim Dounin.

There is TestSwarm solving this problem for JavaScript developers,
and it works great :)

>>
>> The attached patch should fix the bug.
>
> The updated patch.
>
>
> --
> Igor Sysoev
> http://sysoev.ru/en/
> <patch.sparc.linux1.txt>

Re: "bus error" on Linux Sparc

Index: src/core/nginx.c
===================================================================
--- src/core/nginx.c (revision 2494)
+++ src/core/nginx.c (working copy)
@@ -280,6 +280,9 @@
init_cycle.log = log;
ngx_cycle = &init_cycle;

+ /* dummy pagesize to create aligned pool */
+ ngx_pagesize = 1024;
+
init_cycle.pool = ngx_create_pool(1024, log);
if (init_cycle.pool == NULL) {
return 1;
Index: src/core/ngx_palloc.c
===================================================================
--- src/core/ngx_palloc.c (revision 2494)
+++ src/core/ngx_palloc.c (working copy)
@@ -17,7 +17,7 @@
{
ngx_pool_t *p;

- p = ngx_alloc(size, log);
+ p = ngx_memalign(ngx_pagesize, size, log);
if (p == NULL) {
return NULL;
}
@@ -181,7 +181,7 @@

psize = (size_t) (pool->d.end - (u_char *) pool);

- m = ngx_alloc(psize, pool->log);
+ m = ngx_memalign(ngx_pagesize, psize, pool->log);
if (m == NULL) {
return NULL;
}
@@ -219,7 +219,7 @@
ngx_uint_t n;
ngx_pool_large_t *large;

- p = ngx_alloc(size, pool->log);
+ p = ngx_memalign(ngx_pagesize, size, pool->log);
if (p == NULL) {
return NULL;
}
On Wed, Sep 30, 2009 at 12:12:26AM +0400, Igor Sysoev wrote:

> On Tue, Sep 29, 2009 at 12:05:37PM +0400, Igor Sysoev wrote:
>
> > On Sat, Sep 26, 2009 at 08:38:36PM -0400, marcusramberg wrote:
> >
> > > Hey
> > >
> > > Did you get anywhere with this issue? I am experiencing it as well with a new web node I'm trying to set up for iusethis.com, on a Sun T1000 running Debian.
> >
> > The bug has happened some time before the "bus error" occurs.
> > It's not easy to find the cause by gdb back trace in this case.
> > If anyone can give me access to Sparc Debian box where this error can
> > be reproduced I will fix it much more quickly.
>
> The attached patch should fix the bug.

The updated patch.


--
Igor Sysoev
http://sysoev.ru/en/

Re: (bug?)Timeout when proxy-pass 0 byte file


here is my conf

http {
    ...
    proxy_buffer_size 4k;
    proxy_buffers 1024 4k;
    proxy_temp_path /data/nginx/proxy_temp;
    proxy_cache_path /data/nginx/proxy_cache levels=1:2 keys_zone=cache1:1000m;
    ...
    server {
        ...
        location ~* \.(ico|css|js|gif|jp?g|png|xsl)$ {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://61.129.66.75:80;
            proxy_cache_key shtatic$request_uri;
            proxy_cache cache1;
            break;
        }
        ....
    }

On the initial request, when there isn't any cache, everything is OK. The following request to the same URL will wait until timeout.

this is the debug log around the second request:

2009/09/30 10:53:57 [debug] 6733#0: event timer del: -1: 1254279237061
2009/09/30 10:53:57 [debug] 6733#0: http file cache expire
2009/09/30 10:53:57 [debug] 6733#0: malloc: 000000000BB9B890:62
2009/09/30 10:53:57 [debug] 6733#0: http file cache size: 1
2009/09/30 10:53:57 [debug] 6733#0: event timer add: -1: 10000:1254279247061
2009/09/30 10:53:57 [debug] 6733#0: posted events 0000000000000000
2009/09/30 10:53:57 [debug] 6733#0: epoll timer: 10000
2009/09/30 10:54:07 [debug] 6733#0: timer delta: 10000
2009/09/30 10:54:07 [debug] 6733#0: event timer del: -1: 1254279247061
2009/09/30 10:54:07 [debug] 6733#0: http file cache expire
2009/09/30 10:54:07 [debug] 6733#0: malloc: 000000000BB9B890:62
2009/09/30 10:54:07 [debug] 6733#0: http file cache size: 1
2009/09/30 10:54:07 [debug] 6733#0: event timer add: -1: 10000:1254279257061
2009/09/30 10:54:07 [debug] 6733#0: posted events 0000000000000000
2009/09/30 10:54:07 [debug] 6733#0: epoll timer: 10000
2009/09/30 10:54:15 [debug] 6732#0: timer delta: 38923
2009/09/30 10:54:15 [debug] 6732#0: posted events 0000000000000000
2009/09/30 10:54:15 [debug] 6732#0: worker cycle
2009/09/30 10:54:15 [debug] 6732#0: epoll timer: 1
2009/09/30 10:54:15 [debug] 6732#0: timer delta: 4
2009/09/30 10:54:15 [debug] 6732#0: *4 event timer del: 13: 1254279255991
2009/09/30 10:54:15 [debug] 6732#0: *4 http keepalive handler
2009/09/30 10:54:15 [debug] 6732#0: *4 close http connection: 13
2009/09/30 10:54:15 [debug] 6732#0: *4 free: 000000000BC58070
2009/09/30 10:54:15 [debug] 6732#0: *4 free: 0000000000000000
2009/09/30 10:54:15 [debug] 6732#0: *4 free: 000000000BB9B890, unused: 8
2009/09/30 10:54:15 [debug] 6732#0: *4 free: 000000000BB9BAE0, unused: 128
2009/09/30 10:54:15 [debug] 6732#0: posted events 0000000000000000
2009/09/30 10:54:15 [debug] 6732#0: worker cycle
2009/09/30 10:54:15 [debug] 6732#0: epoll timer: -1
2009/09/30 10:54:17 [debug] 6733#0: timer delta: 10001
2009/09/30 10:54:17 [debug] 6733#0: event timer del: -1: 1254279257061
2009/09/30 10:54:17 [debug] 6733#0: http file cache expire
2009/09/30 10:54:17 [debug] 6733#0: malloc: 000000000BB9B890:62
2009/09/30 10:54:17 [debug] 6733#0: http file cache size: 1
2009/09/30 10:54:17 [debug] 6733#0: event timer add: -1: 10000:1254279267063
2009/09/30 10:54:17 [debug] 6733#0: posted events 0000000000000000
2009/09/30 10:54:17 [debug] 6733#0: epoll timer: 10000
2009/09/30 10:54:27 [debug] 6733#0: timer delta: 10000
2009/09/30 10:54:27 [debug] 6733#0: event timer del: -1: 1254279267063
2009/09/30 10:54:27 [debug] 6733#0: http file cache expire
2009/09/30 10:54:27 [debug] 6733#0: malloc: 000000000BB9B890:62
2009/09/30 10:54:27 [debug] 6733#0: http file cache size: 1
2009/09/30 10:54:27 [debug] 6733#0: event timer add: -1: 10000:1254279277063
2009/09/30 10:54:27 [debug] 6733#0: posted events 0000000000000000
2009/09/30 10:54:27 [debug] 6733#0: epoll timer: 10000
2009/09/30 10:54:37 [debug] 6733#0: timer delta: 10001
2009/09/30 10:54:37 [debug] 6733#0: event timer del: -1: 1254279277063
2009/09/30 10:54:37 [debug] 6733#0: http file cache expire
2009/09/30 10:54:37 [debug] 6733#0: malloc: 000000000BB9B890:62
2009/09/30 10:54:37 [debug] 6733#0: http file cache size: 1
2009/09/30 10:54:37 [debug] 6733#0: event timer add: -1: 10000:1254279287064
2009/09/30 10:54:37 [debug] 6733#0: posted events 0000000000000000
2009/09/30 10:54:37 [debug] 6733#0: epoll timer: 10000
2009/09/30 10:54:47 [debug] 6733#0: timer delta: 10000
2009/09/30 10:54:47 [debug] 6733#0: event timer del: -1: 1254279287064
2009/09/30 10:54:47 [debug] 6733#0: http file cache expire
2009/09/30 10:54:47 [debug] 6733#0: malloc: 000000000BB9B890:62
2009/09/30 10:54:47 [debug] 6733#0: http file cache size: 1
2009/09/30 10:54:47 [debug] 6733#0: event timer add: -1: 10000:1254279297064
2009/09/30 10:54:47 [debug] 6733#0: posted events 0000000000000000
2009/09/30 10:54:47 [debug] 6733#0: epoll timer: 10000
2009/09/30 10:54:57 [debug] 6733#0: timer delta: 10000
2009/09/30 10:54:57 [debug] 6733#0: event timer del: -1: 1254279297064
2009/09/30 10:54:57 [debug] 6733#0: http file cache expire
2009/09/30 10:54:57 [debug] 6733#0: malloc: 000000000BB9B890:62
2009/09/30 10:54:57 [debug] 6733#0: http file cache size: 1
2009/09/30 10:54:57 [debug] 6733#0: event timer add: -1: 10000:1254279307064
2009/09/30 10:54:57 [debug] 6733#0: posted events 0000000000000000
2009/09/30 10:54:57 [debug] 6733#0: epoll timer: 10000
2009/09/30 10:55:07 [debug] 6733#0: timer delta: 10002
2009/09/30 10:55:07 [debug] 6733#0: event timer del: -1: 1254279307064
2009/09/30 10:55:07 [debug] 6733#0: http file cache expire
2009/09/30 10:55:07 [debug] 6733#0: malloc: 000000000BB9B890:62
2009/09/30 10:55:07 [debug] 6733#0: http file cache size: 1
2009/09/30 10:55:07 [debug] 6733#0: event timer add: -1: 10000:1254279317066
2009/09/30 10:55:07 [debug] 6733#0: posted events 0000000000000000
2009/09/30 10:55:07 [debug] 6733#0: epoll timer: 10000
2009/09/30 10:55:17 [debug] 6733#0: timer delta: 10000
2009/09/30 10:55:17 [debug] 6733#0: event timer del: -1: 1254279317066
2009/09/30 10:55:17 [debug] 6733#0: http file cache expire
2009/09/30 10:55:17 [debug] 6733#0: malloc: 000000000BB9B890:62
2009/09/30 10:55:17 [debug] 6733#0: http file cache size: 1
2009/09/30 10:55:17 [debug] 6733#0: event timer add: -1: 10000:1254279327066
2009/09/30 10:55:17 [debug] 6733#0: posted events 0000000000000000
2009/09/30 10:55:17 [debug] 6733#0: epoll timer: 10000
2009/09/30 10:55:27 [debug] 6733#0: timer delta: 10001
2009/09/30 10:55:27 [debug] 6733#0: event timer del: -1: 1254279327066
2009/09/30 10:55:27 [debug] 6733#0: http file cache expire
2009/09/30 10:55:27 [debug] 6733#0: malloc: 000000000BB9B890:62
2009/09/30 10:55:27 [debug] 6733#0: http file cache size: 1
2009/09/30 10:55:27 [debug] 6733#0: event timer add: -1: 10000:1254279337067
2009/09/30 10:55:27 [debug] 6733#0: posted events 0000000000000000
2009/09/30 10:55:27 [debug] 6733#0: epoll timer: 10000
2009/09/30 10:55:37 [debug] 6733#0: timer delta: 10000
2009/09/30 10:55:37 [debug] 6733#0: event timer del: -1: 1254279337067
2009/09/30 10:55:37 [debug] 6733#0: http file cache expire
2009/09/30 10:55:37 [debug] 6733#0: malloc: 000000000BB9B890:62
2009/09/30 10:55:37 [debug] 6733#0: http file cache size: 1
2009/09/30 10:55:37 [debug] 6733#0: event timer add: -1: 10000:1254279347067
2009/09/30 10:55:37 [debug] 6733#0: posted events 0000000000000000
2009/09/30 10:55:37 [debug] 6733#0: epoll timer: 10000
2009/09/30 10:55:47 [debug] 6733#0: timer delta: 10001
2009/09/30 10:55:47 [debug] 6733#0: event timer del: -1: 1254279347067
2009/09/30 10:55:47 [debug] 6733#0: http file cache expire
2009/09/30 10:55:47 [debug] 6733#0: malloc: 000000000BB9B890:62
2009/09/30 10:55:47 [debug] 6733#0: http file cache size: 1
2009/09/30 10:55:47 [debug] 6733#0: event timer add: -1: 10000:1254279357068
2009/09/30 10:55:47 [debug] 6733#0: posted events 0000000000000000
2009/09/30 10:55:47 [debug] 6733#0: epoll timer: 10000
2009/09/30 10:55:57 [debug] 6733#0: timer delta: 10001
2009/09/30 10:55:57 [debug] 6733#0: event timer del: -1: 1254279357068
2009/09/30 10:55:57 [debug] 6733#0: http file cache expire
2009/09/30 10:55:57 [debug] 6733#0: malloc: 000000000BB9B890:62
2009/09/30 10:55:57 [debug] 6733#0: http file cache size: 1
2009/09/30 10:55:57 [debug] 6733#0: event timer add: -1: 10000:1254279367069
2009/09/30 10:55:57 [debug] 6733#0: posted events 0000000000000000
2009/09/30 10:55:57 [debug] 6733#0: epoll timer: 10000
2009/09/30 10:56:07 [debug] 6733#0: timer delta: 10000
2009/09/30 10:56:07 [debug] 6733#0: event timer del: -1: 1254279367069
2009/09/30 10:56:07 [debug] 6733#0: http file cache expire
2009/09/30 10:56:07 [debug] 6733#0: malloc: 000000000BB9B890:62
2009/09/30 10:56:07 [debug] 6733#0: http file cache size: 1
2009/09/30 10:56:07 [debug] 6733#0: event timer add: -1: 10000:1254279377069
2009/09/30 10:56:07 [debug] 6733#0: posted events 0000000000000000
2009/09/30 10:56:07 [debug] 6733#0: epoll timer: 10000
2009/09/30 10:56:17 [debug] 6733#0: timer delta: 10001
2009/09/30 10:56:17 [debug] 6733#0: event timer del: -1: 1254279377069
2009/09/30 10:56:17 [debug] 6733#0: http file cache expire
2009/09/30 10:56:17 [debug] 6733#0: malloc: 000000000BB9B890:62
2009/09/30 10:56:17 [debug] 6733#0: http file cache size: 1
2009/09/30 10:56:17 [debug] 6733#0: event timer add: -1: 10000:1254279387070
2009/09/30 10:56:17 [debug] 6733#0: posted events 0000000000000000
2009/09/30 10:56:17 [debug] 6733#0: epoll timer: 10000

and -V output:
--with-http_ssl_module --with-md5-asm --with-sha1-asm --with-http_xslt_module --add-module=/home/nginx_uploadprogress_module --with-debug

On Tue, Sep 29, 2009 at 10:48 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
Hello!

On Tue, Sep 29, 2009 at 10:08:51PM +0800, tOmasEn wrote:

> I've been experiencing very slow page loads when using nginx as a frontend
> (with proxy_pass) for a while.
>
> After some testing and debugging, I found that it always times out on
> responses for 0-byte files.
>
> So I think there might be a bug when nginx runs in proxy mode and serves
> 0-byte files. The frontend considers there should be more data and waits
> until timeout, or something like this.

Could you please provide nginx -V output and debug log?

Maxim Dounin

>
> Btw. Nginx is great. Thanks
>
> tomasen
>
> --
> Sent from my mobile device
>


Re: "bus error" on Linux Sparc

Index: src/core/nginx.c
===================================================================
--- src/core/nginx.c (revision 2494)
+++ src/core/nginx.c (working copy)
@@ -280,6 +280,9 @@
init_cycle.log = log;
ngx_cycle = &init_cycle;

+ /* dummy pagesize to create aligned pool */
+ ngx_pagesize = 1024;
+
init_cycle.pool = ngx_create_pool(1024, log);
if (init_cycle.pool == NULL) {
return 1;
Index: src/core/ngx_palloc.c
===================================================================
--- src/core/ngx_palloc.c (revision 2494)
+++ src/core/ngx_palloc.c (working copy)
@@ -17,7 +17,7 @@
{
ngx_pool_t *p;

- p = ngx_alloc(size, log);
+ p = ngx_memalign(ngx_pagesize, size, log);
if (p == NULL) {
return NULL;
}
On Tue, Sep 29, 2009 at 12:05:37PM +0400, Igor Sysoev wrote:

> On Sat, Sep 26, 2009 at 08:38:36PM -0400, marcusramberg wrote:
>
> > Hey
> >
> > Did you get anywhere with this issue? I am experiencing it as well with a new web node I'm trying to set up for iusethis.com, on a Sun T1000 running Debian.
>
> The bug happens some time before the "bus error" occurs.
> It's not easy to find the cause from a gdb backtrace in this case.
> If anyone can give me access to a Sparc Debian box where this error can
> be reproduced, I will fix it much more quickly.

The attached patch should fix the bug.


--
Igor Sysoev
http://sysoev.ru/en/
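The patch above replaces ngx_alloc() with ngx_memalign() for pool allocations, since SPARC raises SIGBUS on misaligned memory access. A minimal standalone sketch of the same idea using POSIX posix_memalign() — the function name is invented for illustration, and this is not nginx code:

```c
#include <stdlib.h>
#include <stdint.h>

/* Allocate size bytes aligned to the given power-of-two boundary,
 * mirroring what ngx_memalign() does on platforms that provide
 * posix_memalign(). Returns NULL on failure, like ngx_alloc(). */
static void *aligned_alloc_sketch(size_t alignment, size_t size)
{
    void *p;

    if (posix_memalign(&p, alignment, size) != 0) {
        return NULL;
    }

    return p;
}
```

Allocating pools on a page-aligned boundary guarantees that structures placed at the start of the pool satisfy the strict alignment rules SPARC enforces, which is why the patch also sets a dummy ngx_pagesize before the first pool is created.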

Re: upstream timeouts

Never mind this question.

I noticed that we had a "proxy_next_upstream timeout" line in the config file which was causing nginx to not resend the request to the next upstream when one upstream was completely down!

-M



From: Mohammad Kolahdouzan <mohammad_ysm@yahoo.com>
To: nginx@sysoev.ru
Sent: Monday, September 28, 2009 6:10:41 PM
Subject: upstream timeouts

I got a quick question.

If I define a couple of servers in an upstream, and one of them fails, does Nginx re-send the request(s) for which a response from that upstream was not received to the other server, or is that request doomed?

Thanks,
-M


Re: why nginx just use 511 for listen backlog

On Tue, Sep 29, 2009 at 11:32:32AM -0400, zhijianpeng wrote:

> I found that :
>
> #define NGX_LISTEN_BACKLOG 511
>
> and use it as the backlog of listen()
>
> ls.backlog = NGX_LISTEN_BACKLOG;
> if (listen(s, ls.backlog) == -1) {
>
> Does it mean that only 511 connections can be accepted at the same time?

No, it means that up to 511 connections can be queued in the kernel listen
queue. 511 is just a safe limit for most OSes. On FreeBSD it's -1,
i.e., the value of the sysctl kern.ipc.somaxconn.

> May I modify it to 1024 or higher (I am sure to use a value less than SOMAXCONN)?

listen 80 default backlog=1024;


--
Igor Sysoev
http://sysoev.ru/en/
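Igor's one-liner is the backlog parameter of the listen directive; a hedged sketch of where it goes (the port and server_name are placeholders):

```nginx
server {
    # Raise the listen queue for this socket above the compiled-in
    # NGX_LISTEN_BACKLOG (511). The kernel still caps the effective
    # value at its own somaxconn limit.
    listen 80 default backlog=1024;
    server_name example.com;
}
```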

why nginx just use 511 for listen backlog

I found that :

#define NGX_LISTEN_BACKLOG 511

and use it as the backlog of listen()

ls.backlog = NGX_LISTEN_BACKLOG;
if (listen(s, ls.backlog) == -1) {

Does it mean that only 511 connections can be accepted at the same time?
May I modify it to 1024 or higher (I am sure to use a value less than SOMAXCONN)?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,9959,9959#msg-9959

Re: Connection error

Thanks a lot for your reply !
But even when I increase the worker processes to 15, I still see the same trend.
And I am not using any PHP accelerator.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,9927,9951#msg-9951

Re: Connection error

Hello!

On Tue, Sep 29, 2009 at 10:59:23AM -0400, chaitanya wrote:

> I have to call one PHP page at a rate of around 500-1000 requests per second, all from a single IP. But I am getting strange output: my page works well 5 times, then it gives an error for the next 5 requests, and this trend continues.
>
> I am not getting what could be the cause of this strange error?

Looks like you have 5 php workers, and each of them is able to
process one request, but dies on the second. Probably some
php accelerator related issue.

Maxim Dounin

Re: Connection error

I have to call one PHP page at a rate of around 500-1000 requests per second, all from a single IP. But I am getting strange output: my page works well 5 times, then it gives an error for the next 5 requests, and this trend continues.

I am not sure what could be the cause of this strange error.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,9927,9946#msg-9946

Re: Connection error

Hello!

On Tue, Sep 29, 2009 at 10:21:57AM -0400, chaitanya wrote:

> Hi
>
> I am getting the following error with nginx + PHP FastCGI. I searched for this error on Google but did not find any useful link.
> Please advise what could be the cause of this error.
>
> 2009/09/29 10:01:34 2156#0: *120 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 10.0.1.101, server: localhost, request: "GET /helloworld.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "10.0.1.71"

Most likely your fastcgi application (php) has died.

Maxim Dounin

Re: (bug?)Timeout when proxy-pass 0 byte file

Hello!

On Tue, Sep 29, 2009 at 10:08:51PM +0800, tOmasEn wrote:

> I've been experiencing very slow page loads when using nginx as a frontend
> (with proxy_pass) for a while.
>
> After some testing and debugging, I found that it always times out on
> responses for 0-byte files.
>
> So I think there might be a bug when nginx runs in proxy mode and serves
> 0-byte files. The frontend considers there should be more data and waits
> until timeout, or something like this.

Could you please provide nginx -V output and debug log?

Maxim Dounin

>
> Btw. Nginx is great. Thanks
>
> tomasen
>
> --
> Sent from my mobile device
>

Connection error

Hi

I am getting the following error with nginx + PHP FastCGI. I searched for this error on Google but did not find any useful link.
Please advise what could be the cause of this error.

2009/09/29 10:01:34 2156#0: *120 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 10.0.1.101, server: localhost, request: "GET /helloworld.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "10.0.1.71"

Thanks a lot !

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,9927,9927#msg-9927

(bug?)Timeout when proxy-pass 0 byte file

I've been experiencing very slow page loads when using nginx as a frontend
(with proxy_pass) for a while.

After some testing and debugging, I found that it always times out on
responses for 0-byte files.

So I think there might be a bug when nginx runs in proxy mode and serves
0-byte files. The frontend considers there should be more data and waits
until timeout, or something like this.

Btw. Nginx is great. Thanks

tomasen

--
Sent from my mobile device

Re: shared memory zone "media" conflicts with already declared size 0

Hello!

On Tue, Sep 29, 2009 at 04:05:23PM +0300, Gena Makhomed wrote:

> On Tuesday, September 29, 2009 at 13:32:10, Maxim Dounin wrote:
>
> MD> Globs can be used *only* for completely independent files
> MD> (e.g. containing definitions of different server{} blocks).
>
> maybe the config test should generate an "undefined behavior" warning
> if a wildcard include does not contain completely independent files?

You never know whether included chunks are dependent or not. E.g. the
following lines are completely independent, given that $var is defined
somewhere before (assume each line is in its own include):

rewrite ^/blah$ /$var/blah;
rewrite ^/oops$ /$var/oops;

while these aren't:

rewrite ^/blah$ /$var/blah;
set $var "something";
rewrite ^/oops$ /$var/oops;

Not even talking about more complex cases.

Maxim Dounin

Re: shared memory zone "media" conflicts with already declared size 0

On Tue, Sep 29, 2009 at 03:15:21PM +0400, Maxim Dounin wrote:

> Hello!
>
> On Tue, Sep 29, 2009 at 01:11:17PM +0400, Igor Sysoev wrote:
>
> > On Tue, Sep 29, 2009 at 10:57:02AM +0200, Tomasz Pajor wrote:
> >
> > > >
> > > >> what seems to be the problem?
> > > >>
> > > >> [emerg]: the size 52428800 of shared memory zone "media" conflicts with
> > > >> already declared size 0 in /etc/nginx/conf.d/cache.conf:5
> > > >> configuration file /etc/nginx/nginx.conf test failed
> > > >>
> > > >
> > > > This may be caused if "proxy_cache media" is included before proxy_cache_path.
> > > >
> > > this is true,
> > > in /etc/nginx/conf.d i have two files, cache.conf and ssl.conf, and what
> > > it seems the ssl.conf is loaded first, shouldn't the files be loaded in
> > > alphabetical order?
> >
> > Currently, files are unsorted.
> > Probably, it should be changed to alphabetical order, but I'm not sure.
>
> BTW, have you seen any configurations that may seriously suffer
> from changing this to alphabetical order?
>
> As far as I understand performance drop will be noticeable
> somewhere near 10k+ includes, while serious problems are unlikely
> to happen before something like 100k+...

The single stopper now is Win32: FindFirstFile/FindNextFile may return
unordered files.


--
Igor Sysoev
http://sysoev.ru/en/

Re: shared memory zone "media" conflicts with already declared size 0

On Tuesday, September 29, 2009 at 13:32:10, Maxim Dounin wrote:

MD> Globs can be used *only* for completely independent files
MD> (e.g. containing definitions of different server{} blocks).

maybe the config test should generate an "undefined behavior" warning
if a wildcard include does not contain completely independent files?

maybe only with a -w command-line switch,
an analogue of the Perl "use warnings;" pragma:

$ perl -h | grep -- -w
-w enable many useful warnings (RECOMMENDED)

--
Best regards,
Gena

Re: shared memory zone "media" conflicts with already declared size 0

Hello!

On Tue, Sep 29, 2009 at 01:11:17PM +0400, Igor Sysoev wrote:

> On Tue, Sep 29, 2009 at 10:57:02AM +0200, Tomasz Pajor wrote:
>
> > >
> > >> what seems to be the problem?
> > >>
> > >> [emerg]: the size 52428800 of shared memory zone "media" conflicts with
> > >> already declared size 0 in /etc/nginx/conf.d/cache.conf:5
> > >> configuration file /etc/nginx/nginx.conf test failed
> > >>
> > >
> > > This may be caused if "proxy_cache media" is included before proxy_cache_path.
> > >
> > this is true,
> > in /etc/nginx/conf.d i have two files, cache.conf and ssl.conf, and what
> > it seems the ssl.conf is loaded first, shouldn't the files be loaded in
> > alphabetical order?
>
> Currently, files are unsorted.
> Probably, it should be changed to alphabetical order, but I'm not sure.

BTW, have you seen any configurations that may seriously suffer
from changing this to alphabetical order?

As far as I understand, a performance drop will be noticeable
somewhere near 10k+ includes, while serious problems are unlikely
to happen before something like 100k+...

Maxim Dounin

Re: shared memory zone "media" conflicts with already declared size 0

Maxim Dounin wrote:
> Hello!
>
> On Tue, Sep 29, 2009 at 11:41:15AM +0200, Tomasz Pajor wrote:
>
>
>>>>>> what seems to be the problem?
>>>>>>
>>>>>> [emerg]: the size 52428800 of shared memory zone "media"
>>>>>> conflicts with already declared size 0 in
>>>>>> /etc/nginx/conf.d/cache.conf:5
>>>>>> configuration file /etc/nginx/nginx.conf test failed
>>>>>>
>>>>> This may be caused if "proxy_cache media" is included before proxy_cache_path.
>>>>>
>>>> this is true,
>>>> in /etc/nginx/conf.d i have two files, cache.conf and ssl.conf,
>>>> and what it seems the ssl.conf is loaded first, shouldn't the
>>>> files be loaded in alphabetical order?
>>>>
>>> Currently, files are unsorted.
>>> Probably, it should be changed to alphabetical order, but I'm not sure
>>>
>> I don't know; nevertheless, the desired order can be achieved by
>> adding digits in front of the file names: 001-cache.conf and
>> 002-ssl.conf fixed my issue.
>>
>
> No, the desired order can't be achieved by adding digits. When using
> globbed includes nginx loads them without any sorting, i.e. in the
> order the filesystem returns them.
>
> You happened to get correct order after your renames, but things
> may again go wild at any moment (most likely when you'll touch
> something in this directory).
>
> Currently the only solution is to include dependent files
> explicitly. Globs can be used *only* for completely independent
> files (e.g. containing definitions of different server{} blocks)
So maybe it is a good idea to add alphabetical ordering for wildcard includes.

Re: shared memory zone "media" conflicts with already declared size 0

Hello!

On Tue, Sep 29, 2009 at 11:41:15AM +0200, Tomasz Pajor wrote:

>
> >>>>what seems to be the problem?
> >>>>
> >>>>[emerg]: the size 52428800 of shared memory zone "media"
> >>>>conflicts with already declared size 0 in
> >>>>/etc/nginx/conf.d/cache.conf:5
> >>>>configuration file /etc/nginx/nginx.conf test failed
> >>>This may be caused if "proxy_cache media" is included before proxy_cache_path.
> >>this is true,
> >>in /etc/nginx/conf.d i have two files, cache.conf and ssl.conf,
> >>and what it seems the ssl.conf is loaded first, shouldn't the
> >>files be loaded in alphabetical order?
> >
> >Currently, files are unsorted.
> >Probably, it should be changed to alphabetical order, but I'm not sure
> I don't know; nevertheless, the desired order can be achieved by
> adding digits in front of the file names: 001-cache.conf and
> 002-ssl.conf fixed my issue.

No, the desired order can't be achieved by adding digits. When using
globbed includes nginx loads them without any sorting, i.e. in the
order the filesystem returns them.

You happened to get correct order after your renames, but things
may again go wild at any moment (most likely when you'll touch
something in this directory).

Currently the only solution is to include dependent files
explicitly. Globs can be used *only* for completely independent
files (e.g. containing definitions of different server{} blocks).
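A sketch of the explicit-include workaround Maxim describes, using the file names from this thread (the sites/ glob path is an invented example): the file declaring the cache zone is pulled in before any file that references it, instead of relying on glob order:

```nginx
http {
    # Explicit includes load in a guaranteed order: the "media" zone in
    # cache.conf is declared before ssl.conf uses "proxy_cache media".
    include /etc/nginx/conf.d/cache.conf;
    include /etc/nginx/conf.d/ssl.conf;

    # A glob stays safe only for fully independent files, e.g. one
    # self-contained server{} block per file.
    include /etc/nginx/sites/*.conf;
}
```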

Maxim Dounin

Re: [Patch] nginx to use libatomic_ops

2009/9/29 Igor Sysoev <is@rambler-co.ru>:
> On Fri, Sep 25, 2009 at 09:46:33PM +0200, W-Mark Kubacki wrote:
>
>>
>> I have experienced SEGFAULTs on ARM using fastcgi and discovered it
>> compiles with "NGX_HAVE_ATOMIC_OPS 0" on 'other' architectures than
>> x86, amd64, sparc and the such defined in src/os/unix/ngx_atomic.h.
>>
>> Therefore I'd like to contribute the patch linked below [1], which
>> introduces configure option "--with-libatomic" [...]
>
> Thank you for the patch and the information about gcc 4.1.
> I'm going to add the gcc builtins, since they are slightly smaller than
> my code, at least on x86.
> I'm not sure whether libatomic should be added, since gcc 4.1+ is a
> common compiler these days.
> Could you show a backtrace of the segfault? I could not reproduce it
> on x86 with atomic ops disabled.

The segfaults happened on the ARM architecture, not x86.
After eliminating the lock file by introducing atomic ops, the only
code no longer called was the emulation at the bottom of
ngx_atomic.h.

GCC builtins work for me too, but they make nginx link against a
specific libgcc (that of the particular GCC version) and yield more
overhead (2 KB for libatomic_ops without any linking vs. 13 KB for the
GCC builtins). Finally, libatomic_ops can be compiled by other
compilers (MSVC, ICC, Sun C, for example), so I prefer it for portability.
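The gcc builtins being compared here are the __sync_* family available since gcc 4.1; a minimal standalone sketch (the function names are invented for illustration — this is neither nginx nor libatomic_ops code):

```c
/* Atomic add and compare-and-swap via the gcc >= 4.1 builtins. */
static long counter = 0;

static long counter_increment(void)
{
    /* Atomically adds 1 and returns the new value. */
    return __sync_add_and_fetch(&counter, 1);
}

static int counter_cas(long old_val, long new_val)
{
    /* Returns nonzero if counter held old_val and the swap happened. */
    return __sync_bool_compare_and_swap(&counter, old_val, new_val);
}
```

Because these compile to the architecture's native atomic instructions, they replace the lock-file fallback on platforms the #ifs in ngx_atomic.h do not cover.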

What do you think about the compromise of using GCC builtins when
--with-libatomic is not set?
Only architectures not covered by the #ifs would be affected, and you
could remove the lock-file code entirely in the future.

--
W-Mark Kubacki
http://mark.ossdl.de/

Re: shared memory zone "media" conflicts with already declared size 0

On Tue, Sep 29, 2009 at 11:41:15AM +0200, Tomasz Pajor wrote:

>
> >>>> what seems to be the problem?
> >>>>
> >>>> [emerg]: the size 52428800 of shared memory zone "media" conflicts with
> >>>> already declared size 0 in /etc/nginx/conf.d/cache.conf:5
> >>>> configuration file /etc/nginx/nginx.conf test failed
> >>>>
> >>>>
> >>> This may be caused if "proxy_cache media" is included before proxy_cache_path.
> >>>
> >>>
> >> this is true,
> >> in /etc/nginx/conf.d i have two files, cache.conf and ssl.conf, and what
> >> it seems the ssl.conf is loaded first, shouldn't the files be loaded in
> >> alphabetical order?
> >>
> >
> > Currently, files are unsorted.
> > Probably, it should be changed to alphabetical order, but I'm not sure
> I don't know; nevertheless, the desired order can be achieved by adding
> digits in front of the file names: 001-cache.conf and 002-ssl.conf fixed my
> issue.

Probably, names were reordered in the directory while renaming.


--
Igor Sysoev
http://sysoev.ru/en/

Re: shared memory zone "media" conflicts with already declared size 0

>>>> what seems to be the problem?
>>>>
>>>> [emerg]: the size 52428800 of shared memory zone "media" conflicts with
>>>> already declared size 0 in /etc/nginx/conf.d/cache.conf:5
>>>> configuration file /etc/nginx/nginx.conf test failed
>>>>
>>>>
>>> This may be caused if "proxy_cache media" is included before proxy_cache_path.
>>>
>>>
>> this is true,
>> in /etc/nginx/conf.d i have two files, cache.conf and ssl.conf, and what
>> it seems the ssl.conf is loaded first, shouldn't the files be loaded in
>> alphabetical order?
>>
>
> Currently, files are unsorted.
> Probably, it should be changed to alphabetical order, but I'm not sure
I don't know; nevertheless, the desired order can be achieved by adding
digits in front of the file names: 001-cache.conf and 002-ssl.conf fixed my
issue.

Re: Serving Static Files

Hi Igor,

I hope you find some time to check the configurations I've posted and let me know of possible solutions to my problem.

Thank you so much.

--
Bernard

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,2700,9838#msg-9838

Re: shared memory zone "media" conflicts with already declared size 0

On Tue, Sep 29, 2009 at 10:57:02AM +0200, Tomasz Pajor wrote:

> >
> >> what seems to be the problem?
> >>
> >> [emerg]: the size 52428800 of shared memory zone "media" conflicts with
> >> already declared size 0 in /etc/nginx/conf.d/cache.conf:5
> >> configuration file /etc/nginx/nginx.conf test failed
> >>
> >
> > This may be caused if "proxy_cache media" is included before proxy_cache_path.
> >
> this is true;
> in /etc/nginx/conf.d I have two files, cache.conf and ssl.conf, and it
> seems that ssl.conf is loaded first. Shouldn't the files be loaded in
> alphabetical order?

Currently, files are unsorted.
Probably, it should be changed to alphabetical order, but I'm not sure.


--
Igor Sysoev
http://sysoev.ru/en/

Re: shared memory zone "media" conflicts with already declared size 0

>
>> what seems to be the problem?
>>
>> [emerg]: the size 52428800 of shared memory zone "media" conflicts with
>> already declared size 0 in /etc/nginx/conf.d/cache.conf:5
>> configuration file /etc/nginx/nginx.conf test failed
>>
>
> This may be caused if "proxy_cache media" is included before proxy_cache_path.
>
this is true;
in /etc/nginx/conf.d I have two files, cache.conf and ssl.conf, and it
seems that ssl.conf is loaded first. Shouldn't the files be loaded in
alphabetical order?

Re: [Patch] nginx to use libatomic_ops

On Fri, Sep 25, 2009 at 09:46:33PM +0200, W-Mark Kubacki wrote:

> Dear developers,
>
> I have experienced SEGFAULTs on ARM using fastcgi and discovered it
> compiles with "NGX_HAVE_ATOMIC_OPS 0" on 'other' architectures than
> x86, amd64, sparc and the such defined in src/os/unix/ngx_atomic.h.
>
> Therefore I'd like to contribute the patch linked below [1], which
> introduces configure option "--with-libatomic" and which makes nginx
> use atomic operations of that library on these 'other' architectures.
> For more information on the library please see [2]. (Indeed, this does
> not result in additional runtime dependencies and the atomic ops
> compile to less code than those of newer GCC versions [3].)
>
> --with-libatomic compiles on ARM, does no SEGFAULT, and yields higher
> requests per second than with the otherwise used lock file.
>
> The option is disabled by default, and even if enabled still used as
> last resort. I hope that after a brief review (I have little
> experience with configure scripts) you can integrate it in the next
> release.
>
> --
> W-Mark Kubacki
> http://mark.ossdl.de/
>
> [1] https://svn.hurrikane.de/all/ossdl/www-servers/nginx/files/nginx-0.8.16-libatomic.patch
> [2] http://www.hpl.hp.com/research/linux/atomic_ops/
> http://bdwgc.cvs.sourceforge.net/viewvc/bdwgc/bdwgc/libatomic_ops-1.2/
> [3] http://gcc.gnu.org/onlinedocs/gcc-4.1.0/gcc/Atomic-Builtins.html
> for gcc 4.1.0 and later

Thank you for the patch and the information about gcc 4.1.
I'm going to add the gcc builtins, since they are slightly smaller than
my code, at least on x86.
I'm not sure whether libatomic should be added, since gcc 4.1+ is a
common compiler these days.
Could you show a backtrace of the segfault? I could not reproduce it
on x86 with atomic ops disabled.


--
Igor Sysoev
http://sysoev.ru/en/

Re: "bus error" on Linux Sparc

On Sat, Sep 26, 2009 at 08:38:36PM -0400, marcusramberg wrote:

> Hey
>
> Did you get anywhere with this issue? I am experiencing it as well with a new web node I'm trying to set up for iusethis.com, on a Sun T1000 running Debian.

The bug happens some time before the "bus error" occurs.
It's not easy to find the cause from a gdb backtrace in this case.
If anyone can give me access to a Sparc Debian box where this error can
be reproduced, I will fix it much more quickly.


--
Igor Sysoev
http://sysoev.ru/en/

Re: limit_req truncating responses

On Tue, Sep 29, 2009 at 11:37:43AM +0400, Maxim Dounin wrote:

> Hello!
>
> On Tue, Sep 29, 2009 at 12:11:02AM -0400, brianf wrote:
>
> > Just a follow-up on some more observed behavior:
> >
> > It seems that if I set the burst number low enough to trigger 503s with my tests, the system continuously returns 503 until I restart nginx. Is this another known bug?
>
> The current implementation counts requests against the rate limit even
> if it returns 503. So if you have
>
> limit_req_zone ... rate=1r/s;
> limit_req burst=10;
>
> and did 100 requests at once (90 of which returned 503), you
> have to wait 100 seconds until the next request is allowed.
>
> And yes, it's a known issue.

I'm going to fix this in 0.8.18 by implementing a proper leaky bucket algorithm.


--
Igor Sysoev
http://sysoev.ru/en/

Re: limit_req truncating responses

Hello!

On Tue, Sep 29, 2009 at 12:11:02AM -0400, brianf wrote:

> Just a follow-up on some more observed behavior:
>
> It seems that if I set the burst number low enough to trigger 503s with my tests, the system continuously returns 503 until I restart nginx. Is this another known bug?

The current implementation counts requests against the rate limit
even if it returns 503. So if you have

limit_req_zone ... rate=1r/s;
limit_req burst=10;

and did 100 requests at a time (90 of which returned 503), you
have to wait 100 seconds until the next request is allowed.

And yes, it's a known issue.

Maxim Dounin

2009年9月28日星期一

header http status code

dear all
      I want to set the http status code using the if directive, not the error_page directive!
      I want to know what header variable to use to get the http status code:
      if ($header_something ~* '404') {
          return 402;
      }
      What is the $header_something?




thanks



Re: limit_req truncating responses

Just a follow up to some more observed behavior:

It seems that if I set the burst number low enough to trigger 503's with my tests, the system continuously returns 503 until I restart nginx. Is this another known bug?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,9556,9810#msg-9810

upstream timeouts

I got a quick question.

If I define a couple of servers in an upstream, and one of them fails, does Nginx re-send the request(s) for which a response from that upstream was not received to the other server, or is that request doomed?

Thanks,
-M

Re: Why named shared memory zones

Maxim Dounin wrote:
> Speaking particularly about proxy_cache_path:
>
> There were at least 2 changes in its arguments since introduction.
> With the current syntax they were hardly even noticeable. With the
> proposed "short" notation they would likely require rewriting each
> and every config, or introducing another directive (with
> proxy_cache_path left as legacy).
>
Well, actually I'd say that rewrites of configurations would in general
be required no more often than if you use key/value pairs, with one
notable exception. If you're adding options, then you don't need to
change the order: just add the new options onto the end. If you drop an
option, then you'd need to change the config anyway. The only real
problem arises if an option were dropped, and the ordering of
the options were such that an old configuration could be interpreted as
a new configuration without errors, but with unintended behaviour. I
think this case would be rare, though.
> Maxim Dounin
>
> p.s. I understand that you want to do things better. But
> suggested change is bad. Really.
>
Maybe. I totally understand where you're coming from, and I'm torn
between the two options, but I still feel the shorter syntax is nicer.

Of course the advantage of OS software is that I can go and write a
patch if I want to do things differently.

Thanks Igor and Maxim for taking the time to share your views on this.

Marcus.

Re: Why named shared memory zones

Hello!

On Mon, Sep 28, 2009 at 11:39:49PM +0300, Marcus Clyne wrote:

>
> >I do not think that short syntax is good, suppose:
> >proxy_cache_path /data/nginx/cache 1:2 ONE 10m 20m 50m;
> Sure, I can see why it could be confusing, but that's why I'm
> suggesting both as an option, leaving it up to whoever manages the
> server to decide whether they want the longer or shorter syntax. I
> think there will be others besides myself who would prefer a
> shorter syntax as an option.
>
> You could also write the above as
>
> proxy_cache_path /data/nginx/cache 1:2 ONE 10M 20m 50M
>
> which helps to differentiate between the time and keys zone size, or
> you could have
>
> proxy_cache_path /data/nginx/cache 1:2 ONE:10M 20m 50M
>
> to link the shm name with size, which might make it easier to
> remember the order if you have to.

Using more than two or three positional arguments is error-prone
and just silly. It's hard to remember their intended order, it's
impossible to change the order and/or remove one of them if needed
later for some reason, you may add new arguments only at the end,
and there is no good way to provide defaults. In this particular case
it will be impossible to detect some errors in argument order,
too (as time and size declarations may be identical).

Having two possible notations allowed will make things even worse, as
you'll have to a) remember both as an administrator to be able to read
configs, and b) distinguish them reliably in the parsing code.

Speaking particularly about proxy_cache_path:

There were at least 2 changes in its arguments since introduction.
With the current syntax they were hardly even noticeable. With the
proposed "short" notation they would likely require rewriting each
and every config, or introducing another directive (with
proxy_cache_path left as legacy).

Maxim Dounin

p.s. I understand that you want to do things better. But
suggested change is bad. Really.

Proxy_Cache Locking or Waiting method?

Hi Igor,

I hope you are well. I just wanted to know if you had a time frame as to
when we may start to see backend locking for proxy_cache.

I have an example where nginx will be caching large files, and sometimes
a customer will use an HTTP monitoring service to check if a file works,
and that may generate 40+ hits to the file all at the same time, causing
nginx to proxy and write the file 40 times to the disk.

Is there a way to either:

A) Cause nginx to pause connections for subsequent requests for the same
file, until the file is in cache from the first connection. Obviously
this isn't very clean, as it can cause connections to pile up.

-or-

B) Prevent nginx from writing the duplicate files to the hard disk. So if
nginx sees that the file is not in cache, but is currently being fetched
from the backend, then it can proxy the connection from the backend
still, but just not write the file to the tmp directory, etc. This would
be the ideal method I believe.

This is a huge issue when using SSDs, as there are a large number of
wasted write cycles, which of course causes other issues, like the page
cache filling up with files that will just be deleted. Plus, if you
are proxying a 1GB file, then in the example above you'll need 40GB of
space available on the tmp volume to hold the 40 incoming files, just
to have 39 of them deleted.

Thanks,

John

Re: Why named shared memory zones

> I do not think that short syntax is good, suppose:
> proxy_cache_path /data/nginx/cache 1:2 ONE 10m 20m 50m;
Sure, I can see why it could be confusing, but that's why I'm suggesting
both as an option, leaving it up to whoever manages the server to
decide whether they want the longer or shorter syntax. I think there
will be others besides myself who would prefer a shorter syntax as an
option.

You could also write the above as

proxy_cache_path /data/nginx/cache 1:2 ONE 10M 20m 50M

which helps to differentiate between the time and keys zone size, or you
could have

proxy_cache_path /data/nginx/cache 1:2 ONE:10M 20m 50M

to link the shm name with size, which might make it easier to remember
the order if you have to.


Marcus.

Re: Writing a cache module

On 9/28/09 3:06 PM, "Brian Akins" <Brian.Akins@turner.com> wrote:

> I need to write a cache module that's not upstream based. (Long story...)

Actually maybe someone else has some pointers so I may not have to start
from scratch.

Basically, I need to replicate how Apache's mod_cache works (but faster, of
course - that's why I'm playing with nginx).

What it does is run the handler really early in the response. On a cache
hit, it short-circuits the filter stack (skipping gzip, ssi, etc.) and
serves to the client.

It stores things in cache after most of the filter stack (after ssi, gzip,
etc.).

It uses Vary. I think in nginx, for my purposes at least, I could just use
configuration to control the behavior of the cache key.

Here's a talk from ApacheCon I did a few years ago to give a hint at what
I'm wanting to do: http://www.akins.org/files/apachecon-cnn-good.pdf

I'm pretty sure I can get equal, and actually better, functionality from
nginx, but just wondering if someone else had attempted it.
--
Brian Akins

Re: Writing a cache module

On Mon, Sep 28, 2009 at 03:06:25PM -0400, Akins, Brian wrote:

> I need to write a cache module that's not upstream based. (Long story...)
>
> Reading through the upstream code and the proxy and fastcgi modules, it
> looks fairly simple.
>
> Just so I know I'm on the right track:
>
> - to get something from cache, I should generate a key for r->cache, try to
> open the file using ngx_http_file_cache_open, then "serve" it using
> ngx_http_cache_send (or replicate what it does).
>
> - to store something in cache, create an ngx_temp_file_t, store my data in
> there, generate a key, then call ngx_http_file_cache_update to add that file
> to cache.
>
> Is that reasonable?

Yes.

> Any gotchas?

Do not forget ngx_http_file_cache_free() for noncacheable responses.
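Putting the two steps and the gotcha together, the handler flow might be outlined roughly as follows. This is only a sketch against the nginx 0.8 file-cache internals, not standalone code: exact signatures vary between versions, and key setup and the temp file (tf) handling are elided.

```c
/* Sketch only -- assumes the nginx source tree; not compilable alone. */

rc = ngx_http_file_cache_open(r);   /* after filling r->cache keys */

if (rc == NGX_OK) {
    /* cache hit: serve straight from the cached file */
    return ngx_http_cache_send(r);
}

/* cache miss: write the response into a ngx_temp_file_t (tf), then: */

if (cacheable) {
    ngx_http_file_cache_update(r, tf);      /* move temp file into cache */
} else {
    ngx_http_file_cache_free(r->cache, tf); /* release the node, or the
                                               cache entry leaks */
}
```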


--
Igor Sysoev
http://sysoev.ru/en/

Re: Why named shared memory zones

On Mon, Sep 28, 2009 at 07:51:38PM +0300, Marcus Clyne wrote:

> Igor Sysoev wrote:
> > In my opinion
> >
> > proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=ONE:10m
> > inactive=1h max_size=1g;
> >
> > is clearer and more expansible: you just need to add new keywords.
> >
> I see your point, I just personally prefer the shorter syntax, and if
> you are familiar with the syntax, I think it's quicker to read
>
> proxy_cache_path /data/nginx/cache 1:2 ONE 10m 1h 1g;
>
> How about offering both? You could have a syntax that included the
> key/value pairs and one that had the shorter syntax with a specific
> ordering. That would mean that admins could choose the version they prefer.
>
> Just an idea.

I do not think that short syntax is good, suppose:
proxy_cache_path /data/nginx/cache 1:2 ONE 10m 20m 50m;


--
Igor Sysoev
http://sysoev.ru/en/

Writing a cache module

I need to write a cache module that's not upstream based. (Long story...)

Reading through the upstream code and the proxy and fastcgi modules, it
looks fairly simple.

Just so I know I'm on the right track:

- to get something from cache, I should generate a key for r->cache, try to
open the file using ngx_http_file_cache_open, then "serve" it using
ngx_http_cache_send (or replicate what it does).

- to store something in cache, create an ngx_temp_file_t, store my data in
there, generate a key, then call ngx_http_file_cache_update to add that file
to cache.

Is that reasonable? Any gotchas?

--
Brian Akins

Re: shared memory zone "media" conflicts with already declared size 0

On Mon, Sep 28, 2009 at 06:15:50PM +0200, Tomasz Pajor wrote:

> what seems to be the problem?
>
> [emerg]: the size 52428800 of shared memory zone "media" conflicts with
> already declared size 0 in /etc/nginx/conf.d/cache.conf:5
> configuration file /etc/nginx/nginx.conf test failed

This may happen if "proxy_cache media" is included before proxy_cache_path.
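One way to guarantee that ordering when the configuration is split across files is to declare the zone at the http level before the includes that reference it; a sketch using the paths from the config below:

```nginx
http {
    # declare the zone first, before anything references it...
    proxy_cache_path /mnt/media levels=1:2 keys_zone=media:50m max_size=5000m;

    # ...so files pulled in here can safely say "proxy_cache media"
    include /etc/nginx/conf.d/*.conf;
}
```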

> my configuration below
>
> # nginx.conf
> user nobody nogroup;
> worker_processes 5;
> error_log /var/log/nginx/error.log;
> pid /var/run/nginx/nginx.pid;
> worker_rlimit_nofile 8192;
>
> events {
> use epoll;
> worker_connections 4096;
> }
>
> http {
> include base/std;
> include base/log_format;
> include base/gzip;
> include base/proxy_conf_real_ip;
>
> include /etc/nginx/conf.d/*.conf;
> }
>
> # base/std
> include base/mime.types;
> default_type application/octet-stream;
>
> sendfile on;
>
> tcp_nopush on;
> tcp_nodelay off;
>
> # base/log_format
> log_format main '$remote_addr - $remote_user [$time_local] '
> '"$request" $status $body_bytes_sent "$http_referer" '
> '"$http_user_agent" "$http_x_forwarded_for"';
>
> # base/gzip
> gzip on;
> gzip_http_version 1.0;
> gzip_comp_level 2;
> gzip_proxied any;
> gzip_types text/plain text/css application/x-javascript text/xml
> application/xml application/xml+rss text/javascript;
>
> # base/proxy_conf_real_ip
> include base/proxy_conf
> proxy_set_header X-Real-IP $http_x_real_ip;
>
> # base/proxy_conf
> proxy_set_header Host $host;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> client_max_body_size 10m;
> client_body_buffer_size 128k;
> proxy_connect_timeout 90;
> proxy_send_timeout 90;
> proxy_read_timeout 90;
> proxy_buffers 32 4k;
>
> # conf.d/cache.conf
> client_body_temp_path /mnt/client_temp;
> proxy_temp_path /mnt/proxy_temp;
> fastcgi_temp_path /mnt/fastcgi_temp;
>
> proxy_cache_path /mnt/media levels=1:2 keys_zone=media:50m max_size=5000m;
>
> server {
> listen 80;
>
> location / {
> proxy_pass http://media.server:8080;
> include extra/cache;
> }
> }
>
> # conf.d/ssl.conf
> server {
> listen 443;
>
> ssl on;
> ssl_certificate ssl/ssl.crt;
> ssl_certificate_key ssl/ssl.key;
>
> ssl_session_timeout 5m;
> ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
>
> location / {
> proxy_pass https://media.server:4430;
> include extra/cache;
> }
> }
>
> # extra/cache
> proxy_cache media;
> proxy_cache_key $request_uri;
> proxy_cache_valid 200 28d;
> proxy_cache_valid 404 1m;
> proxy_cache_valid 500 2m;
> proxy_cache_use_stale error timeout invalid_header http_500 http_502
> http_503 http_504;
> proxy_ignore_headers Expires Cache-Control;
> FileETag on;
> expires max;

--
Igor Sysoev
http://sysoev.ru/en/

Re: Why named shared memory zones

Igor Sysoev wrote:
> In my opinion
>
> proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=ONE:10m
> inactive=1h max_size=1g;
>
> is clearer and more expansible: you just need to add new keywords.
>
I see your point, I just personally prefer the shorter syntax, and if
you are familiar with the syntax, I think it's quicker to read

proxy_cache_path /data/nginx/cache 1:2 ONE 10m 1h 1g;

How about offering both? You could have a syntax that included the
key/value pairs and one that had the shorter syntax with a specific
ordering. That would mean that admins could choose the version they prefer.

Just an idea.

Thanks,

Marcus.

Re: Why named shared memory zones

On Mon, Sep 28, 2009 at 07:04:27PM +0300, Marcus Clyne wrote:

> What about offering an alternative, clearer syntax? e.g.:
>
> proxy_cache_path /data/nginx/cache1 1:2 ONE 10m;

In my opinion

proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=ONE:10m
inactive=1h max_size=1g;

is clearer and more expansible: you just need to add new keywords.

Furthermore, this week I plan to refactor the limit_zone module and
to change the syntax of

limit_zone one $binary_remote_addr 10m;
to
limit_conn_zone $binary_remote_addr zone=one:10m;

on the analogy of
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;


--
Igor Sysoev
http://sysoev.ru/en/

shared memory zone "media" conflicts with already declared size 0

what seems to be the problem?

[emerg]: the size 52428800 of shared memory zone "media" conflicts with
already declared size 0 in /etc/nginx/conf.d/cache.conf:5
configuration file /etc/nginx/nginx.conf test failed

my configuration below

# nginx.conf
user nobody nogroup;
worker_processes 5;
error_log /var/log/nginx/error.log;
pid /var/run/nginx/nginx.pid;
worker_rlimit_nofile 8192;

events {
use epoll;
worker_connections 4096;
}

http {
include base/std;
include base/log_format;
include base/gzip;
include base/proxy_conf_real_ip;

include /etc/nginx/conf.d/*.conf;
}

# base/std
include base/mime.types;
default_type application/octet-stream;

sendfile on;

tcp_nopush on;
tcp_nodelay off;

# base/log_format
log_format main '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

# base/gzip
gzip on;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_proxied any;
gzip_types text/plain text/css application/x-javascript text/xml
application/xml application/xml+rss text/javascript;

# base/proxy_conf_real_ip
include base/proxy_conf
proxy_set_header X-Real-IP $http_x_real_ip;

# base/proxy_conf
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;

# conf.d/cache.conf
client_body_temp_path /mnt/client_temp;
proxy_temp_path /mnt/proxy_temp;
fastcgi_temp_path /mnt/fastcgi_temp;

proxy_cache_path /mnt/media levels=1:2 keys_zone=media:50m max_size=5000m;

server {
listen 80;

location / {
proxy_pass http://media.server:8080;
include extra/cache;
}
}

# conf.d/ssl.conf
server {
listen 443;

ssl on;
ssl_certificate ssl/ssl.crt;
ssl_certificate_key ssl/ssl.key;

ssl_session_timeout 5m;
ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;

location / {
proxy_pass https://media.server:4430;
include extra/cache;
}
}

# extra/cache
proxy_cache media;
proxy_cache_key $request_uri;
proxy_cache_valid 200 28d;
proxy_cache_valid 404 1m;
proxy_cache_valid 500 2m;
proxy_cache_use_stale error timeout invalid_header http_500 http_502
http_503 http_504;
proxy_ignore_headers Expires Cache-Control;
FileETag on;
expires max;

Re: Why named shared memory zones

Igor Sysoev wrote:
> On Mon, Sep 28, 2009 at 06:25:37PM +0300, Marcus Clyne wrote:
>
>
>> Hi,
>>
>> Igor Sysoev wrote:
>>
>>> On Mon, Sep 28, 2009 at 03:28:44PM +0300, Marcus Clyne wrote:
>>>
>>>
>>>
>>>> Hi,
>>>>
>>>> What's the purpose/benefit of naming shared memory zones in config files?
>>>>
>>>>
>>> Its names are used in other directives. For example, you may have
>>> several proxy_cache's.
>>>
>>>
>>>
>> I understand their use in other directives, but I was just wondering why
>> you actually need them in the directives.
>>
>> For example, if you define several proxy caches, then each one would
>> automatically use a different shared memory section. It seems
>> unnecessary to me to use names in the config file, since they'd always
>> be different, and from what I gather, if you use the same name (and tag)
>> for two shared memory sections, then you'll get a conf error (correct me
>> if I'm wrong, though).
>>
>> I feel that having the config like
>>
>> proxy_cache_path /data/nginx/cache levels=1:2 10m;
>>
>> or even
>>
>> proxy_cache_path /data/nginx/cache 1:2 10m;
>>
>> would be much neater than
>>
>> proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=one:10m;
>>
>> And overall wouldn't lose any information that couldn't be generated in
>> the background.
>>
>
> No, suppose the following:
>
> proxy_cache_path /data/nginx/cache1 levels=1:2 keys_zone=ONE:10m;
> proxy_cache_path /data/nginx/cache2 levels=1:2 keys_zone=TWO:10m;
>
> location / {
> proxy_cache ONE;
> }
>
> location /one/ {
> proxy_cache ONE;
> }
>
> location /two/ {
> proxy_cache TWO;
> }
>
>
I see.

What about offering an alternative, clearer syntax? e.g.:

proxy_cache_path /data/nginx/cache1 1:2 ONE 10m;

>>> The second reason is Win32 uses named shared memory mapping.
>>> However, it's almost impossible to use shared memory in Win32 due to
>>> Vista ASLR.
>>>
>>>
>> If you need to have named sections, it would be easy enough to generate
>> them sequentially (e.g. ngx_shms1, ngx_shms2...) whilst reading the
>> config file.
>>
>
> The zone name is also logged when a zone is out of space.
>
Ok, that's useful.

Thanks,

Marcus.