Saturday, October 31, 2009

Re: cache post-processor

Thanks Maxim.

Does this module work well with the Nginx cache?


--------------------------------------------------
From: "Maxim Dounin" <mdounin@mdounin.ru>
Sent: Saturday, October 31, 2009 8:02 PM
To: <nginx@sysoev.ru>
Subject: Re: cache post-processor

> Hello!
>
> On Sat, Oct 31, 2009 at 02:41:03PM +0800, 冉兵 wrote:
>
>> Hi,
>>
>> I'm wondering how this can be done:
>>
>> I'd like to take advantage of the Nginx cache, but some part of the html
>> content depends on the value of a cookie, for example, the name of
>> the currently logged-in user. e.g.
>>
>> <html>
>> <body>
>> Hello, $user
>> </body>
>> </html>
>>
>> The html originates from an upstream and is cached.
>>
>> I'm wondering if there is a mechanism to allow me to plug in a processor
>> and manipulate the content prior to sending it out.
>
> http://wiki.nginx.org/NginxHttpSsiModule
>
> Maxim Dounin
>
>

Re: 10 000 req/s: tpd2 - why it is so fast?

regards!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,12472,18794#msg-18794

From Apache to Nginx+php-fpm : How to port configuration ?

Greetings all.

I successfully installed nginx + php-fpm. I have already read numerous tutorials and examples out there on the web, and I would need some additional insights on how to port the various configuration options I had via .htaccess under Apache.

For instance, CSS files need to be processed by PHP.
Under Apache, I simply put this in the .htaccess:

SetHandler application/x-httpd-php

1) How can I achieve this under Nginx with php-fpm?
2) Any way to specify per-server PHP configs à-la Apache (e.g. php_value memory_limit 64M)?
3) Weird behavior: if I try to access a PHP file (e.g. http://www.mydomain.com/somefile.php), I get a download prompt.
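
For 1), I am guessing something along these lines might work (assuming php-fpm listens on 127.0.0.1:9000, as in my config below), but I am not sure:

# send CSS files through php-fpm instead of serving them statically
location ~ \.css$
{
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}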

Here is the server configuration for mydomain.
Any tweaks and advice would be appreciated.

server
{
listen 80 default;
server_name mydomain.com www.mydomain.com;
expires 0;

access_log access.log;
error_log error.log;

# Redirect from non-www to www
if ($host = 'mydomain.com')
{
rewrite ^/(.*)$ http://www.mydomain.com/$1 permanent;
}

# Simple static content delivery system (versionning only)
location ~* "^(+)-[0-9]{10}\.(js|gif|jpg|jp?g|png|css|swf)$"
{
rewrite "(.*)/(+)-[0-9]{10}\.(js|gif|jp?g|png|css|swf)$" $1/$2.$3 last;
return 403;
}

location /
{

# set document root and index file
root ;
index index.php index.html index.htm;

# if file exists, set expire and return it
if (-f $request_filename)
{
expires 1y;
break;
}

# if directory exists return it right away
if (-d $request_filename)
{
break;
}

# rewrite everything else to index.php
if (!-e $request_filename)
{
rewrite ^ /index.php last;
break;
}

}
# if the request starts with our frontcontroller, pass it on to fastcgi
location ~ ^/index.php
{

# set document root
root ;

# fastcgi setup
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;

fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;

fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;

fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;

# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;

}

# Prevent files beginning with . from being viewed
location ~ /\.
{
deny all;
}

}


Posted at Nginx Forum: http://forum.nginx.org/read.php?2,18743,18743#msg-18743

Re: How to use cookie for request/conection limiting

Maxim Dounin Wrote:
-------------------------------------------------------
> > Great, but it's pity I could not find it in
> documentation ( and I was reading the Russian one
> - which is supposed to be most comprehensive).
>
> Well, probably you should try again. If you're still
> unable to, here are the closest links:
>
> http://wiki.nginx.org/NginxHttpCoreModule#.24cookie_COOKIE
> http://sysoev.ru/nginx/docs/http/ngx_http_core_module.html#variables
>
Indeed I overlooked it.

It is not clear to me whether any nginx built-in variable is accessible inside any module directive (which uses variables).
I mean, there are several phases in HTTP request processing (as I saw here: http://catap.ru/blog/2009/05/27/nginx-phases-of-handling-http-request/), and it's not clear to me at which phase $cookie_name (or any other variable) is evaluated, and whether that happens before or after the limit_req_zone/limit_zone directives are processed.

Thanks
Alex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,18135,18740#msg-18740

Re: How to use cookie for request/conection limiting

anomalizer Wrote:
-------------------------------------------------------

> >Genuine users of a specific application - this is why I thought that a session
> >should be the most reliable way. The other option is to limit by IP but
> >AFAIU this is not good in case several users are connecting from behind
> >the same proxy. Could you recommend other options?
>
> You need some sort of way to ensure that the per-user token (in this
> case, the session id in a cookie) was actually issued by you.

The web application which I need to throttle is a PHP one. I'm not a PHP coder and am only slightly familiar with PHP - can I assign a custom algorithm to PHP session id generation?
Also, how can I verify the session id inside nginx? Should I write custom verification code in nginx's embedded perl?

> The token
> should have the following properties:
>
> * Computationally inexpensive to check if you had
> issued the token
>
> * Computationally prohibitive for others to create
> a token that will
> pass the test above
>
>
> Failure by the user to produce a legitimate token
> should result in an HTTP 403

Now that I think about cookie-based limiting again, it's not clear to me how new client connections will be handled
by the connection/request limiting modules before the application assigns them a valid cookie.

Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,18135,18730#msg-18730

Re: 10 000 req/s: tpd2 - why it is so fast?

What is the maximum number of requests nginx can handle, assuming it is making a reverse proxy call to an application that responds in less than 100 ms?

Let's say, hypothetically, that it has 500 ports to send the requests to.
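
My own back-of-the-envelope guess: if each of the 500 connections is busy for about 100 ms per request, that caps proxied throughput at roughly 500 / 0.1 s = 5,000 requests per second before nginx itself becomes a factor - but I may be missing something.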

regards

Asif

On Fri, Oct 9, 2009 at 10:55 AM, Denis Filimonov <denis@filimonov.name> wrote:
That benchmark shows that you can send a "hello world" message over a
localhost connection very fast. This is an interesting result on its own but
has nothing to do with real life workloads of http servers.

Denis.

On Friday 09 October 2009 11:30:19 Dinh Pham wrote:
> Hi all,
>
> I have come across an article whose author claimed that his web server
> can handle 10 000 requests per second. Because I am far from an expert on
> high performance web servers, I would like to ask you if his
> benchmark has any flaw, or if his web server is too simple to be slow?
>
> I have never thought that a LISP-based implementation can be as fast as
> C-based implementations such as Nginx. Is there any magic here?
>
> You can read his article here http://john.freml.in/teepeedee2-c10k
>
> Thanks
>
> pcdinh
>


Re: How to use cookie for request/conection limiting

On Oct 30, piavlo wrote:
>anomalizer Wrote:
>-------------------------------------------------------
>
>> Are you trying to limit genuine or malicious
>> users? A malicious user can
>> always circumvent the limits by creating his own
>> cookies and sending
>> them.
>
>Genuine users of a specific application - this is why I thought that a session
>should be the most reliable way. The other option is to limit by IP but
>AFAIU this is not good in case several users are connecting from behind
>the same proxy. Could you recommend other options?

You need some sort of way to ensure that the per-user token (in this
case, the session id in a cookie) was actually issued by you. The token
should have the following properties:

* Computationally inexpensive to check if you had issued the token

* Computationally prohibitive for others to create a token that will
pass the test above


Failure by the user to produce a legitimate token should result in an
HTTP 403
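
For instance, an HMAC over the session id gives you both properties. A rough PHP sketch (illustrative only - the function names, separator and hash choice are my own, not from any framework):

<?php
// token = id . "." . HMAC(secret, id): cheap to verify,
// computationally prohibitive to forge without the server-side $secret
function issue_token($id, $secret) {
    return $id . '.' . hash_hmac('sha1', $id, $secret);
}

function verify_token($token, $secret) {
    $parts = explode('.', $token, 2);
    return count($parts) == 2
        && $parts[1] === hash_hmac('sha1', $parts[0], $secret);
}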

Re: cache post-processor

Hello!

On Sat, Oct 31, 2009 at 02:41:03PM +0800, 冉兵 wrote:

> Hi,
>
> I'm wondering how this can be done:
>
> I'd like to take advantage of the Nginx cache, but some part of the html content depends on the value of a cookie, for example, the name of the currently logged-in user. e.g.
>
> <html>
> <body>
> Hello, $user
> </body>
> </html>
>
> The html originates from an upstream and is cached.
>
> I'm wondering if there is a mechanism to allow me to plug in a processor and manipulate the content prior to sending it out.

http://wiki.nginx.org/NginxHttpSsiModule

Maxim Dounin
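
A minimal sketch of how the SSI suggestion can be wired up (the cache zone, backend name and fragment URL below are made up, not from the thread): the page itself is cached, while SSI pulls the per-user part through an uncached subrequest on every hit.

location / {
    proxy_pass  http://backend;
    proxy_cache STATIC;          # cached copy of the page, SSI commands included
    ssi         on;              # SSI is processed each time the body is sent
}

location = /fragment/user {
    proxy_pass  http://backend;  # no proxy_cache here: fetched fresh every time
}

The upstream then emits "Hello, <!--# include virtual="/fragment/user" -->" instead of "Hello, $user".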

Re: null pointer dereference vulnerability in 0.1.0-0.8.13.

On Friday 30 October 2009 17:32:48 Igor Sysoev wrote:
> On Fri, Oct 30, 2009 at 05:22:41PM +0100, Pior Bastida wrote:
> > On Monday 26 October 2009 19:46:58 Igor Sysoev wrote:
> > > A patch to fix null pointer dereference vulnerability in 0.1.0-0.8.13.
> > > The patch is not required for versions 0.8.15+, 0.7.62+, 0.6.39+,
> > > 0.5.38+.
> >
> > Hello Igor,
> >
> > Can you confirm that it's related to this vulnerability?
> >
> > http://www.securityfocus.com/bid/36839
>
> Yes. However, it's not a buffer overflow as stated there.
> The published exploit always causes a null pointer dereference only,
> and you cannot execute arbitrary code as stated there.

Thank you !

--
Pior Bastida
pior@pbastida.net

Saving file with upload_module

Hello.

I have a little problem with upload_module. According to the
documentation (http://www.grid.net.ru/nginx/upload.en.html), I will
quote:

"The content of each uploaded file then could be read from a file
specified by $upload_tmp_path variable or the file could be simply
moved to ultimate destination."

and my question is how to move, or I would say, save these files to
this ultimate destination. The goal that I want to achieve is simple:
using upload_module, I want to save a file to disk. According to the
sentence above, I think it is possible, but I don't really know how to do
that. Every single page with some example configuration of this
module forwards the file after upload to a backend server. I just want
to save the file on the hard drive. Is it possible?

Best Regards
Grzegorz Sieńko
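
A rough sketch of what this might look like (untested; the paths are made up, and upload_pass still has to point at some location that produces a response):

location /upload {
    upload_store        /data/uploads;    # files stay here unless upload_cleanup removes them
    upload_store_access user:rw group:rw all:r;
    upload_pass         /upload_done;     # something still has to answer the request
}

location = /upload_done {
    return 200;
}

Note the stored files get generated names; the original name and the stored path arrive as form fields (see upload_set_form_field), so renaming or moving them is up to whatever handles the request afterwards.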

Re: How to record how much bandwidth i save using gzip?

ok, i found it ^^ sorry, ignore me ^^
http://wiki.nginx.org/NginxHttpLogModule
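
e.g. something like this, with the gzip module's $gzip_ratio variable (my guess at the closest equivalent of mod_deflate's "pct"):

log_format gzip '$remote_addr - $remote_user [$time_local] '
                '"$request" $status $bytes_sent "$gzip_ratio"';
access_log /var/log/nginx/access.log gzip;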

On Sat, Oct 31, 2009 at 4:13 PM, Kiswono Prayogo <kiswono@gmail.com> wrote:
Hi all, I want to know how much bandwidth I save using gzip compression. Is it available, just like Apache has:

localhost ::1 - - [29/Oct/2009:10:54:43 +0700] "GET /icons/folder.gif HTTP/1.1" 200 232 mod_deflate: 95 pct.
localhost ::1 - - [29/Oct/2009:10:54:43 +0700] "GET /icons/folder.gif HTTP/1.1" 304 - mod_deflate: - pct.
localhost ::1 - - [29/Oct/2009:10:54:43 +0700] "GET /icons/blank.gif HTTP/1.1" 200 158 mod_deflate: 94 pct.
localhost ::1 - - [29/Oct/2009:10:54:43 +0700] "GET /icons/image2.gif HTTP/1.1" 200 313 mod_deflate: 95 pct.
localhost ::1 - - [29/Oct/2009:10:54:43 +0700] "GET /icons/unknown.gif HTTP/1.1" 200 253 mod_deflate: 95 pct.
localhost ::1 - - [29/Oct/2009:10:54:43 +0700] "GET /icons/unknown.gif HTTP/1.1" 304 - mod_deflate: - pct.
localhost ::1 - - [29/Oct/2009:10:55:19 +0700] "GET / HTTP/1.1" 200 26 mod_deflate: 133 pct.
localhost ::1 - - [29/Oct/2009:10:55:29 +0700] "GET /?name=John HTTP/1.1" 200 30 mod_deflate: 120 pct.

127.0.0.1 ::1 - - [20/Oct/2009:12:47:00 +0700] "GET /aplin-homework/index.html HTTP/1.1" 200 2130 mod_deflate: 14 pct.
127.0.0.1 ::1 - - [20/Oct/2009:12:47:00 +0700] "GET /aplin-homework/style.css HTTP/1.1" 200 1516 mod_deflate: 20 pct.
127.0.0.1 ::1 - - [20/Oct/2009:12:47:00 +0700] "GET /aplin-homework/i/c_name.gif HTTP/1.1" 200 209 mod_deflate: 83 pct.
127.0.0.1 ::1 - - [20/Oct/2009:12:47:00 +0700] "GET /aplin-homework/i/m1.jpg HTTP/1.1" 200 575 mod_deflate: 86 pct.
127.0.0.1 ::1 - - [20/Oct/2009:12:47:00 +0700] "GET /aplin-homework/i/m2.jpg HTTP/1.1" 200 393 mod_deflate: 77 pct.
127.0.0.1 ::1 - - [20/Oct/2009:12:47:00 +0700] "GET /aplin-homework/i/m3.jpg HTTP/1.1" 200 753 mod_deflate: 86 pct.

Regards,
Kiswono
GB

How to record how much bandwidth i save using gzip?

Hi all, I want to know how much bandwidth I save using gzip compression. Is it available, just like Apache has:

localhost ::1 - - [29/Oct/2009:10:54:43 +0700] "GET /icons/folder.gif HTTP/1.1" 200 232 mod_deflate: 95 pct.
localhost ::1 - - [29/Oct/2009:10:54:43 +0700] "GET /icons/folder.gif HTTP/1.1" 304 - mod_deflate: - pct.
localhost ::1 - - [29/Oct/2009:10:54:43 +0700] "GET /icons/blank.gif HTTP/1.1" 200 158 mod_deflate: 94 pct.
localhost ::1 - - [29/Oct/2009:10:54:43 +0700] "GET /icons/image2.gif HTTP/1.1" 200 313 mod_deflate: 95 pct.
localhost ::1 - - [29/Oct/2009:10:54:43 +0700] "GET /icons/unknown.gif HTTP/1.1" 200 253 mod_deflate: 95 pct.
localhost ::1 - - [29/Oct/2009:10:54:43 +0700] "GET /icons/unknown.gif HTTP/1.1" 304 - mod_deflate: - pct.
localhost ::1 - - [29/Oct/2009:10:55:19 +0700] "GET / HTTP/1.1" 200 26 mod_deflate: 133 pct.
localhost ::1 - - [29/Oct/2009:10:55:29 +0700] "GET /?name=John HTTP/1.1" 200 30 mod_deflate: 120 pct.

127.0.0.1 ::1 - - [20/Oct/2009:12:47:00 +0700] "GET /aplin-homework/index.html HTTP/1.1" 200 2130 mod_deflate: 14 pct.
127.0.0.1 ::1 - - [20/Oct/2009:12:47:00 +0700] "GET /aplin-homework/style.css HTTP/1.1" 200 1516 mod_deflate: 20 pct.
127.0.0.1 ::1 - - [20/Oct/2009:12:47:00 +0700] "GET /aplin-homework/i/c_name.gif HTTP/1.1" 200 209 mod_deflate: 83 pct.
127.0.0.1 ::1 - - [20/Oct/2009:12:47:00 +0700] "GET /aplin-homework/i/m1.jpg HTTP/1.1" 200 575 mod_deflate: 86 pct.
127.0.0.1 ::1 - - [20/Oct/2009:12:47:00 +0700] "GET /aplin-homework/i/m2.jpg HTTP/1.1" 200 393 mod_deflate: 77 pct.
127.0.0.1 ::1 - - [20/Oct/2009:12:47:00 +0700] "GET /aplin-homework/i/m3.jpg HTTP/1.1" 200 753 mod_deflate: 86 pct.

Regards,
Kiswono
GB

Re: Does nginx support authentication like LDAP or mysql?

You're looking for something like
http://wiki.nginx.org/NginxXSendfile

Just have your PHP script do the auth and have it send the X-Accel-Redirect
header, and nginx will serve the file.

Rob

Sent from my iPhone

On Oct 31, 2009, at 1:07 AM, "partysoft" <nginx-forum@nginx.us> wrote:

> I was wondering if i could implement some type of authentication
> to disallow the downloading of certain files....
>
> What i'm trying to do is implement a downloading system based on
> user accounts...something like Rapidshare..but just the basics: ask
> for the user pass before downloading...
> it would be nice if i could implement this through a PHP page,
> but i don't want to serve with PHP through headers, because then
> nginx won't take over the serving of the files, and resume won't be
> available, and of course the performance will be affected very badly
>
> Thank you for your replies
>
> Cris
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,18552,18552#msg-18552
>
>
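
A minimal sketch of the X-Accel-Redirect pattern Rob describes (the paths and parameter name are hypothetical):

# files under /protected/ are only reachable via X-Accel-Redirect
location /protected/ {
    internal;
    alias /data/files/;
}

and in the PHP script, after the user/pass check succeeds:

header('X-Accel-Redirect: /protected/' . basename($_GET['file']));
exit;

nginx then serves the file itself, so resume/ranges keep working.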

Friday, October 30, 2009

cache post-processor

Hi,
 
I'm wondering how this can be done:
 
I'd like to take advantage of the Nginx cache, but some part of the html content depends on the value of a cookie, for example, the name of the currently logged-in user. e.g.
 
<html>
<body>
  Hello, $user
</body>
</html>
 
The html originates from an upstream and is cached.
 
I'm wondering if there is a mechanism to allow me to plug in a processor and manipulate the content prior to sending it out.
 
Thanks!
 
Bing
 

Does nginx support authentication like LDAP or mysql?

I was wondering if i could implement some type of authentication to disallow the downloading of certain files....

What i'm trying to do is implement a downloading system based on user accounts...something like Rapidshare..but just the basics: ask for the user pass before downloading...
it would be nice if i could implement this through a PHP page,
but i don't want to serve with PHP through headers, because then nginx won't take over the serving of the files, and resume won't be available, and of course the performance will be affected very badly

Thank you for your replies

Cris

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,18552,18552#msg-18552

Re: compiling nginx-0.8.21 fail under linux-2.4.18

How do I show it?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,17810,18546#msg-18546

Limit Zone and Memcache

Hi,

I am looking for someone to modify the limit zone module to store the information in a memcached server instead of in local shared memory so that limit zone can work across multiple servers.

Payment available, please post bids here or email me at martin@evilgeniusmedia.org

I am currently travelling, so responses to emails/posts may be delayed a bit, sorry if this is inconvenient.

Sincerely,
Martin Fjordvald

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,18531,18531#msg-18531

Re: How to use cookie for request/conection limiting

Hello!

On Fri, Oct 30, 2009 at 06:24:00PM -0400, piavlo wrote:

> Igor Sysoev Wrote:
> -------------------------------------------------------
> > You may use $cookie_somename since 0.7.22 and
> > 0.6.36.
>
> Great, but it's a pity I could not find it in the documentation (and I was reading the Russian one, which is supposed to be the most comprehensive).

Well, probably you should try again. If you're still unable to, here are
the closest links:

http://wiki.nginx.org/NginxHttpCoreModule#.24cookie_COOKIE
http://sysoev.ru/nginx/docs/http/ngx_http_core_module.html#variables

Maxim Dounin

Re: How to use cookie for request/conection limiting

Igor Sysoev Wrote:
-------------------------------------------------------
> You may use $cookie_somename since 0.7.22 and
> 0.6.36.

Great, but it's a pity I could not find it in the documentation (and I was reading the Russian one, which is supposed to be the most comprehensive).

Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,18135,18475#msg-18475

Re: How to use cookie for request/conection limiting

anomalizer Wrote:
-------------------------------------------------------

> Are you trying to limit genuine or malicious
> users? A malicious user can
> always circumvent the limits by creating his own
> cookies and sending
> them.

Genuine users of a specific application - this is why I thought that a session should be the most reliable way. The other option
is to limit by IP, but AFAIU this is not good in case several users are connecting from behind the same proxy.
Could you recommend other options?

Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,18135,18474#msg-18474

Re: How to use cookie for request/conection limiting

On Oct 29, piavlo wrote:
>Hi,
>I'd like to limit connections and/or request based on cookies
>
>Is it possible to do it with something like this:
>
>limit_req_zone $cookie_somename zone=one:10m rate=1r/s;
>
>?
>
>The only thing I've found is http://hg.mperillo.ath.cx/nginx/mod_parsed_vars/file/70df16b39e79/README
>but this module has not been updated for 2 years.

Are you trying to limit genuine or malicious users? A malicious user can
always circumvent the limits by creating his own cookies and sending
them.

Re: null pointer dereference vulnerability in 0.1.0-0.8.13.

On Fri, Oct 30, 2009 at 05:22:41PM +0100, Pior Bastida wrote:

> On Monday 26 October 2009 19:46:58 Igor Sysoev wrote:
> > A patch to fix null pointer dereference vulnerability in 0.1.0-0.8.13.
> > The patch is not required for versions 0.8.15+, 0.7.62+, 0.6.39+, 0.5.38+.
>
> Hello Igor,
>
> Can you confirm that it's related to this vulnerability?
>
> http://www.securityfocus.com/bid/36839

Yes. However, it's not a buffer overflow as stated there.
The published exploit always causes a null pointer dereference only,
and you cannot execute arbitrary code as stated there.


--
Igor Sysoev
http://sysoev.ru/en/

Re: null pointer dereference vulnerability in 0.1.0-0.8.13.

On Monday 26 October 2009 19:46:58 Igor Sysoev wrote:
> A patch to fix null pointer dereference vulnerability in 0.1.0-0.8.13.
> The patch is not required for versions 0.8.15+, 0.7.62+, 0.6.39+, 0.5.38+.

Hello Igor,

Can you confirm that it's related to this vulnerability?

http://www.securityfocus.com/bid/36839

Thanks !

--
Pior Bastida
pior@pbastida.net

Re: 10 000 req/s: tpd2 - why it is so fast?

On 10/30/09 10:42 AM, "Igor Sysoev" <is@rambler-co.ru> wrote:

> It would be interesting.
> What type of load is it: dynamic, static or both?

Both and some proxying thrown in there as well. We use fastcgi and/or proxy
to an app server for "really" dynamic stuff.

> Have you tried the prefork MPM for this load?

Yes, we had memory issues. Also, some of the "custom" stuff we do is
more suited to a "few" processes with "many" threads.

Obviously, I'm very interested in nginx :) I've been impressed so far.

--
Brian Akins

Re: memcache key suggestion

Looks like one. Maybe you can try encoding the % there. Try replacing
your "%" with "%25"

On Thu, Oct 29, 2009 at 10:16 PM, brianmercer <nginx-forum@nginx.us> wrote:
> I would like to use Nginx to retrieve pages placed in a memcached bin by Drupal caching.
>
> The Drupal key appears to be:
>
>  cache_page-http%3A%2F%2Fexample.com%2Fcontent%2Fquadrum-utinam
>
> Using
>
>    ...
>    set   $memcached_key   cache_page-$scheme://$host$uri;
>    memcached_pass   127.0.0.1:11211;
>    ...
>
> does not produce a match.  I get:
>
>  key: "cache_page-/content/quadrum-utinam" was not found by memcached while reading response header from upstream ...
>
> Is that an encoding problem?
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,18166,18166#msg-18166
>
>
>

Re: 10 000 req/s: tpd2 - why it is so fast?

On Fri, Oct 30, 2009 at 10:29:01AM -0400, Akins, Brian wrote:

> On 10/30/09 9:29 AM, "Igor Sysoev" <is@rambler-co.ru> wrote:
>
> > Yes, Apache2 should be able to handle c10k using threads.
> > BTW, is the ton of RAM virtual memory (for thread stacks) or physical?
>
> Both. I can get some numbers when I get back into office.

It would be interesting.
What type of load is it: dynamic, static or both?
Have you tried the prefork MPM for this load?


--
Igor Sysoev
http://sysoev.ru/en/

Re: 10 000 req/s: tpd2 - why it is so fast?

On 10/30/09 9:29 AM, "Igor Sysoev" <is@rambler-co.ru> wrote:

> Yes, Apache2 should be able to handle c10k using threads.
> BTW, is the ton of RAM virtual memory (for thread stacks) or physical?

Both. I can get some numbers when I get back into office.

--
Brian Akins

Re: 10 000 req/s: tpd2 - why it is so fast?

On Fri, Oct 30, 2009 at 09:16:46AM -0400, Akins, Brian wrote:

> On 10/30/09 2:53 AM, "Igor Sysoev" <is@rambler-co.ru> wrote:
> > I believe only varnish and nginx in this set are even
> > able to sustain C10K.
>
> FWIW, in the interest of not spreading FUD - apache2 with the worker MPM can
> comfortably handle 10k+ simultaneous connections. It takes a ton of RAM,
> but I see it done every day.

Yes, Apache2 should be able to handle c10k using threads.
BTW, is the ton of RAM virtual memory (for thread stacks) or physical?


--
Igor Sysoev
http://sysoev.ru/en/

Re: 10 000 req/s: tpd2 - why it is so fast?

On 10/30/09 2:53 AM, "Igor Sysoev" <is@rambler-co.ru> wrote:
> I believe only varnish and nginx in this set are even
> able to sustain C10K.

FWIW, in the interest of not spreading FUD - apache2 with the worker MPM can
comfortably handle 10k+ simultaneous connections. It takes a ton of RAM,
but I see it done every day.

--
Brian Akins

Re: Reverse Proxy with caching

On Fri, Oct 30, 2009 at 06:59:32AM -0400, infestdead wrote:

> Igor Sysoev Wrote:
> -------------------------------------------------------
> > On Thu, Oct 15, 2009 at 03:17:20PM +0200, Smrchy
> > wrote:
> >
> > > Hi,
> > >
> > > i setup nginx and it's working great for us for
> > static files. I'm wondering
> > > if nginx would be suited for the following
> > scenario:
> > >
> > > We have an application server that has a lot of
> > .php files and even more
> > > static files (.gif, .jpg etc).
> > >
> > > Can i put nginx in front of it like a proxy but
> > have nginx cache the static
> > > files (everything except the .php stuff). So
> > that only .php requests reach
> > > the server and the static files will be cached
> > on the nginx machine once the
> > > get retrieved from the upstream server.
> >
> > First, you may separate static files from PHP:
> >
> > server {
> >
> > location ~ \.(gif|jpg|png)$ {
> > root /data/www;
> > }
> >
> > location ~ \.php$ {
> > proxy_pass http://backend;
> > }
> >
> > However, you have to retrieve the files by
> > yourself.
> >
> > Second, you may use mirror on demand:
> >
> > server {
> >
> > location ~ \.(gif|jpg|png)$ {
> > root /data/www;
> > error_page 404 = @fetch;
> > }
> >
> > location @fetch {
> > internal;
> >
> > proxy_pass http://backend;
> > proxy_store on;
> > proxy_store_access user:rw group:rw
> > all:r;
> > proxy_temp_path /data/temp;
> >
> > root /data/www;
> > }
> >
> > location ~ \.php$ {
> > proxy_pass http://backend;
> > }
> >
>
> When I try using proxy_pass within a "location ~" block, I get:
> : "proxy_pass" may not have URI part in location given by regular expression, or inside named location, or inside the "if" statement, or inside the "limit_except" block in /etc/nginx/nginx.conf:52
>
> the line is proxy_pass http://127.0.0.1:8080/;
>
> Do you have an idea what the problem might be?

- proxy_pass http://127.0.0.1:8080/;
+ proxy_pass http://127.0.0.1:8080;


--
Igor Sysoev
http://sysoev.ru/en/
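
(The reason for the fix: a URI part in proxy_pass, even just the trailing "/", tells nginx to substitute the part of the request URI matched by the location; with a regular expression location there is no well-defined prefix to substitute, so the URI part is not allowed there. Without it, the request URI is passed to the backend unchanged.)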

Re: Reverse Proxy with caching

infestdead Wrote:
-------------------------------------------------------
> Igor Sysoev Wrote:
> --------------------------------------------------
> >
> > location ~ \.php$ {
> > proxy_pass http://backend;
> > }
> >
>
> When I try using proxy_pass within location ~
> something I get :
> : "proxy_pass" may not have URI part in location
> given by regular expression, or inside named
> location, or inside the "if" statement, or inside
> the "limit_except" block in
> /etc/nginx/nginx.conf:52
>
> the line is proxy_pass http://127.0.0.1:8080/;
>
> Do you have an idea what the problem might be?
>
> Thanks,
> Ivo
>

Hm, I found out what the problem was:
you can't have a URI part in proxy_pass when the location is given by a regexp; in that case you need to define an upstream first:
upstream php_server {
server ip:port;
}
and then
proxy_pass http://php_server;

That did it for me.
Cheers,
Ivo

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,13955,18230#msg-18230

Re: Reverse Proxy with caching

Igor Sysoev Wrote:
-------------------------------------------------------
> On Thu, Oct 15, 2009 at 03:17:20PM +0200, Smrchy
> wrote:
>
> > Hi,
> >
> > i setup nginx and it's working great for us for
> static files. I'm wondering
> > if nginx would be suited for the following
> scenario:
> >
> > We have an application server that has a lot of
> .php files and even more
> > static files (.gif, .jpg etc).
> >
> > Can i put nginx in front of it like a proxy but
> have nginx cache the static
> > files (everything except the .php stuff). So
> that only .php requests reach
> > the server and the static files will be cached
> on the nginx machine once the
> > get retrieved from the upstream server.
>
> First, you may separate static files from PHP:
>
> server {
>
> location ~ \.(gif|jpg|png)$ {
> root /data/www;
> }
>
> location ~ \.php$ {
> proxy_pass http://backend;
> }
>
> However, you have to retrieve the files by
> yourself.
>
> Second, you may use mirror on demand:
>
> server {
>
> location ~ \.(gif|jpg|png)$ {
> root /data/www;
> error_page 404 = @fetch;
> }
>
> location @fetch {
> internal;
>
> proxy_pass http://backend;
> proxy_store on;
> proxy_store_access user:rw group:rw
> all:r;
> proxy_temp_path /data/temp;
>
> root /data/www;
> }
>
> location ~ \.php$ {
> proxy_pass http://backend;
> }
>

When I try using proxy_pass within a "location ~" block, I get:
: "proxy_pass" may not have URI part in location given by regular expression, or inside named location, or inside the "if" statement, or inside the "limit_except" block in /etc/nginx/nginx.conf:52

the line is proxy_pass http://127.0.0.1:8080/;

Do you have an idea what the problem might be?

Thanks,
Ivo

> And finally, you may use proxy cache:
>
> proxy_cache_path /data/nginx/cache levels=1:2
> keys_zone=STATIC:10m
> inactive=24h
> max_size=1g;
> server {
>
> location ~ \.(gif|jpg|png)$ {
> proxy_pass http://backend;
> proxy_cache STATIC;
> proxy_cache_valid 200 1d;
> proxy_cache_use_stale error timeout
> invalid_header updating
> http_500 http_502
> http_503 http_504;
> }
>
> location ~ \.php$ {
> proxy_pass http://backend;
> }
>
>
> --
> Igor Sysoev
> http://sysoev.ru/en/

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,13955,18227#msg-18227

Thursday, October 29, 2009

Re: 10 000 req/s: tpd2 - why it is so fast?

On Thu, Oct 29, 2009 at 01:38:24PM +0300, Maxim Dounin wrote:

> Hello!
>
> On Thu, Oct 29, 2009 at 09:50:25AM +0300, Igor Sysoev wrote:
>
> > On Thu, Oct 29, 2009 at 11:38:17AM +0900, Zev Blut wrote:
> >
> > > Hello,
> > >
> > > On 10/10/2009 01:42 AM, Igor Sysoev wrote:
> > > > On Fri, Oct 09, 2009 at 08:26:32PM +0400, Igor Sysoev wrote:
> > > >
> > > >> I have got these results via localhost:
> > > >>
> > > >> ab -n 30000 -c 10 ~8200 r/s
> > > >> ab -n 30000 -c 10 -k ~20000 r/s
> > > >>
> > > >> This means that this microbenchmark tests mostly TCP connection
> > > >> establishment via localhost: keepalive is 2.4 times faster.
> > > >
> > > > BTW, using embedded perl:
> > > >
> > > > server {
> > > > listen 8010;
> > > > access_log off;
> > > >
> > > > location = /test {
> > > > perl 'sub {
> > > > my $r = shift;
> > > > $r->send_http_header("text/html");
> > > > $r->print("<h1>Hello ", $r->variable("arg_name"), "</h1>");
> > > > return OK;
> > > > }';
> > > > }
> > > > }
> > > >
> > > > "ab -n 30000 -c 10 -k" has got ~7800 r/s.
> > >
> > > In case you are curious, John has posted an update
> > > comparing teepeedee2 vs the above perl module on his laptop.
> > > Here is the link:
> > >
> > > http://john.freml.in/teepeedee2-vs-nginx
> >
> > For some reason, he ran "ab -c1" instead of "ab -c10", while nginx may
> > run perl in 2 workers on a Core2 Duo (if worker_processes is 2). I believe
> > it will double the benchmark result. Second, he still mostly tests TCP
> > connection establishment via localhost instead of server speed. Why
> > can he not run the benchmark with keepalive?
>
> Well, it's the "useless benchmarks about nothing" game as
> presented by Alex Kapranoff at the last Highload++ conference. It's
> not about server speed, it's about multiple useless numbers and
> fun. The key thing is to keep benchmarks as equal as possible, so
> using keepalive here is not an option, as he didn't use it in previous
> benchmarks.
>
> Using "-c1" instead of "-c10" (as used in original post) looks
> like a bug which rendered new results completely irrelevant. So
> nothing to talk about.

BTW, the benchmark is really strange: first he mentions the C10K problem (10,000
simultaneous connections), but then talks about a record of 10,000 requests
per second via just 10 simultaneous connections. This is a very, very
different thing. I believe only varnish and nginx in this set are even
able to sustain C10K. As to varnish, I do not understand what it does in
the benchmark at all. As I understand it, varnish is only a caching proxy
server and cannot generate dynamic responses (except error pages).


--
Igor Sysoev
http://sysoev.ru/en/

Re: How to use cookie for request/conection limiting

On Thu, Oct 29, 2009 at 08:20:57PM -0400, piavlo wrote:

> Hi,
> I'd like to limit connections and/or request based on cookies
>
> Is it possible to do it with something like this:
>
> limit_req_zone $cookie_somename zone=one:10m rate=1r/s;
>
> ?
>
> The only thing I've found is http://hg.mperillo.ath.cx/nginx/mod_parsed_vars/file/70df16b39e79/README
> but this module has not been updated for 2 years.

You may use $cookie_somename since 0.7.22 and 0.6.36.


--
Igor Sysoev
http://sysoev.ru/en/
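
For example, a minimal sketch ("somename" stands for whatever the application's session cookie is called):

limit_req_zone  $cookie_somename  zone=one:10m  rate=1r/s;

server {
    location / {
        limit_req  zone=one  burst=5;
    }
}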

Re: compiling nginx-0.8.21 fail under linux-2.4.18

On Thu, Oct 29, 2009 at 09:05:15PM -0400, testking wrote:

> make is OK, but when I start nginx: "Segmentation fault"

Could you show a backtrace of the core file?


--
Igor Sysoev
http://sysoev.ru/en/

Re: memcache key suggestion

> key: "cache_page-/content/quadrum-utinam" was not found by memcached
> while reading response header from upstream ...
>
> Is that an encoding problem?

Run "memcached -vv" and you'll see what keys Drupal and nginx are using.

Best regards,
Piotr Sikora < piotr.sikora@frickle.com >
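
If the keys turn out to differ only in the URL-escaping, one possible workaround is to build the key in embedded perl. A rough, untested sketch (requires nginx built with the perl module; the variable name is made up):

perl_set $drupal_key 'sub {
    my $r = shift;
    my $u = "http://" . $r->header_in("Host") . $r->uri;
    $u =~ s/([^A-Za-z0-9_.~-])/sprintf("%%%02X", ord($1))/ge;  # percent-encode
    return "cache_page-" . $u;
}';

...

set $memcached_key $drupal_key;
memcached_pass 127.0.0.1:11211;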

memcache key suggestion

I would like to use Nginx to retrieve pages placed in a memcached bin by Drupal caching.

The Drupal key appears to be:

cache_page-http%3A%2F%2Fexample.com%2Fcontent%2Fquadrum-utinam

Using

...
set $memcached_key cache_page-$scheme://$host$uri;
memcached_pass 127.0.0.1:11211;
...

does not produce a match. I get:

key: "cache_page-/content/quadrum-utinam" was not found by memcached while reading response header from upstream ...

Is that an encoding problem?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,18166,18166#msg-18166

Re: compiling nginx-0.8.21 fail under linux-2.4.18

make is OK, but when I start nginx: "Segmentation fault"

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,17810,18140#msg-18140

How to use cookie for request/conection limiting

Hi,
I'd like to limit connections and/or request based on cookies

Is it possible to do it with something like this:

limit_req_zone $cookie_somename zone=one:10m rate=1r/s;

?

The only thing I've found is http://hg.mperillo.ath.cx/nginx/mod_parsed_vars/file/70df16b39e79/README
but this module has not been updated for 2 years.

Thanks
Alex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,18135,18135#msg-18135

Re: FRiCKLE Labs & MegiTeam pres. ngx_supervisord

This is exciting.  Thanks for contributing it.

Roger

2009/10/28 Piotr Sikora <piotr.sikora@frickle.com>
Hello,
I'm proud to present a module, already mentioned on the list a few days ago by Grzegorz Nosek, that provides nginx with an API to communicate with supervisord.

The initial release adds the ability to START and STOP backends (or any programs) on demand. If supervisord's [program:backend0] entry is configured with the "startsecs" parameter, then supervisord (and ngx_supervisord) will wait that long before returning a successful or failed status. This is of course done in an asynchronous way, so it doesn't halt nginx for a moment.

This simple feature, combined with load-aware load balancers (like Grzegorz Nosek's nginx-upstream-fair), can offer very powerful features (starting the first backend when the first request arrives, starting/stopping backends on demand depending on the load, etc, etc). A patch for nginx-upstream-fair is included in the release, and it shows how easily all of this can be achieved.

The current version allows only one module to "register" its monitors with ngx_supervisord, but as soon as there is a need for more (read: other modules start using ngx_supervisord), this will be changed without any changes to the API.

If something is unclear, just ask.


This module was fully funded by megiteam.pl

For more information, API specification & download please visit:
http://labs.frickle.com/nginx_ngx_supervisord/


Best regards,
Piotr Sikora < piotr.sikora@frickle.com >



Re: 10 000 req/s: tpd2 - why it is so fast?

On Thu, Oct 29, 2009 at 01:38:24PM +0300, Maxim Dounin wrote:

> Hello!
>
> On Thu, Oct 29, 2009 at 09:50:25AM +0300, Igor Sysoev wrote:
>
> > On Thu, Oct 29, 2009 at 11:38:17AM +0900, Zev Blut wrote:
> >
> > > Hello,
> > >
> > > On 10/10/2009 01:42 AM, Igor Sysoev wrote:
> > > > On Fri, Oct 09, 2009 at 08:26:32PM +0400, Igor Sysoev wrote:
> > > >
> > > >> I have got these results via localhost:
> > > >>
> > > >> ab -n 30000 -c 10 ~8200 r/s
> > > >> ab -n 30000 -c 10 -k ~20000 r/s
> > > >>
> > > >> This means that this microbenchmark tests mostly TCP connection
> > > >> establishment via localhost: keepalive is 2.4 times faster.
> > > >
> > > > BTW, using embedded perl:
> > > >
> > > > server {
> > > > listen 8010;
> > > > access_log off;
> > > >
> > > > location = /test {
> > > > perl 'sub {
> > > > my $r = shift;
> > > > $r->send_http_header("text/html");
> > > > $r->print("<h1>Hello ", $r->variable("arg_name"), "</h1>");
> > > > return OK;
> > > > }';
> > > > }
> > > > }
> > > >
> > > > "ab -n 30000 -c 10 -k" has got ~7800 r/s.
> > >
> > > In case you are curious, John has posted an update
> > > comparing teepeedee2 vs the above perl module on his laptop.
> > > Here is the link:
> > >
> > > http://john.freml.in/teepeedee2-vs-nginx
> >
> > For some reason, he ran "ab -c1" instead of "ab -c10", while nginx may
> > run perl in 2 workers on a Core2 Duo (if worker_processes is 2). I believe
> > it will double the benchmark result. Second, he still mostly tests TCP
> > connection establishment via localhost instead of server speed. Why
> > can he not run the benchmark with keepalive?
>
> Well, it's the "useless benchmarks about nothing" game as
> presented by Alex Kapranoff at the last Highload++ conference. It's
> not about server speed, it's about multiple useless numbers and
> fun. The key thing is to keep benchmarks as equal as possible, so
> using keepalive here is not an option, as he didn't use it in previous
> benchmarks.

I meant: why can he not re-run the whole benchmark with keepalive?

> Using "-c1" instead of "-c10" (as used in original post) looks
> like a bug which rendered new results completely irrelevant. So
> nothing to talk about.
>
> Maxim Dounin


--
Igor Sysoev
http://sysoev.ru/en/

Re: 10 000 req/s: tpd2 - why it is so fast?

Hello!

On Thu, Oct 29, 2009 at 09:50:25AM +0300, Igor Sysoev wrote:

> On Thu, Oct 29, 2009 at 11:38:17AM +0900, Zev Blut wrote:
>
> > Hello,
> >
> > On 10/10/2009 01:42 AM, Igor Sysoev wrote:
> > > On Fri, Oct 09, 2009 at 08:26:32PM +0400, Igor Sysoev wrote:
> > >
> > >> I have got these results via localhost:
> > >>
> > >> ab -n 30000 -c 10 ~8200 r/s
> > >> ab -n 30000 -c 10 -k ~20000 r/s
> > >>
> > >> This means that this microbenchmark tests mostly TCP connection
> > >> establishment via localhost: keepalive is 2.4 times faster.
> > >
> > > BTW, using embedded perl:
> > >
> > > server {
> > > listen 8010;
> > > access_log off;
> > >
> > > location = /test {
> > > perl 'sub {
> > > my $r = shift;
> > > $r->send_http_header("text/html");
> > > $r->print("<h1>Hello ", $r->variable("arg_name"), "</h1>");
> > > return OK;
> > > }';
> > > }
> > > }
> > >
> > > "ab -n 30000 -c 10 -k" has got ~7800 r/s.
> >
> > In case you are curious, John has posted an update
> > comparing teepeedee2 vs the above perl module on his laptop.
> > Here is the link:
> >
> > http://john.freml.in/teepeedee2-vs-nginx
>
> For some reason, he ran "ab -c1" instead of "ab -c10", while nginx may
> run perl in 2 workers on a Core2 Duo (if worker_processes is 2). I believe
> it will double the benchmark result. Second, he still mostly tests TCP
> connection establishment via localhost instead of server speed. Why
> can he not run the benchmark with keepalive?

Well, it's the "useless benchmarks about nothing" game as
presented by Alex Kapranoff at the last Highload++ conference. It's
not about server speed, it's about multiple useless numbers and
fun. The key thing is to keep benchmarks as equal as possible, so
using keepalive here is not an option, as he didn't use it in previous
benchmarks.

Using "-c1" instead of "-c10" (as used in original post) looks
like a bug which rendered new results completely irrelevant. So
nothing to talk about.

Maxim Dounin

Re: Nginx benchmark result share ^^

Kiswono Prayogo wrote:
> this benchmark shows that v8cgi and php are quite the same, except php is faster on string concatenation (because javascript uses the ambiguous "+" operator), and v8cgi is faster on variable indexing (or so i guess because the v8 developers said so..)

Interesting. Since the common way to do string concatenation in JS is by joining arrays, I wonder what sort of performance each has doing long array collapsing.

> ############### WEB SERVER + COMPRESSION + INTERPRETER
>
> and the benchmark using web servers, i don't know if it's a fair configuration:

Maybe it would be better to follow the nginx guide to setting up PHP-FastCGI and run the benchmarks again (essentially PHP against V8CGI rather than Apache+PHP vs. Nginx+V8CGI).

> ############### my spawn fcgi configuration (1024 children):
>
> V8C_SCRIPT="/usr/bin/spawn-fcgi -a 127.0.0.1 -p 9000 -u www-data -g www-data -F 1024 `which v8cgi` $ESP_SCRIPT"

Again, an Nginx + PHP-FastCGI vs. Nginx + V8CGI test means you can use the same number of child processes, so the comparison will be more fair.

--

Phillip B Oldham
ActivityHQ
phill@activityhq.com




Re: Issue with VirtualHost definition order and SNI SSL

On Thu, Oct 29, 2009 at 10:31:21AM +0200, Iantcho Vassilev wrote:

Yes (on the same port)... and it was working for nearly 2 years.

Did these hosts work in MSIE 6.0 ?

> 2009/10/29 Igor Sysoev <is@rambler-co.ru>
>
> > On Thu, Oct 29, 2009 at 09:35:54AM +0200, Iantcho Vassilev wrote:
> >
> > > Thanks for the info.
> > > I checked the browser; TLS is enabled.
> > > Is there a special way to enable it on the server??
> >
> > http://wiki.nginx.org/NginxHttpSslModule#ssl_protocols
> >
> > > It is very strange to me because before Nginx i was using litespeed and
> > > there every SSL host was listening on 443 and everything worked. How they
> > > do it, I don't know...
> >
> > I do not know whether litespeed supports SNI.
> > Are all these hosts listening on a single IP?
> >
> > > 2009/10/29 Igor Sysoev <is@rambler-co.ru>
> > >
> > > > On Wed, Oct 28, 2009 at 11:59:44PM +0200, Iantcho Vassilev wrote:
> > > >
> > > > > Here is the debug on the host when only one site listens to 443
> > > > >
> > > > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 http check ssl handshake
> > > > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 https ssl handshake: 0x16
> > > > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL_do_handshake: -1
> > > > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL_get_error: 2
> > > >
> > > > SNI handshake looks like this:
> > > >
> > > > 2009/10/29 09:53:05 [debug] 73997#0: *1 http check ssl handshake
> > > > 2009/10/29 09:53:05 [debug] 73997#0: *1 https ssl handshake: 0x16
> > > > 2009/10/29 09:53:05 [debug] 73997#0: *1 SSL server name: "
> > www.example.com"
> > > > 2009/10/29 09:53:05 [debug] 73997#0: *1 SSL_do_handshake: -1
> > > > 2009/10/29 09:53:05 [debug] 73997#0: *1 SSL_get_error: 2
> > > >
> > > > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 post event
> > 0000000001DD95A0
> > > > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 delete posted event
> > > > > 0000000001DD95A0
> > > > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL handshake handler: 0
> > > > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL_do_handshake: 1
> > > > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL: SSLv3, cipher:
> > > > > "DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1"
> > > >
> > > > For some reason only SSLv3 has been negotiated.
> > > > Either server has no enabled TLSv1 in ssl_protocols, or browser.
> >
> >
> > --
> > Igor Sysoev
> > http://sysoev.ru/en/
> >
> >

--
Igor Sysoev
http://sysoev.ru/en/

Re: Issue with VirtualHost definition order and SNI SSL


Yes (on the same port)... and it was working for nearly 2 years.

2009/10/29 Igor Sysoev <is@rambler-co.ru>
On Thu, Oct 29, 2009 at 09:35:54AM +0200, Iantcho Vassilev wrote:

> Thanks for the info.
> I checked the browser; TLS is enabled.
> Is there a special way to enable it on the server??

http://wiki.nginx.org/NginxHttpSslModule#ssl_protocols

> It is very strange to me because before Nginx i was using litespeed and
> there every SSL host was listening on 443 and everything worked. How they
> do it, I don't know...

I do not know whether litespeed supports SNI.
Are all these hosts listening on a single IP?

> 2009/10/29 Igor Sysoev <is@rambler-co.ru>
>
> > On Wed, Oct 28, 2009 at 11:59:44PM +0200, Iantcho Vassilev wrote:
> >
> > > Here is the debug on the host when only one site listens to 443
> > >
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 http check ssl handshake
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 https ssl handshake: 0x16
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL_do_handshake: -1
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL_get_error: 2
> >
> > SNI handshake looks like this:
> >
> > 2009/10/29 09:53:05 [debug] 73997#0: *1 http check ssl handshake
> > 2009/10/29 09:53:05 [debug] 73997#0: *1 https ssl handshake: 0x16
> > 2009/10/29 09:53:05 [debug] 73997#0: *1 SSL server name: "www.example.com"
> > 2009/10/29 09:53:05 [debug] 73997#0: *1 SSL_do_handshake: -1
> > 2009/10/29 09:53:05 [debug] 73997#0: *1 SSL_get_error: 2
> >
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 post event 0000000001DD95A0
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 delete posted event
> > > 0000000001DD95A0
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL handshake handler: 0
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL_do_handshake: 1
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL: SSLv3, cipher:
> > > "DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1"
> >
> > For some reason only SSLv3 has been negotiated.
> > Either server has no enabled TLSv1 in ssl_protocols, or browser.


--
Igor Sysoev
http://sysoev.ru/en/


Re: non-http upstream server

JoeL at 2009-10-29 14:59 wrote:
> Hi,
>
> Is there any way that I can develop an nginx module that talks to a non-HTTP upstream server? These could be remote FTP servers or a database server. Any suggestions on how this can be done without affecting the efficiency of the main event loop? I do not want to block on the request to the non-HTTP upstream server.
>
See the memcached module as an example:
http://wiki.nginx.org/NginxHttpMemcachedModule

--
Weibin Yao

Re: Issue with VirtualHost definition order and SNI SSL

On Thu, Oct 29, 2009 at 09:35:54AM +0200, Iantcho Vassilev wrote:

> Thanks for the info.
> I checked the browser; TLS is enabled.
> Is there a special way to enable it on the server??

http://wiki.nginx.org/NginxHttpSslModule#ssl_protocols

> It is very strange to me because before Nginx i was using litespeed and
> there every SSL host was listening on 443 and everything worked. How they
> do it, I don't know...

I do not know whether litespeed supports SNI.
Are all these hosts listening on a single IP?

> 2009/10/29 Igor Sysoev <is@rambler-co.ru>
>
> > On Wed, Oct 28, 2009 at 11:59:44PM +0200, Iantcho Vassilev wrote:
> >
> > > Here is the debug on the host when only one site listens to 443
> > >
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 http check ssl handshake
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 https ssl handshake: 0x16
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL_do_handshake: -1
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL_get_error: 2
> >
> > SNI handshake looks like this:
> >
> > 2009/10/29 09:53:05 [debug] 73997#0: *1 http check ssl handshake
> > 2009/10/29 09:53:05 [debug] 73997#0: *1 https ssl handshake: 0x16
> > 2009/10/29 09:53:05 [debug] 73997#0: *1 SSL server name: "www.example.com"
> > 2009/10/29 09:53:05 [debug] 73997#0: *1 SSL_do_handshake: -1
> > 2009/10/29 09:53:05 [debug] 73997#0: *1 SSL_get_error: 2
> >
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 post event 0000000001DD95A0
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 delete posted event
> > > 0000000001DD95A0
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL handshake handler: 0
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL_do_handshake: 1
> > > 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL: SSLv3, cipher:
> > > "DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1"
> >
> > For some reason only SSLv3 has been negotiated.
> > Either server has no enabled TLSv1 in ssl_protocols, or browser.


--
Igor Sysoev
http://sysoev.ru/en/

Re: compiling nginx-0.8.21 fail under linux-2.4.18

Index: auto/os/linux
===================================================================
--- auto/os/linux (revision 2578)
+++ auto/os/linux (working copy)
@@ -35,6 +35,12 @@
fi


+# posix_fadvise64() had been implemented in 2.5.60
+
+if [ $version -lt 132412 ]; then
+ have=NGX_HAVE_POSIX_FADVISE . auto/nohave
+fi
+
# epoll, EPOLLET version

ngx_feature="epoll"
On Wed, Oct 28, 2009 at 09:33:36PM -0400, testking wrote:

> objs/src/core/ngx_open_file_cache.o: In function `ngx_open_and_stat_file':
> /home/box/nginx-0.8.21/src/core/ngx_open_file_cache.c:531: warning: posix_fadvise64 is not implemented and will always fail
>
> Linux nginxbox 2.4.18-19.7.xsmp #1 SMP Thu Dec 12 07:56:58 EST 2002 i686 unknown
>
> it is ok with nginx-0.8.16.

Try the attached patch.


--
Igor Sysoev
http://sysoev.ru/en/

Re: Issue with VirtualHost definition order and SNI SSL

Thanks for the info.
I checked the browser; TLS is enabled.
Is there a special way to enable it on the server?


It is very strange to me, because before Nginx i was using litespeed, and there every SSL host was listening on 443 and everything worked. How they do it, I don't know...



2009/10/29 Igor Sysoev <is@rambler-co.ru>
On Wed, Oct 28, 2009 at 11:59:44PM +0200, Iantcho Vassilev wrote:

> Here is the debug on the host when only one site listens to 443
>
> 2009/10/29 00:55:11 [debug] 9171#0: *195388 http check ssl handshake
> 2009/10/29 00:55:11 [debug] 9171#0: *195388 https ssl handshake: 0x16
> 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL_do_handshake: -1
> 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL_get_error: 2

SNI handshake looks like this:

2009/10/29 09:53:05 [debug] 73997#0: *1 http check ssl handshake
2009/10/29 09:53:05 [debug] 73997#0: *1 https ssl handshake: 0x16
2009/10/29 09:53:05 [debug] 73997#0: *1 SSL server name: "www.example.com"
2009/10/29 09:53:05 [debug] 73997#0: *1 SSL_do_handshake: -1
2009/10/29 09:53:05 [debug] 73997#0: *1 SSL_get_error: 2

> 2009/10/29 00:55:11 [debug] 9171#0: *195388 post event 0000000001DD95A0
> 2009/10/29 00:55:11 [debug] 9171#0: *195388 delete posted event
> 0000000001DD95A0
> 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL handshake handler: 0
> 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL_do_handshake: 1
> 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL: SSLv3, cipher:
> "DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1"

For some reason only SSLv3 has been negotiated.
Either server has no enabled TLSv1 in ssl_protocols, or browser.


--
Igor Sysoev
http://sysoev.ru/en/


Re: proxy pass based on IP??

No... my point is this:

I have

location / {
    root ...;
    index ...;
    if ($remote_addr = "xxxxxxxxxx") {
        proxy_pass HERE;
    }
}

location ~ \.php {
    fastcgi .....
}

2009/10/29 Igor Sysoev <is@rambler-co.ru>
On Thu, Oct 29, 2009 at 12:16:02AM +0200, Iantcho Vassilev wrote:

> Is it possible to proxy_pass a location based on source IP?

There is a limited solution:

map $remote_addr  $backend {
    default      one;
    192.168.1.1  two;
    192.168.1.2  two;
}

   location / {
    proxy_pass  http://$backend$request_uri;
   }


--
Igor Sysoev
http://sysoev.ru/en/


Re: proxy pass based on IP??

On Thu, Oct 29, 2009 at 12:16:02AM +0200, Iantcho Vassilev wrote:

> Is it possible to proxy_pass a location based on source IP?

There is a limited solution:

map $remote_addr $backend {
default one;
192.168.1.1 two;
192.168.1.2 two;
}

location / {
proxy_pass http://$backend$request_uri;
}


--
Igor Sysoev
http://sysoev.ru/en/
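
For the map above to resolve, upstreams named "one" and "two" also have to be defined, e.g. (the addresses are placeholders):

upstream one {
    server 192.168.1.10:8080;
}

upstream two {
    server 192.168.1.20:8080;
}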

Wednesday, October 28, 2009

Re: Issue with VirtualHost definition order and SNI SSL

On Wed, Oct 28, 2009 at 11:59:44PM +0200, Iantcho Vassilev wrote:

> Here is the debug on the host when only one site listens to 443
>
> 2009/10/29 00:55:11 [debug] 9171#0: *195388 http check ssl handshake
> 2009/10/29 00:55:11 [debug] 9171#0: *195388 https ssl handshake: 0x16
> 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL_do_handshake: -1
> 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL_get_error: 2

SNI handshake looks like this:

2009/10/29 09:53:05 [debug] 73997#0: *1 http check ssl handshake
2009/10/29 09:53:05 [debug] 73997#0: *1 https ssl handshake: 0x16
2009/10/29 09:53:05 [debug] 73997#0: *1 SSL server name: "www.example.com"
2009/10/29 09:53:05 [debug] 73997#0: *1 SSL_do_handshake: -1
2009/10/29 09:53:05 [debug] 73997#0: *1 SSL_get_error: 2

> 2009/10/29 00:55:11 [debug] 9171#0: *195388 post event 0000000001DD95A0
> 2009/10/29 00:55:11 [debug] 9171#0: *195388 delete posted event
> 0000000001DD95A0
> 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL handshake handler: 0
> 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL_do_handshake: 1
> 2009/10/29 00:55:11 [debug] 9171#0: *195388 SSL: SSLv3, cipher:
> "DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1"

For some reason only SSLv3 has been negotiated.
Either server has no enabled TLSv1 in ssl_protocols, or browser.


--
Igor Sysoev
http://sysoev.ru/en/

non-http upstream server

Hi,

Is there any way that I can develop an nginx module that talks to a non-HTTP upstream server? These could be remote FTP servers or a database server. Any suggestions on how this can be done without affecting the efficiency of the main event loop? I do not want to block on the request to the non-HTTP upstream server.

Thanks.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,17852,17852#msg-17852

Re: 10 000 req/s: tpd2 - why it is so fast?

On Thu, Oct 29, 2009 at 11:38:17AM +0900, Zev Blut wrote:

> Hello,
>
> On 10/10/2009 01:42 AM, Igor Sysoev wrote:
> > On Fri, Oct 09, 2009 at 08:26:32PM +0400, Igor Sysoev wrote:
> >
> >> I have got these results via localhost:
> >>
> >> ab -n 30000 -c 10 ~8200 r/s
> >> ab -n 30000 -c 10 -k ~20000 r/s
> >>
> >> This means that this microbenchmark tests mostly TCP connection
> >> establishment via localhost: keepalive is 2.4 times faster.
> >
> > BTW, using embedded perl:
> >
> > server {
> > listen 8010;
> > access_log off;
> >
> > location = /test {
> > perl 'sub {
> > my $r = shift;
> > $r->send_http_header("text/html");
> > $r->print("<h1>Hello ", $r->variable("arg_name"), "</h1>");
> > return OK;
> > }';
> > }
> > }
> >
> > "ab -n 30000 -c 10 -k" has got ~7800 r/s.
>
> In case you are curious, John has posted an update
> comparing teepeedee2 vs the above perl module on his laptop.
> Here is the link:
>
> http://john.freml.in/teepeedee2-vs-nginx

For some reason, he ran "ab -c1" instead of "ab -c10", while nginx may
run perl in 2 workers on a Core2 Duo (if worker_processes is 2). I believe
it will double the benchmark result. Second, he still mostly tests TCP
connection establishment via localhost instead of server speed. Why
can he not run the benchmark with keepalive?


--
Igor Sysoev
http://sysoev.ru/en/

Nginx benchmark result share ^^

Hi, because of the teepeedee2 thread, I tried to benchmark (nginx + spawn-fcgi + v8cgi x 1024 children) vs (apache2 + mod_php + php5), using my testing and development setup as an example.

############### SIMPLE LOOP AND CONCATENATION

### bench.php
<? for($zxc=0;$zxc<999999;++$zxc) { echo ' '.$zxc; }

## time php bench.php > /dev/null
real    0m0.833s
user    0m0.712s
sys     0m0.104s

### bench.esp
for(var zxc=0;zxc<999999;++zxc) { system.stdout(' '+zxc); }

## time v8cgi bench.esp > /dev/null
real    0m0.696s
user    0m0.668s
sys     0m0.004s

############### SIMPLE LOOP, INDEX, MATH AND CONCATENATION

### bench2.php
<? $str = array();
$str2 = ' ';
$max = 1000;
$max2 = 999999;
for($zxc=0;$zxc<$max2;++$zxc) { $str[$zxc*$zxc%$max] += $zxc*$zxc%$max; }
for($zxc=0;$zxc<$max;++$zxc) { $str2 .= $str[$zxc]; }

## time php bench2.php
real    0m0.660s
user    0m0.604s
sys     0m0.040s


### bench2.esp
var $str = [];
var $str2 = ' ';
var $max = 1000;
var $max2 = 999999;
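// note: the slots of $str start out undefined, so '+=' below yields NaN
// (PHP treats unset array slots as 0 instead)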
for(var $zxc=0;$zxc<$max2;++$zxc) { $str[$zxc*$zxc%$max] += $zxc*$zxc%$max; }
for(var $zxc=0;$zxc<$max;++$zxc) { $str2 += $str[$zxc]; }

## time v8cgi bench2.esp
real    0m0.319s
user    0m0.308s
sys     0m0.008s

############### SIMPLE LONG CONCATENATION

### bench3.php
<? $str = '<table>';
$max = 999;
for($zxc=0;$zxc<$max;++$zxc) {
$str .= '<tr>';
       for($xcv=0;$xcv<$zxc;++$xcv) {
               $str .= '<td>' . $zxc . ' ' . $xcv . '</td>';
       }
$str .= '</tr>';
}
$str .= '</table>';

## time php bench3.php
real    0m0.621s
user    0m0.576s
sys     0m0.036s

### bench3.esp
var $str = '<table>';
var $max = 999;
for(var $zxc=0;$zxc<$max;++$zxc) {
$str += '<tr>';
       for(var $xcv=0;$xcv<$zxc;++$xcv) {
               $str += '<td>' + $zxc + ' ' + $xcv + '</td>';
       }
$str += '</tr>';
}
$str += '</table>';

## time v8cgi bench3.esp
real    0m0.831s
user    0m0.696s
sys     0m0.092s

############### INTERPRETER

These benchmarks show that v8cgi and php are about the same, except that php is faster at string concatenation (because javascript uses the ambiguous "+" operator) and v8cgi is faster at variable indexing (or so I guess, because the v8 developers said so..).
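
For what it's worth, a common workaround for the "+" cost (not used in the benchmarks above) is to collect the fragments in an array and join once at the end:

// bench3 variant: push fragments, then a single join(), avoiding
// repeated string reallocation from '+' concatenation
var parts = ['<table>'];
for (var zxc = 0; zxc < 999; ++zxc) {
    parts.push('<tr>');
    for (var xcv = 0; xcv < zxc; ++xcv) {
        parts.push('<td>', zxc, ' ', xcv, '</td>');
    }
    parts.push('</tr>');
}
parts.push('</table>');
var str = parts.join('');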

############### WEB SERVER + COMPRESSION + INTERPRETER

And here are the benchmarks using the web servers; I don't know if the configurations are fair:

############### my nginx configuration (I'm a newbie):

user www-data;
worker_processes  1;

error_log  /var/log/nginx/error.log info;
pid        /var/run/nginx.pid;

events {
   worker_connections  1024;
}

http {
   include       /etc/nginx/mime.types;

   access_log  /var/log/nginx/access.log;

   sendfile        on;

   keepalive_timeout  65;
   tcp_nodelay        on;

   gzip  on;
   gzip_disable  msie6;

   include /etc/nginx/conf.d/*.conf;
   include /etc/nginx/sites-enabled/*;
}

server {
       listen   80;
       server_name  localhost;

       access_log  /var/log/nginx/localhost.access.log;
       error_log /var/log/nginx/localhost.error.log notice;

       location / {
               root /home/kyz/Projects/site;
               index  index.html index.htm;
               autoindex on;                
       }                                    

       location ~ \.(sjs|ssjs|esp)$ {
               fastcgi_pass 127.0.0.1:9000;
               fastcgi_param  SCRIPT_FILENAME  /home/kyz/Projects/site$fastcgi_script_name;
               include fastcgi_params;                                                    
       }                                                                                  

       location /doc {
               root   /usr/share;
               autoindex on;    
               allow 127.0.0.1;  
               deny all;        
       }                        

       location /images {
               root   /usr/share;
               autoindex off;    
       }                        

}

############### my apache2 configuration (I'm quite a newbie too, I guess ^^ and I'm not using apache anymore):

<VirtualHost *:80>
       ServerSignature Off
       <Directory />
               Options FollowSymLinks
               AllowOverride None
       </Directory>
       DocumentRoot /home/kyz/Projects/site
       <Directory /home/kyz/Projects/site >
               Options FollowSymLinks Indexes
               AllowOverride AuthConfig FileInfo Limit Options
               Order Allow,Deny
               Allow from All
       </Directory>
       SetOutputFilter DEFLATE
       BrowserMatch ^Mozilla/4 gzip-only-text/html
       BrowserMatch ^Mozilla/4\.0[678] no-gzip
       BrowserMatch \bMSI[E] !no-gzip !gzip-only-text/html
       AddOutputFilterByType DEFLATE text/plain
       AddOutputFilterByType DEFLATE text/xml
       AddOutputFilterByType DEFLATE application/xhtml+xml
       AddOutputFilterByType DEFLATE text/css
       AddOutputFilterByType DEFLATE application/xml
       AddOutputFilterByType DEFLATE image/svg+xml
       AddOutputFilterByType DEFLATE application/rss+xml
       AddOutputFilterByType DEFLATE application/atom_xml
       AddOutputFilterByType DEFLATE application/javascript
       AddOutputFilterByType DEFLATE application/x-javascript
       AddOutputFilterByType DEFLATE application/x-httpd-php
       AddOutputFilterByType DEFLATE application/x-httpd-fastphp
       AddOutputFilterByType DEFLATE application/x-httpd-eruby
       AddOutputFilterByType DEFLATE text/html
       DeflateFilterNote deflate_ratio
       LogFormat "%v %h %l %u %t \"%r\" %>s %b mod_deflate: %{deflate_ratio}n pct." vhost_with_deflate_info
       CustomLog /var/log/apache2/kyz_deflate_access.log vhost_with_deflate_info
       ErrorLog /var/log/apache2/kyz_error.log
       LogLevel warn
       CustomLog /var/log/apache2/kyz_access.log combined
</VirtualHost>

############### my spawn-fcgi configuration (1024 children):

V8C_SCRIPT="/usr/bin/spawn-fcgi -a 127.0.0.1 -p 9000 -u www-data -g www-data -F 1024 `which v8cgi` $ESP_SCRIPT"

############### the hello someone script

the test.php script:
<h1>Hello <? echo $_GET['name']; ?></h1>

the test.esp script:
response.write('<h1>Hello '+request.get.name+'</h1>');

############### NGINX0.8 hello someone

ab -n 5000 -c 10 http://127.0.0.1/test.esp?name=john
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests


Server Software:        nginx/0.8.19
Server Hostname:        127.0.0.1
Server Port:            80

Document Path:          /test.esp?name=john
Document Length:        19 bytes

Concurrency Level:      10
Time taken for tests:   24.448 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      705000 bytes
HTML transferred:       95000 bytes
Requests per second:    204.51 [#/sec] (mean)
Time per request:       48.897 [ms] (mean)
Time per request:       4.890 [ms] (mean, across all concurrent requests)
Transfer rate:          28.16 [Kbytes/sec] received

Connection Times (ms)
             min  mean[+/-sd] median   max
Connect:        0    0   1.9      0      48
Processing:     5   49  38.7     35     277
Waiting:        0   48  38.7     34     277
Total:          5   49  38.8     35     277

Percentage of the requests served within a certain time (ms)
 50%     35
 66%     41
 75%     47
 80%     56
 90%    113
 95%    138
 98%    166
 99%    183
100%    277 (longest request)

############### APACHE2 hello someone

ab -n 5000 -c 1 http://127.0.0.1/test.php?name=john
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests


Server Software:        Apache/2.2.12
Server Hostname:        127.0.0.1
Server Port:            80

Document Path:          /test.php?name=john
Document Length:        329 bytes

Concurrency Level:      1
Time taken for tests:   1.959 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Non-2xx responses:      5000
Total transferred:      2660000 bytes
HTML transferred:       1645000 bytes
Requests per second:    2551.83 [#/sec] (mean)
Time per request:       0.392 [ms] (mean)
Time per request:       0.392 [ms] (mean, across all concurrent requests)
Transfer rate:          1325.75 [Kbytes/sec] received

Connection Times (ms)
             min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     0    0   0.2      0       6
Waiting:        0    0   0.1      0       6
Total:          0    0   0.2      0       7

Percentage of the requests served within a certain time (ms)
 50%      0
 66%      0
 75%      0
 80%      0
 90%      0
 95%      0
 98%      0
 99%      1
100%      7 (longest request)

ab -n 5000 -c 10 http://127.0.0.1/test.php?name=john
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests


Server Software:        Apache/2.2.12
Server Hostname:        127.0.0.1
Server Port:            80

Document Path:          /test.php?name=john
Document Length:        20 bytes

Concurrency Level:      10
Time taken for tests:   1.890 seconds
Complete requests:      5000
Failed requests:        4978
  (Connect: 0, Receive: 0, Length: 4978, Exceptions: 0)
Write errors:           0
Non-2xx responses:      4980
Total transferred:      2654880 bytes
HTML transferred:       1638900 bytes
Requests per second:    2645.16 [#/sec] (mean)
Time per request:       3.780 [ms] (mean)
Time per request:       0.378 [ms] (mean, across all concurrent requests)
Transfer rate:          1371.60 [Kbytes/sec] received

Connection Times (ms)
             min  mean[+/-sd] median   max
Connect:        0    2   0.6      2       9
Processing:     1    2   0.7      2      10
Waiting:        0    2   0.7      1       9
Total:          1    4   0.8      4      11
WARNING: The median and mean for the waiting time are not within a normal deviation
       These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
 50%      4
 66%      4
 75%      4
 80%      4
 90%      4
 95%      5
 98%      5
 99%      6
100%     11 (longest request)

############### APACHE2 bench3.php max 99 with echo

ab -n 50 -c 10 http://127.0.0.1/bench3.php
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient).....done


Server Software:        Apache/2.2.12
Server Hostname:        127.0.0.1
Server Port:            80

Document Path:          /bench3.php
Document Length:        331 bytes

Concurrency Level:      10
Time taken for tests:   0.040 seconds
Complete requests:      50
Failed requests:        1
  (Connect: 0, Receive: 0, Length: 1, Exceptions: 0)
Write errors:           0
Non-2xx responses:      50
Total transferred:      127498 bytes
HTML transferred:       116968 bytes
Requests per second:    1258.34 [#/sec] (mean)
Time per request:       7.947 [ms] (mean)
Time per request:       0.795 [ms] (mean, across all concurrent requests)
Transfer rate:          3133.50 [Kbytes/sec] received

Connection Times (ms)
             min  mean[+/-sd] median   max
Connect:        0    2   2.8      2      12
Processing:     1    5   4.6      2      23
Waiting:        1    3   3.1      2      12
Total:          3    7   5.0      4      25

Percentage of the requests served within a certain time (ms)
 50%      4
 66%     11
 75%     12
 80%     13
 90%     14
 95%     14
 98%     25
 99%     25
100%     25 (longest request)

ab -n 50 -c 1 http://127.0.0.1/bench3.php
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient).....done


Server Software:        Apache/2.2.12
Server Hostname:        127.0.0.1
Server Port:            80

Document Path:          /bench3.php
Document Length:        67840 bytes

Concurrency Level:      1
Time taken for tests:   0.201 seconds
Complete requests:      50
Failed requests:        32
   (Connect: 0, Receive: 0, Length: 32, Exceptions: 0)
Write errors:           0
Non-2xx responses:      32
Total transferred:      1241628 bytes
HTML transferred:       1231712 bytes
Requests per second:    248.32 [#/sec] (mean)
Time per request:       4.027 [ms] (mean)
Time per request:       4.027 [ms] (mean, across all concurrent requests)
Transfer rate:          6021.90 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     0    4   4.8      0      11
Waiting:        0    2   2.9      0      10
Total:          0    4   4.9      0      11

Percentage of the requests served within a certain time (ms)
  50%      0
  66%     10
  75%     10
  80%     10
  90%     11
  95%     11
  98%     11
  99%     11
 100%     11 (longest request)

############### NGINX0.8 bench3.esp max 99 with response.write

ab -n 50 -c 1 http://127.0.0.1/bench3.esp
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient).....done


Server Software:        nginx/0.8.19
Server Hostname:        127.0.0.1
Server Port:            80

Document Path:          /bench3.esp
Document Length:        67840 bytes

Concurrency Level:      1
Time taken for tests:   2.455 seconds
Complete requests:      50
Failed requests:        0
Write errors:           0
Total transferred:      3398100 bytes
HTML transferred:       3392000 bytes
Requests per second:    20.37 [#/sec] (mean)
Time per request:       49.094 [ms] (mean)
Time per request:       49.094 [ms] (mean, across all concurrent requests)
Transfer rate:          1351.87 [Kbytes/sec] received

Connection Times (ms)
             min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    27   49  11.1     54      60
Waiting:       26   48  11.0     54      60
Total:         27   49  11.1     55      60

Percentage of the requests served within a certain time (ms)
 50%     55
 66%     55
 75%     56
 80%     57
 90%     58
 95%     60
 98%     60
 99%     60
100%     60 (longest request)

############### NGINX0.8 other benchmark (recursively prints all global variables)

ab -n 2000 -c 1000 http://127.0.0.1/index.esp
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 200 requests
Completed 400 requests
Completed 600 requests
Completed 800 requests
Completed 1000 requests
Completed 1200 requests
Completed 1400 requests
Completed 1600 requests
Completed 1800 requests
Completed 2000 requests
Finished 2000 requests


Server Software:        nginx/0.8.19
Server Hostname:        127.0.0.1
Server Port:            80

Document Path:          /index.esp
Document Length:        47 bytes

Concurrency Level:      1000
Time taken for tests:   3.690 seconds
Complete requests:      2000
Failed requests:        482
   (Connect: 0, Receive: 0, Length: 482, Exceptions: 0)
Write errors:           0
Non-2xx responses:      482
Total transferred:      427652 bytes
HTML transferred:       164372 bytes
Requests per second:    542.06 [#/sec] (mean)
Time per request:       1844.801 [ms] (mean)
Time per request:       1.845 [ms] (mean, across all concurrent requests)
Transfer rate:          113.19 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   83 379.7     48    3008
Processing:    33  514 384.3    343    1295
Waiting:       30  513 384.3    342    1287
Total:        108  597 528.9    344    3378

Percentage of the requests served within a certain time (ms)
  50%    344
  66%    392
  75%   1142
  80%   1145
  90%   1150
  95%   1199
  98%   1277
  99%   3346
 100%   3378 (longest request)

############### AND????

So, is it already good enough? Because nginx never failed where apache mostly did (except at fewer than 10 connections), even when the fastcgi script executed too slowly..
Btw, I'm sorry if this e-mail is too large ^^ I'm so excited and happy that I found a good justification for leaving apache.. ^^

Regards,
GB

Re: CPS-chained subrequests with I/O interceptions no longer work in nginx 0.8.21 (Was Re: Custom event + timer regressions caused by the new release)

On Wed, Oct 28, 2009 at 6:10 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:
> This isn't really natural (and I talked to Igor about this
> recently, probably this should be changed to simplify things), but
> that's how it works now.
>

So how does one detect the end of a subrequest's output stream in a
subrequest's output filter? Like in the context of the "addition"
module? (The "addition" module explicitly disallows any use in a
subrequest.)

>
> One obvious error I see is that your code tries to do something
> once it passed buffer with last_buf set downstream.  This isn't
> going to work.
>

Oh, not really.

It passes a buffer with last_buf set downstream, but it does not do
anything with the *current* request object. Rather, it tries to do
something with its *parent* request object if the parent request has
not sent its "last buf" yet. Well, I know it is not that obvious here ;)

In other words, it *reopens* the continuation and resumes the
execution of its parent request's content handler's tasks. And that's
exactly why I use the term "continuation" and "CPS" all over this
thread. It's a CPS chain that backtracks, rather than a strictly
linear subrequest chain that never goes back. (The latter will be
illustrated using a patched "addition" module below.)

>
> You were asked to produce reduced code for a reason.  Please do so.
> Your module is huge enough to be hard to analyse, and there are
> too many ways to screw things up.
>

I'll try to produce a standalone module to demonstrate this issue.
It's nontrivial and will take some time :)

> Alternatively you may try to reproduce problem with SSI and/or
> addition filter.
>

The "addition" module does not really work in subrequests yet. In
fact, it explicitly checks r != r->main in its header filter. I've
patched it to make it work recursively in subrequests and could not
reproduce the hang:

addition_types text/plain;

location /main {
    echo main;
    add_after_body /sub;
}

location /sub {
    echo sub;
    add_after_body /sub2;
}

location /sub2 {
    proxy_pass 'http://127.0.0.1:$server_port/foo';
}

location /foo {
    echo foo;
}

I'm getting the expected response of /main without hanging:

main
sub
foo

So I'm guessing it's the subsubrequests I issued on the parent request
of the current subrequest that make the difference.

Could anyone confirm the rationale for the following modification in
nginx 0.8.21?

--- nginx-0.8.20/src/http/ngx_http_request.c    2009-10-02 19:30:47.000000000 +0800
+++ nginx-0.8.21/src/http/ngx_http_request.c    2009-10-22 17:48:42.000000000 +0800
@@ -2235,6 +2248,8 @@
     ngx_log_debug2(NGX_LOG_DEBUG_HTTP, wev->log, 0,
                    "http writer done: \"%V?%V\"", &r->uri, &r->args);
 
+    r->write_event_handler = ngx_http_request_empty_handler;
+
     ngx_http_finalize_request(r, rc);
 }

This change is what causes all the trouble on my side.

Thanks!
-agentzh