Wednesday, June 30, 2010

Re: Empty GIF generator not actually empty

On Thu, Jul 01, 2010 at 10:23:14AM +0400, Igor Sysoev wrote:

> On Thu, Jul 01, 2010 at 01:48:31AM -0400, Lorin Halpert wrote:
>
> > Using the built-in generator changes the color of the background color under
> > Chrome, yet using my own blank doesn't cause this. I've attached a "known
> > good" blank created in Fireworks (same byte size) so it can replace the one
> > in the nginx codebase. I am under windows with no development tools so I
> > can't create a patch myself but I'm able to test a new binary to validate
> > that it's fixed.
>
> The current empty GIF has 2 colors: #0: black and #1 white.
> The white (#1) color is used as background and as transparent color.
>
> The suggested GIF has two colors: #0 gray (C0C0C0) and #1 black.
> The transparent color is #0 (gray). The background color is #256

The background color is #255 ...

> and there is no such color number in the GIF table. I'm not sure
> how browsers will handle this case. Probably they use just #1 color.
>
> However, I do not understand the issue. Could you show example on
> the web where built-in GIF changes background color ?


--
Igor Sysoev
http://sysoev.ru/en/

_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: Empty GIF generator not actually empty

On Thu, Jul 01, 2010 at 01:48:31AM -0400, Lorin Halpert wrote:

> Using the built-in generator changes the color of the background color under
> Chrome, yet using my own blank doesn't cause this. I've attached a "known
> good" blank created in Fireworks (same byte size) so it can replace the one
> in the nginx codebase. I am under windows with no development tools so I
> can't create a patch myself but I'm able to test a new binary to validate
> that it's fixed.

The current empty GIF has two colors: #0 black and #1 white.
The white color (#1) is used as both the background and the transparent color.

The suggested GIF has two colors: #0 gray (C0C0C0) and #1 black.
The transparent color is #0 (gray). The background color is #256
and there is no such color number in the GIF table. I'm not sure
how browsers will handle this case. Probably they use just #1 color.
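These fields can be checked directly: the background-color index is byte 11 of the file, inside the GIF logical screen descriptor. A small parser sketch in Python (the sample bytes below are illustrative, not nginx's actual empty GIF):

```python
import struct

def gif_background_info(data: bytes):
    """Read size, global color table size, and background color index
    from a GIF's header + logical screen descriptor."""
    if data[:3] != b"GIF":
        raise ValueError("not a GIF")
    # bytes 6-12: width(2), height(2), packed flags, bg index, aspect
    width, height, packed, bg_index, _aspect = struct.unpack("<HHBBB", data[6:13])
    # low 3 bits of the packed byte encode the global color table size
    table_size = 2 ** ((packed & 0x07) + 1) if packed & 0x80 else 0
    return {"size": (width, height), "colors": table_size, "background": bg_index}

# A minimal 1x1 GIF header with a 2-color global table and background
# index 1, roughly matching the layout described above (illustrative):
gif = (b"GIF89a" + struct.pack("<HH", 1, 1) + bytes([0x80, 0x01, 0x00])
       + b"\x00\x00\x00\xff\xff\xff")  # palette: #0 black, #1 white
print(gif_background_info(gif))
```

A background index pointing past the color table (as in the suggested replacement GIF) would show up here as `background >= colors`.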

However, I do not understand the issue. Could you show an example on
the web where the built-in GIF changes the background color?


--
Igor Sysoev
http://sysoev.ru/en/


Empty GIF generator not actually empty

Using the built-in generator changes the background color under Chrome, yet using my own blank doesn't cause this. I've attached a "known good" blank created in Fireworks (same byte size) so it can replace the one in the nginx codebase. I am under Windows with no development tools, so I can't create a patch myself, but I'm able to test a new binary to validate that it's fixed.

Re: how to deny the SSL v2.0 handshake when SSL v2.0 is disabled

On Wed, Jun 30, 2010 at 04:21:25PM -0400, Calomel Org wrote:

> Is there any way to completely disable the SSL v2.0 handshake when SSL
> v2.0 support is disabled in nginx.conf ?
>
> This is the SSL configuration used and only TLSv1 is enabled in
> "ssl_protocols".
>
> ## Nginx SSL (FIPS 140-2 experimental)
> ssl on;
> ssl_certificate /ssl_keys/host.org_ssl.crt;
> ssl_certificate_key /ssl_keys/host_ssl.key;
> ssl_ciphers DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-SHA:DES-CBC3-SHA:AES128-SHA;
> ssl_dhparam /ssl_keys/host_dh.pem;
> ssl_prefer_server_ciphers on;
> ssl_protocols TLSv1;
> ssl_session_cache shared:SSL:10m;
> ssl_session_timeout 5m;
>
> The reason this question has come up is SSL Labs has recently been in
> the news promoting a tool to check the compliance of a SSL server. We
> thought we would check our host and we ranked at the very top (93%) of
> the "Recent Best-Rated". The testing site can be found here:
>
> https://www.ssllabs.com/ssldb/index.html
>
> When we checked our server (https://calomel.org) with their tool it
> reported "SSL 2.0+ Upgrade Support" was enabled. We used the OpenSSL
> binary on the command line and found SSLv2 and SSLv3 are definitely
> turned off as Nginx denied the use of these protocols. Only TLSv1 was
> allowed.
>
> The problem is the SSLv2 upgrade support handshake is somehow accepted
> according to SSL Labs. I am not sure how to verify this handshake
> myself.
>
> According to SSL Labs "SSL 2.0+ Upgrade Support" means, "...the server
> supports SSLv2 handshake, even though it may not support SSLv2 itself.
> Essentially it's an optimization. Instead of a client first requesting
> SSLv2 (with a SSLv2 handshake) and failing (if the server does not
> support it), then having to request SSLv3 or better (with a SSLv3
> handshake), the client can use the SSLv2 handshake to indicate support
> for newer protocols." The full news group thread containing this quote
> can be found at:
>
> http://sourceforge.net/mailarchive/forum.php?thread_name=20100629171623.43012oj4b2hgrzi8%40webmail.mxes.net&forum_name=ssllabs-discuss
>
> Lastly, in order for a server to be considered "FIPS 140-2 Compliant"
> it must not respond to any SSLv2 or SSLv3 protocol requests. Only
> TLSv1 (version 1.0 to 1.2) are accepted.
>
> We appreciate any help, suggestions or clarification.

As I understand the OpenSSL sources, it disables SSL 2.0+ upgrade
support only if FIPS mode is enabled. If you built OpenSSL with FIPS
support, then add to openssl.cnf:

openssl_conf = openssl_options

[ openssl_options ]
alg_section = algs

[ algs ]
fips_mode = yes


--
Igor Sysoev
http://sysoev.ru/en/


I configured nginx with php-fpm/fastcgi but not receiving headers at PHP end

I am very new to nginx. Any hints?

You can check the test page at http://74.86.84.197/t.php

As you can see at the bottom of this page, there are no header-related _SERVER variables.

Here is my domain config file:

=============================================================
server {
        listen 74.86.84.197:80;
        server_name scrubly.com;

        location / {
                root   /avinashi/sites/scrubly.com;
                index index.php;

                # if file exists return it right away
                if (-f $request_filename) {
                        break;
                }

                # otherwise rewrite the fucker
                if (!-e $request_filename) {
                        rewrite ^(.+)$ /index.php$1 last;
                        break;
                }

        }

        # if the request starts with our frontcontroller, pass it on to fastcgi
        location ~ ^/(.+).php
        {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME /avinashi/sites/scrubly.com$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                #include /usr/local/nginx/conf/fastcgi_params;

                fastcgi_connect_timeout 60;
                fastcgi_send_timeout 180;
                fastcgi_read_timeout 180;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 4 256k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_temp_file_write_size 256k;
                fastcgi_intercept_errors on;


        }
}
=============================================================


And here is the nginx.conf


=============================================================
worker_processes  6;

error_log  logs/error.log debug;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  0;
    #keepalive_timeout  65;

    gzip  on;
    gzip_comp_level 1;
    gzip_proxied any;
    gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        include /etc/sites/*;

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
}
}
=============================================================

Any help would be great!

Regards,

Ashvin Savani
CEO & Chief Architect,
FlashBrain - A Division of Avinashi

Re: proxy_cache_use_stale

Hello!

On Wed, Jun 30, 2010 at 07:32:32PM -0400, msony wrote:

> If I understand this right if I use proxy_cache_use_stale updating and
> If I have 1000 users trying to access expired cached information. It
> will only send one request to backend server to update the cache ?

Yes.

Maxim Dounin


proxy_cache_use_stale

If I understand this right: if I use proxy_cache_use_stale updating
and I have 1000 users trying to access expired cached information, it
will only send one request to the backend server to update the cache?
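A minimal configuration sketch of that scenario (upstream name and paths below are placeholders, not from the original post):

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=one:10m max_size=1g;

server {
    location / {
        proxy_pass http://backend;   # placeholder upstream
        proxy_cache one;
        proxy_cache_valid 200 20m;
        # While one request refreshes an expired entry, other clients
        # are served the stale copy instead of hitting the backend, so
        # 1000 concurrent users cause a single upstream request.
        proxy_cache_use_stale updating;
    }
}
```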

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,104091,104091#msg-104091



mod_gzip with php

Hello,

I found some strange behavior with PHP scripts:
when the request method is HEAD, I can see some garbage in the response.
=========================
HEAD /server.php HTTP/1.1
Host: www.rss2search.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.6) Gecko/20100625 Firefox/3.6.6
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Content-Length: 0
Content-Type: text/plain; charset=UTF-8
 
HTTP/1.1 200 OK
Server: nginx/0.8.39
Date: Wed, 30 Jun 2010 20:00:33 GMT
Content-Type: text/html
Connection: keep-alive
X-Powered-By: PHP/5.1.6
P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTR STP IND DEM"
Content-Encoding: gzip
 
....................
These nonprintable bytes are: 1F8B080000000000000303000000000000000000
(1F 8B is the gzip magic number; this is the body of an empty gzip stream).

Disabling gzip for HEAD requests in fastcgi.conf helped to fix that:

    if ($request_method = HEAD) {
        gzip off;
    }

Is this behavior expected, and if so, what is the best way to fix this in nginx.conf?

how to deny the SSL v2.0 handshake when SSL v2.0 is disabled

Is there any way to completely disable the SSL v2.0 handshake when SSL
v2.0 support is disabled in nginx.conf ?

This is the SSL configuration used and only TLSv1 is enabled in
"ssl_protocols".

## Nginx SSL (FIPS 140-2 experimental)
ssl on;
ssl_certificate /ssl_keys/host.org_ssl.crt;
ssl_certificate_key /ssl_keys/host_ssl.key;
ssl_ciphers DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:EDH-RSA-DES-CBC3-SHA:AES256-SHA:DES-CBC3-SHA:AES128-SHA;
ssl_dhparam /ssl_keys/host_dh.pem;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 5m;

The reason this question has come up is SSL Labs has recently been in
the news promoting a tool to check the compliance of a SSL server. We
thought we would check our host and we ranked at the very top (93%) of
the "Recent Best-Rated". The testing site can be found here:

https://www.ssllabs.com/ssldb/index.html

When we checked our server (https://calomel.org) with their tool it
reported "SSL 2.0+ Upgrade Support" was enabled. We used the OpenSSL
binary on the command line and found SSLv2 and SSLv3 are definitely
turned off as Nginx denied the use of these protocols. Only TLSv1 was
allowed.

The problem is the SSLv2 upgrade support handshake is somehow accepted
according to SSL Labs. I am not sure how to verify this handshake
myself.

According to SSL Labs "SSL 2.0+ Upgrade Support" means, "...the server
supports SSLv2 handshake, even though it may not support SSLv2 itself.
Essentially it's an optimization. Instead of a client first requesting
SSLv2 (with a SSLv2 handshake) and failing (if the server does not
support it), then having to request SSLv3 or better (with a SSLv3
handshake), the client can use the SSLv2 handshake to indicate support
for newer protocols." The full news group thread containing this quote
can be found at:

http://sourceforge.net/mailarchive/forum.php?thread_name=20100629171623.43012oj4b2hgrzi8%40webmail.mxes.net&forum_name=ssllabs-discuss

Lastly, in order for a server to be considered "FIPS 140-2 Compliant"
it must not respond to any SSLv2 or SSLv3 protocol requests. Only
TLSv1 (versions 1.0 to 1.2) is accepted.

We appreciate any help, suggestions or clarification.
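The distinction SSL Labs is drawing is visible in the wire format: an SSLv2-framed record sets the high bit of its first length byte, while an SSLv3/TLS record starts with a content-type byte (0x16 for handshake). A sketch in Python that builds a minimal SSLv2-style CLIENT-HELLO advertising TLS 1.0, which is the "SSL 2.0+ upgrade" handshake described above (the cipher-spec value is illustrative):

```python
import struct

def sslv2_client_hello(adv_version=(3, 1)):
    """Build a minimal SSLv2-framed CLIENT-HELLO that advertises a
    newer protocol version ((3, 1) = TLS 1.0)."""
    cipher_specs = bytes([0x01, 0x00, 0x80])   # one v2 cipher spec (illustrative)
    challenge = b"\x00" * 16
    body = (bytes([0x01, adv_version[0], adv_version[1]])   # type + version
            + struct.pack(">HHH", len(cipher_specs), 0, len(challenge))
            + cipher_specs + challenge)
    # 2-byte v2 record header: high bit set + 15-bit length
    return struct.pack(">H", 0x8000 | len(body)) + body

def looks_like_sslv2(record: bytes) -> bool:
    # v2 records set the high bit of the first byte; SSLv3/TLS records
    # begin with a content-type byte (0x16 for handshake) instead.
    return bool(record[0] & 0x80)

hello = sslv2_client_hello()
print(looks_like_sslv2(hello), hello[3:5])  # True b'\x03\x01'
```

A server that accepts such a hello and answers in TLS "supports the SSLv2 handshake" in SSL Labs' terms, even with SSLv2 itself disabled.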

--
Calomel @ https://calomel.org
Open Source Research and Reference



Re: proxy_no_cache issue

Hi,

> Any idea on how to do this properly ?

AFAIK your backend must set the "X-Accel-Expires" header with a value > 0
to force a cache update in the "_no_cache" scenario.

Best regards,
Piotr Sikora < piotr.sikora@frickle.com >



proxy_no_cache issue

I understand you can use proxy_no_cache to update the cache; my setup is
like this:
proxy_cache_valid 200 302 20m;
proxy_cache_valid 404 1m;
proxy_no_cache $http_my_secret_header;

And I send the header like this : curl -H "My-Secret-Header: 1"
"www.example.com"

This seems to pull the updated information; however, if I understand it
properly, it should also update the cache with the latest response. But
it's not doing that: all new hits keep pulling information from the
cache until the cache actually expires. Any idea on how to do this properly?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,104020,104020#msg-104020



Re: Setting headers for negative caching

Luca De Marinis Wrote:
-------------------------------------------------------
> On Wed, Jun 30, 2010 at 5:45 PM, bkirkbri wrote:
>
> >> I believe that even if you do, user agents and intermediate caches /
> >> proxies may decide not to honour them, so it may be pointless.
> >> Regards
> >
> > That's true in some cases, definitely.  I'd be interested in a list of
> > which browsers respect Cache-Control for non-200 responses...
> >
> > But we might throw a reverse proxy cache in front of nginx, which would
> > respect those headers and take load off the nginx machine.  Check out
> > http://degizmo.com/2010/03/25/why-you-should-upstream-cache-your-404s/
>
> Interesting point, but I believe this is violating standards for
> performance (or maybe not, I don't know what the http rfc says about
> it), so I'd personally do it only for very good reasons; at our sites
> we don't get that many 404s so not caching them for us is preferable.
> Anyway if you plan on using a reverse proxy it all changes because then
> you have fine control on when to purge a certain url, which obviously
> you don't have when your headers say "I can be saved by anything in
> between for 20 minutes". Even then, when I had to instruct a reverse
> proxy, I found it more convenient to use a custom header rather than
> munging cache-control, but my scenario was a bit different (I usually
> want caching to happen on my proxy, unless some conditions are met,
> and never want intermediate caches or UA's to presume they can cache
> my dynamic content).
>
> Bye

All good advice, thanks.

For what it's worth, the HTTP spec does allow caching of 404, 302, 301,
etc. responses if the Cache-Control header is set explicitly by the
origin. The spec forbids any caching of these responses without a
Cache-Control header though, which is in contrast to the allowed
behavior of caching 200 responses that do not have Cache-Control headers
for some reasonable amount of time.
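As a practical note: in nginx of this vintage, the stock add_header directive only applies to success and redirect responses (200, 204, 301, 302, 304), so attaching Cache-Control to a 404 typically needs a third-party module. A sketch assuming the headers_more module is compiled in:

```nginx
location / {
    error_page 404 /404.html;
    # headers_more (third-party) can set headers on any status code
    more_set_headers -s 404 'Cache-Control: public, max-age=300';
}
```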

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,103267,103983#msg-103983



Re: Setting headers for negative caching

On Wed, Jun 30, 2010 at 5:45 PM, bkirkbri <nginx-forum@nginx.us> wrote:

>> I believe that even if you do, user agents and
>> intermediate caches /
>> proxies may decide not to honour them, so it may
>> be pointless.
>> Regards
>
> That's true in some cases, definitely.  I'd be interested in a list of
> which browsers respect Cache-Control for non-200 responses...
>
> But we might throw a reverse proxy cache in front of nginx, which would
> respect those headers and take load off the nginx machine.  Check out
> http://degizmo.com/2010/03/25/why-you-should-upstream-cache-your-404s/

Interesting point, but I believe this is violating standards for
performance (or maybe not, I don't know what the http rfc says about
it), so I'd personally do it only for very good reasons; at our sites
we don't get that many 404s so not caching them for us is preferable.
Anyway if you plan on using a reverse proxy it all changes because then
you have fine control on when to purge a certain url, which obviously
you don't have when your headers say "I can be saved by anything in
between for 20 minutes". Even then, when I had to instruct a reverse
proxy, I found it more convenient to use a custom header rather than
munging cache-control, but my scenario was a bit different (I usually
want caching to happen on my proxy, unless some conditions are met,
and never want intermediate caches or UA's to presume they can cache
my dynamic content).

Bye


Re: proxy_cache fills harddrive despite max_size being set

I'm seeing something similar. du -sb returns 1411480500 bytes, even
though I have specified 1024 MB as max size.

nginx 0.7.62 on ubuntu 9.10 server.

Here is my proxy_cache config:
proxy_cache_path /var/spool/nginx_proxy_cache
levels=1:2
keys_zone=zone1:10m
inactive=7d
max_size=1024m;

Any ideas? Is it the inactive period of 7d that controls when the
cache manager process runs or something?

--
RPM


Re: Setting headers for negative caching

Luca De Marinis Wrote:
-------------------------------------------------------
> On Mon, Jun 28, 2010 at 6:54 PM, bkirkbri wrote:
>
> > Is it possible to set Cache-Control / Expires headers for 404
> > responses in Nginx?
>
> I believe that even if you do, user agents and intermediate caches /
> proxies may decide not to honour them, so it may be pointless.
> Regards

That's true in some cases, definitely. I'd be interested in a list of
which browsers respect Cache-Control for non-200 responses...

But we might throw a reverse proxy cache in front of nginx, which would
respect those headers and take load off the nginx machine. Check out
http://degizmo.com/2010/03/25/why-you-should-upstream-cache-your-404s/

Best,
Brian

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,103267,103933#msg-103933



nginx-0.8.43

Changes with nginx 0.8.43                                        30 Jun 2010

    *) Feature: large geo ranges base loading speed-up.

    *) Bugfix: an error_page redirection to "location /zero { return 204;
       }" without changing status code kept the error body; the bug had
       appeared in 0.8.42.

    *) Bugfix: nginx might close IPv6 listen socket during
       reconfiguration.
       Thanks to Maxim Dounin.

    *) Bugfix: the $uid_set variable may be used at any request processing
       stage.


--
Igor Sysoev
http://sysoev.ru/en/


Re: Setting headers for negative caching

On Mon, Jun 28, 2010 at 6:54 PM, bkirkbri <nginx-forum@nginx.us> wrote:

> Is it possible to set Cache-Control / Expires headers for 404 responses
> in Nginx?

I believe that even if you do, user agents and intermediate caches /
proxies may decide not to honour them, so it may be pointless.
Regards


Streaming content generated on the fly with nginx

Hi all,

I'm working on a project for which I need to render on the fly and
serve an "endless" mp3 stream (think of a webradio... in which the
audio content is generated automatically). The use case would be:

an HTTP GET on /create_stream returns a token id
an HTTP GET on /stream?id=token serves an endless chain of audio buffers
an HTTP GET on /set_param?id=token&key=value alters one of the audio
rendering settings of the stream.

Latency/pre-buffering should be in the 0.5s-2s ballpark. So ideally,
every 0.5s (or everytime we know the client has consumed 50% of what
was generated during the previous call), some rendering code should be
called.

I'd rather rely on an existing networking I/O infrastructure rather
than rolling my own socket server, so I was exploring the possibility
of doing this with an nginx module. However, I'm not sure how to do
this. It doesn't seem to fit the handler model well, since, from what
I understood, the handler sets up a chain of return buffers, returns
immediately, and doesn't have anything more to say about this chain of
buffers. What I would like to do, instead, would be to generate in the
handler for the "/streaming" request a chain with a couple of buffers
; and also specify a callback that would be called every time the last
but one buffer in the chain has been sent to the client. Is there an
easy way of achieving that?

Another option I thought of would be to reuse something like the
ngx_http_static_module.c, generate a memory buffer with the mp3 header
; and a file buffer referencing the fd of a named pipe ; with a bunch
of processes in parallel scanning all the opened named pipes and
filling them up. If things go well, when time will come to write the
file to the socket, nginx will read and stream as much as possible
from the FIFO and move to something else until the FIFO will become
readable again? In this situation, how do I handle a dropped
connection? Doesn't sound like a good idea to me...

Or would it be possible to do that by abusing the "upstream" plug-in
infrastructure for this application?

I'm really looking for any solution to this "server continually sends
a packet every 0.5s to a http connection kept open" problem.

Best,
Olivier


Re: nginx rewrite rules conditional once again

On Wed, Jun 30, 2010 at 03:44:44PM +0530, Rahul Bansal wrote:

> >
> > You think in a backward logic. Try a forward logic: what should be done for
>
>
> Agree with this!
>
> As a general rule, when writing nginx config, we had better think from
> *scratch*.
> It happened to me that I came across some very complex apache rules and
> ended up sitting idle because I couldn't find their direct conversion in
> nginx.
> But then one day, I just analyzed the input used and output produced by
> the apache rules, and in a few hours I solved the problem myself.
>
> Switching from apache to nginx is as complex as switching from windows
> to linux/mac.
> If you start solving any problem with old knowledge you will always find
> things difficult. ;-)

This is not an Apache issue; it is bad practice to configure anything
with sendmail-style RewriteRules. BTW, Apache is built by default without
mod_rewrite; you have to --enable-rewrite.


--
Igor Sysoev
http://sysoev.ru/en/


Re: nginx rewrite rules conditional once again

You think in a backward logic. Try a forward logic: what should be done for

Agree with this!

As a general rule, when writing nginx config, we had better think from scratch.
It happened to me that I came across some very complex apache rules and ended up sitting idle because I couldn't find their direct conversion in nginx.
But then one day, I just analyzed the input used and the output produced by the apache rules, and in a few hours I solved the problem myself.

Switching from apache to nginx is as complex as switching from windows to linux/mac.
If you start solving any problem with old knowledge you will always find things difficult.  ;-)

-Rahul

Re: nginx push question

Hi Ian.

Thanks a lot for your response.

I knew I could do it this way, but as said in my question:

> I don't wan't each user to have as many request pending as subscribed
> channels ...

:)

I suppose it is actually the only way, but I think it would not be
hard to have something less resource-hungry.
I'll try to look at the module, but I'm not a C programmer (I only have
Java skills).

Thanks.

Mike Baroukh

---
Cardiweb - 29 Cite d'Antin Paris IXeme
+33 6 63 57 27 22 / +33 1 53 21 82 63
http://www.cardiweb.com/
---


On 30/06/2010 11:47, Ian Hobson wrote:
> On 30/06/2010 09:55, Mike Baroukh wrote:
>>
>> Hi.
>>
>> I found the nginx_push_module recently and I'm trying to find if it
>> can fill my needs.
>> It is really fantastic when 1 user subscribe to 1 stream.
>>
>> But, can someone tell me if there is a way for one user to subscribe
>> with one request to many channel ?
>> I don't think because in the activity http response, there is no
>> header with he channel id to distinguish responses :
>>
> Hi Mike,
>
> The way to do this is to have two httprequests outstanding, one for
> each channel.
>
> If you set them up with different callbacks for when a response
> arrives then the messages for each channel will flow through
> independently of the other.
>
> Regards
>
> Ian


Re: nginx push question

On 30/06/2010 09:55, Mike Baroukh wrote:
>
> Hi.
>
> I found the nginx_push_module recently and I'm trying to find if it
> can fill my needs.
> It is really fantastic when 1 user subscribe to 1 stream.
>
> But, can someone tell me if there is a way for one user to subscribe
> with one request to many channel ?
> I don't think because in the activity http response, there is no
> header with he channel id to distinguish responses :
>
Hi Mike,

The way to do this is to have two HTTP requests outstanding, one for each
channel.

If you set them up with different callbacks for when a response arrives
then the messages for each channel will flow through independently of
the other.

Regards

Ian


nginx push question

Hi.

I found the nginx_push_module recently and I'm trying to find out if it
can fill my needs.
It is really fantastic when one user subscribes to one stream.

But can someone tell me if there is a way for one user to subscribe
with one request to many channels?
I don't think so, because in the activity http response there is no header
with the channel id to distinguish responses:

GET /activity?id=2 HTTP/1.1
User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7
OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
Host: 127.0.0.1:8080
Accept: */*


HTTP/1.1 200 OK
Server: nginx/0.7.67
Date: Wed, 30 Jun 2010 08:34:50 GMT
Content-Type: application/x-www-form-urlencoded
Content-Length: 4
Last-Modified: Wed, 30 Jun 2010 08:34:01 GMT
Connection: keep-alive
Etag: 0
Vary: If-None-Match, If-Modified-Since

test

I suppose that a header like
Channel-id: 2
wouldn't be hard to add, but would it be hard to allow subscribing with
a request like
GET /activity?id=1&id=3&id=36...
?

I'm trying to use nginx on a mobile application where users can
subscribe to many, many streams.
But I don't want, if I can avoid it, to make a channel per user, and I
don't want each user to have as many requests pending as subscribed channels ...

A channel per user would be possible, but:
- I must keep track on the server side of which user subscribed to which channel
- I must handle the case where a user no longer exists, to stop wasting
time trying to send them data ...


Any idea of how I can accomplish this actually ?
Maybe I missed something ?


Thanks anyway for this module !

--

Mike Baroukh

---
Cardiweb - 29 Cite d'Antin Paris IXeme
+33 6 63 57 27 22 / +33 1 53 21 82 63
http://www.cardiweb.com/
---


Re: Redirect ends up in a loop

On Wed, Jun 30, 2010 at 03:45:27AM -0400, shainp wrote:

> Hello!
>
> I wanted to use NginxHttpMapModule and NginxRedirect to redirect static
> links to dynamic php links.
>
> I want to redirect http://my_domain/static/static_page1.html to
> http://domain_name/dynamic/zone.php?zoneid=86 by looking up the url from
> a map table.
>
> I tried this but it ends in a redirect loop.
>
> [code]
> map_hash_bucket_size 256;
> map $uri $dynamic_url {
> default 42;
> /static/static_page1.html 86;
> /static/static_page2.html 36;
> }
> server {
> listen 80;
> server_name _;
> rewrite ^
> http://domain_name/dynamic/zone.php?zoneid=$dynamic_url break;
> }
> [/code]

Do you want to proxy or redirect ?

- rewrite ^ http://domain_name/dynamic/zone.php?zoneid=$dynamic_url break;
+ rewrite ^ http://domain_name/dynamic/zone.php?zoneid=$dynamic_url;
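Put together, the corrected server block from the original posting would look like this (domain_name kept as the poster's placeholder):

```nginx
map_hash_bucket_size 256;

map $uri $dynamic_url {
    default                   42;
    /static/static_page1.html 86;
    /static/static_page2.html 36;
}

server {
    listen      80;
    server_name _;
    # Without "break", an absolute-URL rewrite sends an external
    # 302 redirect to the client instead of looping internally.
    rewrite ^ http://domain_name/dynamic/zone.php?zoneid=$dynamic_url;
}
```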


--
Igor Sysoev
http://sysoev.ru/en/


Redirect ends up in a loop

Hello!

I wanted to use NginxHttpMapModule and NginxRedirect to redirect static
links to dynamic php links.

I want to redirect http://my_domain/static/static_page1.html to
http://domain_name/dynamic/zone.php?zoneid=86 by looking up the url from
a map table.

I tried this but it ends in a redirect loop.

[code]
map_hash_bucket_size 256;
map $uri $dynamic_url {
default 42;
/static/static_page1.html 86;
/static/static_page2.html 36;
}
server {
listen 80;
server_name _;
rewrite ^
http://domain_name/dynamic/zone.php?zoneid=$dynamic_url break;
}
[/code]

Please help me with this.
Regards,
Shain

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,103815,103815#msg-103815



Tuesday, June 29, 2010

Re: nginx rewrite rules conditional once again

On Tue, Jun 29, 2010 at 05:27:20PM +0200, Malte Geierhos wrote:

> Hi List,
>
> I've got to convert some Apache Rewrite Rules to work with nginx.
> And i got kind of stuck in between how to solve this.
>
> The old rewrite rule is like this :
>
> RewriteCond %{REQUEST_URI} !/[0-9]+$
> RewriteCond %{REQUEST_URI} ^/(articles|people)/favorites
> RewriteRule ^(articles|people)/(.*)/$ /$1/$2/1/25 [R=301,L]
>
> ### /fragen/beliebte
> RewriteCond %{REQUEST_URI} !/[0-9]+$
> RewriteCond %{REQUEST_URI} ^/(articles|people)/favorites
> RewriteRule ^(articles|people)/(.*) /$1/$2/1/25 [R=301,L]
>
> so basically it's catching requests like ^/articles/favorites/what and
> ^/articles/favorites/, appending /1/25, and, most importantly, ignoring
> URLs that already end in /2/40 or whatever.
>
> At first I was looking into solving it with a location with something
> like :
>
> location ~* ^/(articles|people)([^/]*)$ {
> rewrite ^/(articles|people)/(.*)$ /$1/$2/1/25;
> }
>
> but this did not work out as expected.
> My next idea was to try to catch $2/$3 and see if it's a number
>
> like :
>
> location ~* ^/(articles|people)/(.*)$ {
> if ( $request_arg !~ [0-9]) {
> rewrite ^/(articles|people)/(.*)$ /$1/$2/1/25;
> }
> }
>
> Hm, but now I'm stuck.
> Anyone got an idea ?

You are thinking with backward logic. Try forward logic: what should be done for
/articles/favorites/1/25
and
/articles/favorites/2/40
?

location ~ ^/(articles|people)/.*/[0-9]+/[0-9]+$ {
...
}

location ~ ^/(articles|people)/(.*)$ {
rewrite ^/(articles|people)/(.*)$ /$1/$2/1/25 permanent;

# or 0.8.42+:
#return 301 $scheme://$host/$1/$2/1/25;
}


--
Igor Sysoev
http://sysoev.ru/en/


Re: nginx rewrite rules conditional once again

On Tue, Jun 29, 2010 at 10:27 PM, Malte Geierhos <malte@snapscouts.de> wrote:
> My next idea was to try to catch $2/$3 and see if it's a number
>
> like :
>
> location ~* ^/(articles|people)/(.*)$ {
>         if ( $request_arg !~  [0-9]) {
>               rewrite ^/(articles|people)/(.*)$  /$1/$2/1/25;
>        }
> }
>
> Hm, but now I'm stuck.
> Anyone got an idea ?
>

location ~* ^/(articles|people)/([^0-9]+)$ { ... }

--
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org


nginx rewrite rules conditional once again

Hi List,

I've got to convert some Apache Rewrite Rules to work with nginx.
And I got kind of stuck on how to solve this.

The old rewrite rule is like this :

RewriteCond %{REQUEST_URI} !/[0-9]+$
RewriteCond %{REQUEST_URI} ^/(articles|people)/favorites
RewriteRule ^(articles|people)/(.*)/$ /$1/$2/1/25 [R=301,L]

### /fragen/beliebte
RewriteCond %{REQUEST_URI} !/[0-9]+$
RewriteCond %{REQUEST_URI} ^/(articles|people)/favorites
RewriteRule ^(articles|people)/(.*) /$1/$2/1/25 [R=301,L]

so basically it's catching requests like ^/articles/favorites/what and
^/articles/favorites/, appending /1/25, and, most importantly, ignoring
URLs that already end in /2/40 or whatever.

At first I was looking into solving it with a location with something
like :

location ~* ^/(articles|people)([^/]*)$ {
rewrite ^/(articles|people)/(.*)$ /$1/$2/1/25;
}

but this did not work out as expected.
My next idea was to try to catch $2/$3 and see if it's a number

like :

location ~* ^/(articles|people)/(.*)$ {
if ( $request_arg !~ [0-9]) {
rewrite ^/(articles|people)/(.*)$ /$1/$2/1/25;
}
}

Hm, but now I'm stuck.
Anyone got an idea ?

regards,
Malte



Re: Nginx startup scripts

@Mark
Use php-fpm and setup one pool per user. That's all the suexec you need :)

You hit the bull's-eye!
This is something I have been looking for for a long time.
And after my first use of php-fpm 2-3 days back, this "pool" setting caught my eye.

We have already written php-cli scripts to create virtual hosts (domain configs) in nginx, as well as scripts for a one-click WordPress installer and for Apache-to-nginx migration.

Separating the PHP environments of two users was the next big challenge we were facing.
I hope our company will be able to complete a mini control panel for nginx soon! ;-)

Thanks,
-Rahul

Monday, June 28, 2010

Setting headers for negative caching

Is it possible to set Cache-Control / Expires headers for 404 responses
in Nginx?

I've tried using an error_page like this:

[code]
location ^~ /x/ {
error_page 404 /not_found.html;
}
location /not_found.html {
internal;

root /static/errordoc;
expires 10m;
add_header Cache-Control "private, max-age=600";
}
[/code]

But no luck.

Thanks in advance!
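
One workaround sometimes used is a sketch like the following, and it is a trade-off: since expires and add_header historically apply only to 2xx/3xx responses, the error page is returned with a 200 status via the "=" syntax, sacrificing the real 404 code for cacheability:

```nginx
location ^~ /x/ {
    # "=200" makes nginx send the error page with a 200 status,
    # so the expires directive below takes effect.
    error_page 404 =200 /not_found.html;
}

location = /not_found.html {
    internal;
    root /static/errordoc;
    expires 10m;
}
```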

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,103267,103267#msg-103267



Re: Nginx startup scripts

On 28/06/10 17:27, Michael Shadle wrote:
> I wrote that. :)
>
> You can use the adaptive spawning in php 5.3 to try to craft a way to give each user a process to start at least and then a quota for each user and let them play in that. That's basically the best suggestion I personally have. And just script in something to add to the config for each user and reload php-fpm.
>

Sorry, wasn't paying attention!

One to investigate further, definitely. To be honest, I've never even
played with FastCGI (still an Apache mod_php user, need to
progress....), looks like I have some fun ahead of me!

--
Mark Rogers // More Solutions Ltd (Peterborough Office) // 0844 251 1450
Registered in England (0456 0902) @ 13 Clarke Rd, Milton Keynes, MK1 1LG



Re: Nginx startup scripts

I wrote that. :)

You can use the adaptive spawning in php 5.3 to try to craft a way to give each user a process to start at least and then a quota for each user and let them play in that. That's basically the best suggestion I personally have. And just script in something to add to the config for each user and reload php-fpm.

On Jun 28, 2010, at 9:16 AM, Mark Rogers <mark@quarella.co.uk> wrote:

> On 28/06/10 16:57, Michael Shadle wrote:
>> Use php-fpm and setup one pool per user. That's all the suexec you need :)
>>
>
> Should I be concerned by the comment on their website:
>
> "It was not designed with virtual hosting in mind (large amounts of pools) however it can be adapted for any usage model."
>
> --
> Mark Rogers // More Solutions Ltd (Peterborough Office) // 0844 251 1450
> Registered in England (0456 0902) @ 13 Clarke Rd, Milton Keynes, MK1 1LG
>
>


Re: Nginx startup scripts

On 28/06/10 16:57, Michael Shadle wrote:
> Use php-fpm and setup one pool per user. That's all the suexec you need :)
>

Should I be concerned by the comment on their website:

"It was not designed with virtual hosting in mind (large amounts of
pools) however it can be adapted for any usage model."

--
Mark Rogers // More Solutions Ltd (Peterborough Office) // 0844 251 1450
Registered in England (0456 0902) @ 13 Clarke Rd, Milton Keynes, MK1 1LG



Re: Nginx startup scripts

Use php-fpm and setup one pool per user. That's all the suexec you need :)
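
A per-user pool setup might look like the following sketch. Pool names, users, and socket paths are illustrative, not from the thread, and the exact syntax depends on the php-fpm version (the ini style shown is that of the 5.3.3+ releases):

```ini
; php-fpm fragment: one pool per shared-hosting user.
; Each pool runs PHP as its own user, giving suexec-like isolation.

[alice]
user = alice
group = alice
listen = /var/run/php-fpm-alice.sock
pm.max_children = 10

[bob]
user = bob
group = bob
listen = /var/run/php-fpm-bob.sock
pm.max_children = 10
```

Each nginx virtual host then points its fastcgi_pass at the socket of the pool owned by that site's user.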

On Jun 28, 2010, at 1:26 AM, Mark Rogers <mark@quarella.co.uk> wrote:

> On 27/06/10 15:10, Rahul Bansal wrote:
>>> > I note from the documentation that it is fairly simple to run multiple
>>> > instances of nginx behind a proxy to allow different virtual hosts to be
>>> > managed as different users (to prevent code on one site having read/write
>>> > access to other sites).
>>>
>> Can you please share a link to the page where you found that info?
>> I have been thinking for a long time about replacing Apache with nginx in a
>> shared-hosting environment.
>>
>
> I may have overstated the "fairly simple" as I haven't found specific documentation, but I was referring to the comments at
> http://wiki.nginx.org/NginxFaq
> (extract below). I would like to see an example configuration myself if anyone can point me in the right direction.
>
> ---
> *Is support for chroot planned?*
>
> Unknown at this time. Unless/until that changes, you can achieve a similar - or better - effect by using OS-level features (e.g. BSD Jails, OpenVZ w/ proxyarp on Linux, etc.).
>
> *What about support for something like mod_suexec? What about support for something like mod_suexec?*
>
> mod_suexec is a solution to a problem that Nginx does not have. When running servers such as Apache, each instance consumes a significant amount of RAM, so it becomes important to only have a monolithic instance that handles all one's needs. With Nginx, the memory and CPU utilization is so low that running dozens of instances of it is not an issue.
>
> A comparable Nginx setup to Apache + mod_suexec is to run a separate instance of Nginx as the CGI script user (i.e. the user that would have been specified as suexec user under Apache), and then proxy to that from the main Nginx instance.
>
> Alternatively, PHP could simply be executed through FastCGI, which itself would be running under a CGI script user account. (Note that mod_php - the module suexec is normally utilized to defend against - does not exist with Nginx.)
> ---
>
>
> --
> Mark Rogers // More Solutions Ltd (Peterborough Office) // 0844 251 1450
> Registered in England (0456 0902) @ 13 Clarke Rd, Milton Keynes, MK1 1LG
>
>


Re: location case sensitivity

Hello,

I would need a more general solution to this problem (and I need to keep
my filesystem case-sensitive). Sometimes, owners of websites use caps to
make their URLs more readable. For instance, they would write "website:
www.gallery.com/TheArtClub.html" on their business card. Of course, it
is sometimes difficult to anticipate their usage of capital letters, and
I find it difficult to explain to them that they should never use capital
letters in print materials...

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,103025,103153#msg-103153



Re: location case sensitivity

On 28/06/10 12:18, Igor Sysoev wrote:
> Unix file systems are case sensitive (except MacOS Extended file system).
> if you want to handle only several files in this way, then:
>
> location = /GO.html {
> alias /path/to/go.html;
> }
>

I've never tried this, but there is the "Case Insensitivity On Purpose
File System", ciopfs:
http://www.brain-dump.org/projects/ciopfs/

It's a FUSE layer on top of your existing filesystem so may be an
option. It's in the Ubuntu repos, not sure about others.

--
Mark Rogers // More Solutions Ltd (Peterborough Office) // 0844 251 1450
Registered in England (0456 0902) @ 13 Clarke Rd, Milton Keynes, MK1 1LG



Re: location case sensitivity

On Sun, Jun 27, 2010 at 05:17:43PM -0400, JCR wrote:

> Hello
>
> On a centos 5 box running the latest nginx, I am struggling with case
> sensitivity:
> I have in root the file go.html
> and I want a request for the file GO.html to serve the file go.html
>
> I thought that
> something like
> [code]
> 45 location ~* / {
> 46 index index.html index.htm;
> 47 }
> [/code]
> would do the trick but it doesn't.
>
> What are the main strategies to achieve this result?

Unix file systems are case sensitive (except MacOS Extended file system).
If you want to handle only a few files in this way, then:

location = /GO.html {
alias /path/to/go.html;
}
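
For the more general case raised in this thread (any mixed-case URI should map to its lowercase file), one option is the embedded Perl module, assuming nginx was built with --with-http_perl_module; a sketch:

```nginx
http {
    # Lowercase the request URI via embedded Perl.
    perl_set $lowercase_uri 'sub {
        my $r = shift;
        return lc($r->uri);
    }';

    server {
        listen 80;

        location / {
            # Redirect mixed-case requests to their lowercase form;
            # already-lowercase URIs fall through untouched.
            if ($uri != $lowercase_uri) {
                rewrite ^ $lowercase_uri permanent;
            }
            root /path/to/root;
        }
    }
}
```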


--
Igor Sysoev
http://sysoev.ru/en/


Video files truncated

Hi,

Some issues regarding video playback were reported to me. The MP4 videos are delivered by nginx. Some clients are complaining because videos stop after 5 seconds. Our videos are at least 50s long. I cannot reproduce this issue, even on an old PC with a 3G connection (slow bandwidth).

I guess the video stops because the connection is closed by the server, but I do not understand why. When checking my configuration, I found "send_timeout 5;". Could this be the cause?
Thanks a lot

Axel
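
For reference, send_timeout bounds the time between two successive write operations to the client, not the whole transfer; a client that stalls while buffering a long MP4 over 3G can easily exceed 5 seconds, after which nginx closes the connection. If that is the cause, raising it is a one-line change (the value here is illustrative):

```nginx
# Allow slow clients to stall for up to 5 minutes between
# successful writes before the connection is dropped.
send_timeout 5m;
```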

Re: Nginx startup scripts

On 27/06/10 15:10, Rahul Bansal wrote:
>> > I note from the documentation that it is fairly simple to run multiple
>> > instances of nginx behind a proxy to allow different virtual hosts to be
>> > managed as different users (to prevent code on one site having read/write
>> > access to other sites).
>>
> Can you please share a link to the page where you found that info?
> I have been thinking for a long time about replacing Apache with nginx in a
> shared-hosting environment.
>

I may have overstated the "fairly simple" as I haven't found specific
documentation, but I was referring to the comments at
http://wiki.nginx.org/NginxFaq
(extract below). I would like to see an example configuration myself if
anyone can point me in the right direction.

---
*Is support for chroot planned?*

Unknown at this time. Unless/until that changes, you can achieve a
similar - or better - effect by using OS-level features (e.g. BSD Jails,
OpenVZ w/ proxyarp on Linux, etc.).

*What about support for something like mod_suexec? What about support
for something like mod_suexec?*

mod_suexec is a solution to a problem that Nginx does not have. When
running servers such as Apache, each instance consumes a significant
amount of RAM, so it becomes important to only have a monolithic
instance that handles all one's needs. With Nginx, the memory and CPU
utilization is so low that running dozens of instances of it is not an
issue.

A comparable Nginx setup to Apache + mod_suexec is to run a separate
instance of Nginx as the CGI script user (i.e. the user that would have
been specified as suexec user under Apache), and then proxy to that from
the main Nginx instance.

Alternatively, PHP could simply be executed through FastCGI, which
itself would be running under a CGI script user account. (Note that
mod_php - the module suexec is normally utilized to defend against -
does not exist with Nginx.)
---


--
Mark Rogers // More Solutions Ltd (Peterborough Office) // 0844 251 1450
Registered in England (0456 0902) @ 13 Clarke Rd, Milton Keynes, MK1 1LG



Sunday, June 27, 2010

location case sensitivity

Hello

On a centos 5 box running the latest nginx, I am struggling with case
sensitivity:
I have in root the file go.html
and I want a request for the file GO.html to serve the file go.html

I thought that
something like
[code]
45 location ~* / {
46 index index.html index.htm;
47 }
[/code]
would do the trick but it doesn't.

What are the main strategies to achieve this result?

thank you!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,103025,103025#msg-103025



how to clear cache

Hi nginx

We are running a RoR + nginx + mongrel_cluster server, and the cache is not being cleared. The same setup works fine on an Apache server.

How can we resolve this problem?



Thanks and Regards,
R.Karthik

Re: Nginx startup scripts

Hi Mark,

> I note from the documentation that it is fairly simple to run multiple
> instances of nginx behind a proxy to allow different virtual hosts to be
> managed as different users (to prevent code on one site having read/write
> access to other sites).

Can you please share a link to the page where you found that info?
I have been thinking for a long time about replacing Apache with nginx in a
shared-hosting environment.

Thanks,
-Rahul

On Thu, Jun 24, 2010 at 9:04 PM, Mark Rogers <mark@quarella.co.uk> wrote:
> On 24/06/10 15:25, Igor Sysoev wrote:
>>
>> You can run "nginx -t" before applying configuration: it catches almost
>> all
>> possible errors except some fatal errors: no memory, files, etc.
>> If you send a -HUP signal to reconfigure and the new configuration is bad,
>> then nginx continues to run with the old configuration, provided no fatal
>> errors happen. The SSL-certificate-without-key case is not a fatal error.
>>
>
> I don't think I could have hoped for a better answer - thank you very much!
>
> I note from the documentation that it is fairly simple to run multiple
> instances of nginx behind a proxy to allow different virtual hosts to be
> managed as different users (to prevent code on one site having read/write
> access to other sites). Is this the best way to achieve this, and if so how
> easy is it to set up? (Eg: do the startup scripts support it, similar to how
> MySQL's mysqld_multi startup script do?)
>
> It looks like I will be setting up a test server to see how I can migrate my
> Apache configuration to nginx. My existing virtual host is using Ubuntu
> 8.04, but this has only nginx_0.6.35. Ubuntu 10.04 only has nginx_0.7.65. I
> don't really want to roll my own (because I prefer to have a repository that
> I can trust to keep on top of security updates). So, what is the best way
> forward for me?
>
> --
> Mark Rogers // More Solutions Ltd (Peterborough Office) // 0844 251 1450
> Registered in England (0456 0902) @ 13 Clarke Rd, Milton Keynes, MK1 1LG
>
>
>


Re: Optimizing worker_processes, worker_connections & PHP_FCGI_CHILDREN - Any Good Tutorial?

Hi Eric,

> Yes! I am wondering if they all work together, and hoping it will not
> undermine its operation.

I decided NOT to use memcache. Memcache is good for a distributed environment.

I am using APC for php opcode cache and user-data as well.
In case of w3 total cache plugin, APC can also be used as page-cache
instead of hard-disk.

The only thing I am still confused about is nginx fastcgi_cache.
In theory, from what I have read, if the nginx fastcgi_cache serves a
request, no load should be forwarded to php-fpm or php-fastcgi (in
short, the PHP part).
I guess that with APC, by using it as a page cache, we achieve the
same thing. Not sure though!

-Rahul


On Sun, Jun 27, 2010 at 7:15 AM, ericdeko411 <nginx-forum@nginx.us> wrote:
> rahul286 Wrote:
> -------------------------------------------------------
>> Thanks All.
>> Moved to PHP-FPM and also removed (purged)
>> unwanted php-extensions.
>>
>> I am using php 5.3.2 and I read somewhere that pm
>> = dynamic feature should be used in PHP-FPM only
>> if PHP > 5.3.3.
>>
>> Should I wait , or go ahead with Reinis Rozitis
>> settings...
>>
>> > pm = dynamic
>> > pm.max_children = 70
>> > pm.start_servers = 20
>> > pm.min_spare_servers = 5
>> > pm.max_spare_servers = 20
>> > pm.max_requests = 1000
>>
>> I am also thinking of using nginx fastcgi_cache
>>
>> But started to feel I will be having too many
>> cache levels after that....
>>
>> nginx fastcgi_cache, APC, memcache, and WordPress
>> caching plugins.
>>
>> Will they work all together smoothly or will
>> become counter-productive?
>>
>> Any suggestions?
>>
>> Thanks again,
>> -Rahu
>
> Yes! I am wondering if they all work together, and hoping it will not
> undermine its operation.
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,98498,102818#msg-102818
>
>
>


Saturday, June 26, 2010

Re: Optimizing worker_processes, worker_connections & PHP_FCGI_CHILDREN - Any Good Tutorial?

rahul286 Wrote:
-------------------------------------------------------
> Thanks All.
> Moved to PHP-FPM and also removed (purged)
> unwanted php-extensions.
>
> I am using php 5.3.2 and I read somewhere that pm
> = dynamic feature should be used in PHP-FPM only
> if PHP > 5.3.3.
>
> Should I wait , or go ahead with Reinis Rozitis
> settings...
>
> > pm = dynamic
> > pm.max_children = 70
> > pm.start_servers = 20
> > pm.min_spare_servers = 5
> > pm.max_spare_servers = 20
> > pm.max_requests = 1000
>
> I am also thinking of using nginx fastcgi_cache
>
> But started to feel I will be having too many
> cache levels after that....
>
> nginx fastcgi_cache, APC, memcache, and WordPress
> caching plugins.
>
> Will they work all together smoothly or will
> become counter-productive?
>
> Any suggestions?
>
> Thanks again,
> -Rahu

Yes! I am wondering if they all work together, and hoping it will not
undermine its operation.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,98498,102818#msg-102818



Re: Optimizing worker_processes, worker_connections & PHP_FCGI_CHILDREN - Any Good Tutorial?

Thanks All.
Moved to PHP-FPM and also removed (purged) unwanted php-extensions.

I am using php 5.3.2 and I read somewhere that pm = dynamic feature
should be used in PHP-FPM only if PHP > 5.3.3.

Should I wait , or go ahead with Reinis Rozitis settings...

> pm = dynamic
> pm.max_children = 70
> pm.start_servers = 20
> pm.min_spare_servers = 5
> pm.max_spare_servers = 20
> pm.max_requests = 1000

I am also thinking of using nginx fastcgi_cache

But I am starting to feel I will have too many cache levels after
that....

nginx fastcgi_cache, APC, memcache, and WordPress caching plugins.

Will they work all together smoothly or will become counter-productive?

Any suggestions?

Thanks again,
-Rahul

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,98498,102792#msg-102792



possible to rewrite based on referring url?

here is an example called url:
http://myserver.com/index.php?id=15

I would like to change the id= based on referrer....

so I will have a list of referers....

if referer url contains moo/cow.php then rewrite to:
http://myserver.com/index.php?id=15

if referer url contains milk/cookies.php then rewrite to:
http://myserver.com/index.php?id=16

Thanks,

-Eric
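
A sketch of one way to do this with "if" against $http_referer, using an internal rewrite rather than an external redirect so the unchanged Referer header cannot cause a redirect loop. The paths and ids come from the question; the backend address and fastcgi setup are illustrative:

```nginx
server {
    listen      80;
    server_name myserver.com;

    location = /index.php {
        # Pick an id based on a substring of the Referer header.
        set $ref_id "";
        if ($http_referer ~ moo/cow\.php)      { set $ref_id 15; }
        if ($http_referer ~ milk/cookies\.php) { set $ref_id 16; }

        # When a referrer matched, override the id argument
        # internally before handing the request to PHP.
        if ($ref_id) {
            rewrite ^ /index.php?id=$ref_id? break;
        }

        fastcgi_pass 127.0.0.1:9000;
        include fastcgi_params;
    }
}
```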

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,102778,102778#msg-102778



Re: error in nginx-0.8.42: [emerg]: mkdir() "/usr/local/nginx/uwsgi_temp"failed

Hi,

> As Debian does not use /usr/local/nginx/, it seems that something
> named UWSGI is not following the file placing rules as it should.

uWSGI (uwsgi?) follows the same rules as the client_body/FastCGI/log/proxy paths.
This means that starting with nginx-0.8.41 you should add
"--http-uwsgi-temp-path" to the ./configure options and starting with
nginx-0.8.42 you should also add "--http-scgi-temp-path".
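
Concretely, for a packaging layout that keeps temp files under /var/lib/nginx (the Debian convention; the exact paths should match the package being rebuilt), the configure invocation would gain flags along these lines:

```shell
./configure \
    --conf-path=/etc/nginx/nginx.conf \
    --http-client-body-temp-path=/var/lib/nginx/body \
    --http-proxy-temp-path=/var/lib/nginx/proxy \
    --http-fastcgi-temp-path=/var/lib/nginx/fastcgi \
    --http-uwsgi-temp-path=/var/lib/nginx/uwsgi \
    --http-scgi-temp-path=/var/lib/nginx/scgi
```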

Best regards,
Piotr Sikora < piotr.sikora@frickle.com >



Re: error in nginx-0.8.42: [emerg]: mkdir() "/usr/local/nginx/uwsgi_temp" failed

On Sat, Jun 26, 2010 at 11:24 AM, M. Alan <varia@e-healthexpert.org> wrote:
> Up until (at least) nginx-0.8.37 I was able to rebuild the nginx
> Ubuntu package without any problems (I have previously posted in this list
> the procedure to rebuild a Debian/Ubuntu package).
> Using the nginx-0.8.42.tar.gz sources I am able to build
> a nginx_0.8.42-0ubuntu1_i386.deb package that, once installed, gives
> the following error while trying to run nginx:
> sudo /usr/sbin/nginx -t
> the configuration file /etc/nginx/nginx.conf syntax is ok
> [emerg]: mkdir() "/usr/local/nginx/uwsgi_temp" failed (2: No such file
>  or directory) configuration file /etc/nginx/nginx.conf test failed
> As Debian does not use /usr/local/nginx/, it seems that something
> named UWSGI is not following the file placing rules as it should.

Looks like you forgot to pass the correct --http-uwsgi-temp-path to the configure script.

--
Boris Dolgov.


error in nginx-0.8.42: [emerg]: mkdir() "/usr/local/nginx/uwsgi_temp" failed

Up until (at least) nginx-0.8.37 I was able to rebuild the nginx Ubuntu package without any problems (I have previously posted in this list the procedure to rebuild a Debian/Ubuntu package).

Using the nginx-0.8.42.tar.gz sources I am able to build a nginx_0.8.42-0ubuntu1_i386.deb package that, once installed, gives the following error while trying to run nginx:

sudo /usr/sbin/nginx -t
the configuration file /etc/nginx/nginx.conf syntax is ok
[emerg]: mkdir() "/usr/local/nginx/uwsgi_temp" failed (2: No such file
 or directory) configuration file /etc/nginx/nginx.conf test failed

As Debian does not use /usr/local/nginx/, it seems that something named UWSGI is not following the file placing rules as it should.

M.

Friday, June 25, 2010

Re: send() function failing kills the worker process

Hello!

On Thu, Jun 24, 2010 at 05:46:31PM -0400, cjt72 wrote:

> I'm making an nginx module to communicate with my database.
>
> I'm trying to use the C send() function in my handler, but it seems that
> whenever send() fails it kills the whole process as opposed to just
> returning a -1.
>
> In the error log I get "error: network error".
>
> Is there any way I can suppress this behavior?

Calling the C function send() never results in exit() on error; most
likely you call exit() somewhere yourself, either directly or via
something like BSD's err() wrapper. Try posting your code if in
doubt.

Maxim Dounin


Thursday, June 24, 2010

send() function failing kills the worker process

Hi all,

I'm making an nginx module to communicate with my database.

I'm trying to use the C send() function in my handler, but it seems that
whenever send() fails it kills the whole process as opposed to just
returning a -1.

In the error log I get "error: network error".

Is there any way I can suppress this behavior?

nginx version: nginx/0.8.39
built by gcc 4.2.1 (Apple Inc. build 5659)
configure arguments: --add-module=../nginx-gridfs --with-debug

nginx.conf:

user nobody;
worker_processes 1;

error_log /usr/local/nginx/log debug;

events {
worker_connections 1024;
}


http {
include mime.types;
default_type application/octet-stream;

sendfile on;

keepalive_timeout 65;

gzip on;

server {
listen 80;
server_name localhost;

location / {
root html;
index index.html index.htm;
}

error_page 404 /404.html;

error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}

location /gridfs/_id/ {
gridfs_db test;
}
}
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,102050,102050#msg-102050



Re: My way to force cache expire

On 06/24/2010 05:43 PM, Piotr Sikora wrote:
> It does, but only because your backend doesn't set cache headers for
> logged-in users, otherwise all your users would see the same page.

I forgot to mention that I set X-Accel-Expires header to 0 for backend
response for logged in users.

--
Simone Fumagalli


Re: My way to force cache expire

Hi,

> Everything works, but now I'm looking for a way to delete cached pages when
> they are updated from the backend.
> I thought I could make an HTTP GET from my backend, with a particular
> cookie (backend_cookie), to the URL of the page I want to update.
> The backend is configured so that its HTTP requests pass through the proxy
> and are treated like external requests.
>
> So, do you think this setup can work ?

Please check ngx_cache_purge module:
http://labs.frickle.com/nginx_ngx_cache_purge/

Best regards,
Piotr Sikora < piotr.sikora@frickle.com >



Re: My way to force cache expire

Hi,

> It does. It's in production now and it works perfectly.
> I have doubts about the second configuration

It does, but only because your backend doesn't set cache headers for
logged-in users, otherwise all your users would see the same page.

You should really use "proxy_no_cache" for such things:
proxy_no_cache $cookie_my_app_cookie;
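
Applied to the configuration from the original post, the location reduces to something like this sketch ($cookie_my_app_cookie is non-empty exactly when the login cookie is present; proxy_cache_bypass additionally keeps logged-in users from being served pages that were cached for anonymous visitors, assuming a version that supports both directives):

```nginx
location / {
    proxy_cache        MYAPP_HTML_CACHE;
    proxy_cache_key    "$scheme://$host$request_uri";

    # A non-empty cookie value disables both storing (no_cache)
    # and serving from cache (bypass) for logged-in users.
    proxy_no_cache     $cookie_my_app_cookie;
    proxy_cache_bypass $cookie_my_app_cookie;

    proxy_pass http://ALL_backend;
}
```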

Best regards,
Piotr Sikora < piotr.sikora@frickle.com >



Re: Nginx startup scripts

On 24/06/10 15:25, Igor Sysoev wrote:
> You can run "nginx -t" before applying configuration: it catches almost all
> possible errors except some fatal errors: no memory, files, etc.
> If you send a -HUP signal to reconfigure and the new configuration is bad,
> then nginx continues to run with the old configuration, provided no fatal
> errors happen. The SSL-certificate-without-key case is not a fatal error.
>

I don't think I could have hoped for a better answer - thank you very much!

I note from the documentation that it is fairly simple to run multiple
instances of nginx behind a proxy to allow different virtual hosts to be
managed as different users (to prevent code on one site having
read/write access to other sites). Is this the best way to achieve this,
and if so how easy is it to set up? (Eg: do the startup scripts support
it, similar to how MySQL's mysqld_multi startup script do?)

It looks like I will be setting up a test server to see how I can
migrate my Apache configuration to nginx. My existing virtual host is
using Ubuntu 8.04, but this has only nginx_0.6.35. Ubuntu 10.04 only has
nginx_0.7.65. I don't really want to roll my own (because I prefer to
have a repository that I can trust to keep on top of security updates).
So, what is the best way forward for me?

--
Mark Rogers // More Solutions Ltd (Peterborough Office) // 0844 251 1450
Registered in England (0456 0902) @ 13 Clarke Rd, Milton Keynes, MK1 1LG



Re: My way to force cache expire

On 06/24/2010 05:18 PM, Ryan Malayter wrote:
> I don't think this configuration is doing what you want.

It does. It's in production now and it works perfectly.
I have doubts about the second configuration

--
Simone Fumagalli


Re: My way to force cache expire

On Thu, Jun 24, 2010 at 10:04 AM, Simone fumagalli
<simone.fumagalli@contactlab.com> wrote:
> I have nginx in front of my CMS; it caches requests without a cookie (anonymous visitors), while other requests (logged-in users) are passed to the backend.
>
> The conf looks like this (only the relevant parts):
>
> ---------------------------------------------------------------------
>
> proxy_cache_path /usr/local/www/cache/myapp/html levels=1:2 keys_zone=MYAPP_HTML_CACHE:10m inactive=30m max_size=2g;
>
> server {
>
>  server_name www.myapp.com;
>  listen 111.222.333.444:80;
>
>  proxy_cache_key "$scheme://$host$request_uri";
>  proxy_cache_valid 200 20m;
>
>  proxy_redirect     off;
>  proxy_set_header   Host             $host;
>  proxy_set_header   X-Real-IP        $remote_addr;
>  proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
>
>  proxy_temp_path /usr/local/tmp;
>
>  location / {
>
>        # If logged in, don't cache.
>        if ($http_cookie ~* "my_app_cookie" ) {
>                set $do_not_cache 1;
>        }
>
>        proxy_cache_key "$scheme://$host$request_uri$do_not_cache";
>        proxy_cache MYAPP_HTML_CACHE;
>        proxy_pass http://ALL_backend;
>
>  }
>
> }
>

I don't think this configuration is doing what you want: all
logged-in users will get the same cached data, since the
proxy_cache_key will be the same for every logged-in user.

One way might be to use something like the following (assuming
proxy_cache will accept a variable - I haven't tested):

location / {
    # If logged in, don't cache.
    set $mycache MYAPP_HTML_CACHE;
    if ($http_cookie ~* "my_app_cookie" ) {
        set $mycache off;
    }
    proxy_cache_key "$scheme://$host$request_uri";
    proxy_cache $mycache;
    proxy_pass http://ALL_backend;
}

--
RPM


My way to force cache expire

I have Nginx in front of my CMS; it caches requests without a cookie (anonymous visitors), while other requests (logged-in users) are passed to the backend.

The conf looks like this (only the relevant parts):

---------------------------------------------------------------------

proxy_cache_path /usr/local/www/cache/myapp/html levels=1:2 keys_zone=MYAPP_HTML_CACHE:10m inactive=30m max_size=2g;

server {

    server_name www.myapp.com;
    listen 111.222.333.444:80;

    proxy_cache_key "$scheme://$host$request_uri";
    proxy_cache_valid 200 20m;

    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    proxy_temp_path /usr/local/tmp;

    location / {

        # If logged in, don't cache.
        if ($http_cookie ~* "my_app_cookie" ) {
            set $do_not_cache 1;
        }

        proxy_cache_key "$scheme://$host$request_uri$do_not_cache";
        proxy_cache MYAPP_HTML_CACHE;
        proxy_pass http://ALL_backend;

    }

}

---------------------------------------------------------------------


Everything works, but now I'm looking for a way to delete cached pages when they are updated on the backend.
I thought I could make an HTTP GET from my backend, with a particular cookie (backend_cookie), to the URL of the page I want to update.
The backend is configured so that its HTTP requests pass through the proxy and are treated like external requests.

So, do you think this setup can work?

I've added only the 3 lines starting with #:

---------------------------------------------------------------------

proxy_cache_path /usr/local/www/cache/myapp/html levels=1:2 keys_zone=MYAPP_HTML_CACHE:10m inactive=30m max_size=2g;

server {

    server_name www.myapp.com;
    listen 111.222.333.444:80;

    proxy_cache_key "$scheme://$host$request_uri";
    proxy_cache_valid 200 20m;

    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    proxy_temp_path /usr/local/tmp;

    location / {

        # Do not cache for logged-in users
        if ($http_cookie ~* "my_app_cookie" ) {
            set $do_not_cache 1;
        }

        # if ($http_cookie ~* "my_backend_cookie" ) {
        #     proxy_cache_valid 200 0m;
        # }

        proxy_cache_key "$scheme://$host$request_uri$do_not_cache";
        proxy_cache MYAPP_HTML_CACHE;
        proxy_pass http://ALL_backend;

    }

}

---------------------------------------------------------------------
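
For what it's worth, expiry could also be done from the backend without any extra nginx config, sketched below under some assumptions (the backend has shell access to the cache directory and md5sum is available): nginx names each cache file after the md5 of the proxy_cache_key and nests it according to the levels= parameter (here levels=1:2), so the file for one URL can be removed directly.

```shell
#!/bin/sh
# Sketch: compute the on-disk cache file for one URL so it can be removed.
# CACHE_DIR matches the proxy_cache_path above; for anonymous (cached)
# requests the $do_not_cache part of the key is empty, so the key is
# simply "$scheme://$host$request_uri".
CACHE_DIR=${CACHE_DIR:-/usr/local/www/cache/myapp/html}

cache_file_for() {
    key=$1                                    # e.g. http://www.myapp.com/page
    hash=$(printf '%s' "$key" | md5sum | cut -d' ' -f1)
    l1=$(printf '%s' "$hash" | cut -c32)      # levels=1:2 -> last char...
    l2=$(printf '%s' "$hash" | cut -c30-31)   # ...then the previous two
    printf '%s/%s/%s/%s\n' "$CACHE_DIR" "$l1" "$l2" "$hash"
}

# Usage (hypothetical URL):
#   rm -f "$(cache_file_for 'http://www.myapp.com/some/page')"
```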

Let me know

--
Simone Fumagalli


Re: Nginx startup scripts

On Thu, Jun 24, 2010 at 03:15:26PM +0100, Mark Rogers wrote:

> I'm considering nginx as an upgrade from Apache, on a virtual server
> with many virtual hosts. I have no nginx experience (yet!)
>
> I know that there are many reasons why nginx is likely better than
> Apache in my environment, but on the other hand "if it isn't broken,
> don't fix it". However, there is one aspect that I consider broken, so
> if nginx handles things differently that could be the reason to switch.
>
> With Apache, if the config file has an error in it, Apache will error
> and stop. The config test doesn't catch all possible errors (eg I had a
> situation where an SSL certificate was updated without the SSL key, and
> the config test showed no problems, but a config reload took the server
> offline for several minutes while the problem was resolved).
>
> How robust is nginx?
>
> To me, it seems simple: there should be startup scripts that start the
> server, and roll-back to a known-good config if the current config
> fails. But whatever the method, I'm looking for a server which has
> considered this and found a solution to it.
>
> Note: I am looking to use a distro package (to make maintenance easy) so
> I'm not really looking for custom scripts, although that's not being
> ruled out. Distro will likely be Ubuntu server.

You can run "nginx -t" before applying a configuration: it catches almost all
possible errors, except some fatal ones (out of memory, missing files, etc.).
If you send a -HUP signal to reconfigure and the new configuration is bad,
then nginx continues to run with the old configuration, provided no fatal
error happens. An SSL certificate without its key is not a fatal error.
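
The test-then-reload step can be wrapped in a small script; a sketch (NGINX and PIDFILE are placeholders for a particular install, not part of any packaged init script):

```shell
#!/bin/sh
# Reload nginx only if the new configuration passes "nginx -t";
# otherwise leave the running master (and its old config) untouched.
NGINX=${NGINX:-nginx}
PIDFILE=${PIDFILE:-/var/run/nginx.pid}

safe_reload() {
    if "$NGINX" -t; then
        kill -HUP "$(cat "$PIDFILE")"         # graceful reconfigure
    else
        echo "config test failed; keeping old configuration" >&2
        return 1
    fi
}
```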


--
Igor Sysoev
http://sysoev.ru/en/


Nginx startup scripts

I'm considering nginx as an upgrade from Apache, on a virtual server
with many virtual hosts. I have no nginx experience (yet!)

I know that there are many reasons why nginx is likely better than
Apache in my environment, but on the other hand "if it isn't broken,
don't fix it". However, there is one aspect that I consider broken, so
if nginx handles things differently that could be the reason to switch.

With Apache, if the config file has an error in it, Apache will error
and stop. The config test doesn't catch all possible errors (e.g. I had a
situation where an SSL certificate was updated without the SSL key, and
the config test showed no problems, but a config reload took the server
offline for several minutes while the problem was resolved).

How robust is nginx?

To me, it seems simple: there should be startup scripts that start the
server, and roll-back to a known-good config if the current config
fails. But whatever the method, I'm looking for a server which has
considered this and found a solution to it.

Note: I am looking to use a distro package (to make maintenance easy) so
I'm not really looking for custom scripts, although that's not being
ruled out. Distro will likely be Ubuntu server.

--
Mark Rogers // More Solutions Ltd (Peterborough Office) // 0844 251 1450
Registered in England (0456 0902) @ 13 Clarke Rd, Milton Keynes, MK1 1LG



Re: Disable Cookies Caching?

Hi,

I tested this a while ago, for guest users only.
The way I did it (I'm not using it anymore) was to create a plugin in
global start with code similar to this:

vbsetcookie('usercache', $vbulletin->userinfo['userid'], permanent);

Now you should have a new cookie named bbusercache to play with in
proxy_cache_key, as indicated before.

Note that forums are very dynamic and I didn't do any testing.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,101466,101899#msg-101899



Re: Nginx and X-Accel-Redirect to serve Quicktime streaming prepared movies

Fernando Perez wrote:
>> I can't make it yet work with mp4 files (hmmm iphone)
> Argh! So stupid! There is the mp4 module for that!
Hmm, it seems the module is made for Flash Player.

Anyway, I tested YouTube's HTML5 feature (H.264 videos) and we cannot
scrub the timeline outside of the downloaded zone, so I guess it's a
limitation of QuickTime 7.6.6. Maybe it works with QuickTime X and
iPhones, but as I don't have anything to test on, I can't confirm.
--
Posted via http://www.ruby-forum.com/.


Wednesday, June 23, 2010

ngx_lua now has (basic) subrequest support

Hi, guys!

Last night's ngx_lua hackathon proved extremely fruitful.
chaoslawful and I didn't stop coding until midnight, and successfully
finished the first draft of the most tricky bit in ngx_lua, that is,
transparent non-blocking IO interface (or nginx subrequest interface)
on the Lua land.

The following test case is now passing:

  location /other {
      echo "hello, world";
  }

  # transparent non-blocking I/O in Lua
  location /lua {
      content_by_lua '
          local res = ngx.location.capture("/other")
          if res.status == 200 then
              ngx.echo(res.body)
          end';
  }

And on the client side:

   $ curl 'http://localhost/lua'
   hello, world

In the /other location, we can actually have drizzle_pass,
postgres_pass, memcached_pass, proxy_pass, or any other content
handler configuration.

Here's a more amusing "recursive subrequest" example:

  location /recur {
      content_by_lua '
          local num = tonumber(ngx.var.arg_num) or 0
          ngx.echo("num is: ", num, "\\n")

          if num > 0 then
              res = ngx.location.capture("/recur?num=" .. tostring(num - 1))
              ngx.echo("status=", res.status, " ")
              ngx.echo("body=", res.body)
          else
              ngx.echo("end\\n")
          end
          ';
  }

Here's the output on the client side:

    $ curl 'http://localhost/recur?num=3'
    num is: 3
    status=200 body=num is: 2
    status=200 body=num is: 1
    status=200 body=num is: 0
    end

You can checkout the git HEAD of ngx_lua to try out the examples above yourself:

http://github.com/chaoslawful/lua-nginx-module

So...time to replace our PHP code in the business with nginx.conf + Lua scripts!

We'll make the first public release of ngx_lua when its implementation
and API become solid enough ;)

Enjoy!
-agentzh


time related conditions

hi

We are looking for a way to change nginx's behavior when certain
time-related conditions are met. For example, if one source IP requests
more than 10 times a minute, we would like to redirect it somewhere
else; or, if a certain cookie from a client is received more than x
times per y seconds, we would like to deliver an alternative page. Is
there any way nginx can do things like this? If not, I might start to
write a module to provide this kind of behavior, since I think it's
probably not very hard to implement.
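
As a partial data point, the per-IP case looks close to what the stock limit_req module already does (a sketch, only the relevant parts; the zone name and limits are made-up values, and the per-cookie variant would likely still need a custom module):

```nginx
http {
    # one shared zone keyed on the client address, 10 requests/minute each
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/m;

    server {
        location / {
            # allow bursts of up to 5; excess requests are answered with 503
            limit_req zone=perip burst=5;
            # serve an alternative page instead of the bare 503
            error_page 503 /busy.html;
        }
    }
}
```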

thanks,

mauro


Re: Bug: custom error_page doesn't work for HTTP 413 (content too large)

I just discovered that there was more to our stack than I originally
knew. So we have nginx instances layered and I didn't realize I was
hitting another before getting to the one I had configured. Let me
verify this but otherwise assume there's no problem. I'll report back if
the problem still exists, but I'm thinking not. Sorry for imposing my
confusion here.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,2620,101752#msg-101752



Re: Disable Cookies Caching?

On Wed, Jun 23, 2010 at 12:55 AM, MuFei <mufei4u@gmail.com> wrote:
> Hi,
>
> I just set up Nginx as a proxy to Apache for my vBulletin based site.
> Everything went well till I enabled Nginx caching. Once Nginx caching
> is enabled, all users (myself included) are no longer able to log in to
> the forum due to cookie caching. When I disable caching, we are all able
> to log in as usual.
>
> Is there anyway to completely disable caching for cookies/sessions and
> keep caching for anything else?
>
> Here are my settings for Nginx caching:
>
>        log_format cache '***$time_local '
>                                                '$upstream_cache_status '
>                                                'Cache-Control: $upstream_http_cache_control '
>                                                'Expires: $upstream_http_expires '
>                                                '"$request" ($status) '
>                                                '"$http_user_agent" ';
>        access_log  /var/log/nginx/cache.access.log cache;
>        proxy_cache_path  /var/www/cache levels=1:2 keys_zone=my-cache:8m
> max_size=1000m inactive=7d;
>        proxy_temp_path /var/www/cache/tmp;
>        proxy_buffering on;
>        proxy_cache my-cache;
>        proxy_cache_valid  200 302 304 10m;
>        proxy_cache_valid  301 1h;
>        proxy_cache_valid  404 1m;
>        proxy_cache_valid  any 1m;
>
>
> Also although there are already some caching in the folder
> /var/www/cache, the cache.access.log is empty and has no records in
> it, any idea why is that?

It seems proxy_cache_key is set to default, meaning that the cookie is
not included. So everyone will see the same cached responses for GET
requests, whether they log in or not.

Try something like:

proxy_cache_key "$host$request_uri$cookie_sessioncookie";

where "sessioncookie" is replaced with the name of your session cookie.

This still may not be usable, though, as you have things configured to
cache all pages for at least 1 minute. That means users will
potentially make a post and then not see any change in the forum
pages. If the forum pages need to be truly dynamic, caching for logged
in users might not be an option.
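
If logged-in users must always hit the backend, another option (sketched here; the cookie name and upstream name are hypothetical, and your nginx version must have proxy_no_cache/proxy_cache_bypass) is to bypass the cache for them entirely rather than keying on the cookie:

```nginx
location / {
    proxy_cache      my-cache;
    proxy_cache_key  "$host$request_uri";
    # sessions: never store their responses, never answer them from cache
    proxy_no_cache     $cookie_sessioncookie;
    proxy_cache_bypass $cookie_sessioncookie;
    proxy_pass http://apache_backend;
}
```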

--
RPM
