Sunday, January 31, 2010

Re: IPv6 support

任晓磊 at 2010-2-1 15:41 wrote:
> Sorry for typo.
>
> I set "listen 80; listen [::]:80;" , got "[emerg]: bind() to [::]:80
> failed (98: Address already in use)"
>
> I set " listen [::]:80; listen 80;", got "[emerg]: bind() to
> 0.0.0.0:80 failed (98: Address already in use)
http://wiki.nginx.org/NginxHttpCoreModule#listen

"When Linux (in contrast to FreeBSD) binds IPv6 [::], it will also bind
the corresponding IPv4 address."

In your box, I think 'listen [::]:80;' is enough.
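The two cases can be summarised in a short config sketch (directive forms as in the 0.7.x listen syntax used in this thread):

```nginx
# On Linux (bindv6only=0, the default), one wildcard IPv6 listener
# also accepts IPv4 connections via mapped addresses:
listen [::]:80;

# On FreeBSD, or on Linux with net.ipv6.bindv6only=1, bind the two
# address families separately:
listen 80;
listen [::]:80 default ipv6only=on;
```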

--
Weibin Yao


_______________________________________________
nginx mailing list
nginx@nginx.org
http://nginx.org/mailman/listinfo/nginx

Re: IPv6 support

On Mon, Feb 1, 2010 at 2:41 PM, 任晓磊 <julyclyde@gmail.com> wrote:
> Sorry for typo.
>
> I set "listen  80; listen [::]:80;" , got "[emerg]: bind() to [::]:80
> failed (98: Address already in use)"
>
> I set " listen [::]:80; listen  80;", got "[emerg]: bind() to
> 0.0.0.0:80 failed (98: Address already in use)"
>
> In one word, the later listen directive fails.
>

Oh, right. I forgot about the behaviour difference between *BSD/Solaris
and Linux with respect to IPv6 bind.


--
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org


Re: IPv6 support

Sorry for the typo.

I set "listen 80; listen [::]:80;" and got "[emerg]: bind() to [::]:80
failed (98: Address already in use)".

I set "listen [::]:80; listen 80;" and got "[emerg]: bind() to
0.0.0.0:80 failed (98: Address already in use)".

In short, whichever listen directive comes second fails.

2010/2/1 任晓磊 <julyclyde@gmail.com>:
> If I don't specify "default ipv6only=on", I would get
>
> [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
> [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
> [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
> [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
> [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
> [emerg]: still could not bind()
>
> How's this going on?
> --
> Ren Xiaolei
>

--
Ren Xiaolei


Re: IPv6 support

2010/2/1 Edho P Arief <edhoprima@gmail.com>:
> yes, you need to specify both.
>
> listen 80;
> listen [::]:80;
If I don't specify "default ipv6only=on", I get:

[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[emerg]: still could not bind()

What's going on?
--
Ren Xiaolei


Re: IPv6 support

On Mon, Feb 1, 2010 at 2:31 PM, 任晓磊 <julyclyde@gmail.com> wrote:
> The default "listen 80" doesn't order nginx to listen on a IPv6
> address. I cannot understand this.
>
> At last, I use
> listen [::]:80 default ipv6only=on;
> listen 80;
> to order nginx serve on ipv4 and ipv6.
>
> A single "listen [::]:80;" makes it listen only on ipv6 address.
>

Yes, you need to specify both:

listen 80;
listen [::]:80;

That is enough.
--
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org


Re: IPv6 support

The default "listen 80" doesn't make nginx listen on an IPv6
address. I cannot understand this.

In the end, I use
listen [::]:80 default ipv6only=on;
listen 80;
to make nginx serve on both IPv4 and IPv6.

A single "listen [::]:80;" makes it listen only on an IPv6 address.

2010/2/1 任晓磊 <julyclyde@gmail.com>:
> Thank you. I upgrade the deb package to 0.7.24 with ipv6 option. What
> configuration should I set in nginx.conf ?


--
Ren Xiaolei


Re: IPv6 support

On Mon, Feb 1, 2010 at 2:18 PM, 任晓磊 <julyclyde@gmail.com> wrote:
> Thank you. I upgrade the deb package to 0.7.24 with ipv6 option. What
> configuration should I set in nginx.conf ?
>

listen [::]:80;

http://wiki.nginx.org/NginxHttpCoreModule#listen

--
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org


Re: IPv6 support

Thank you. I upgraded the deb package to 0.7.24 with the ipv6 option.
What configuration should I set in nginx.conf?

2010/2/1 Edho P Arief <edhoprima@gmail.com>:
> --with-ipv6
>
> and then set the appropriate configuration
>

--
Ren Xiaolei


Re: IPv6 support

On Mon, Feb 1, 2010 at 1:59 PM, 任晓磊 <julyclyde@gmail.com> wrote:
> I have a VPS that using tunnelbroker service provided by HE.net, and I
> can 'wget -6' to retrieve ipv6 sites' content. Someone told me that he
> cannot access my website served by nginx through IPv6 network, while
> he can access the 22/tcp SSH port, and can ping6 my VPS. Is there any
> compile option to enable IPv6 support?
> My version is 0.6.32-3+lenny3 provied by debian, and nginx -V results:
> ~# nginx  -V
> nginx version: nginx/0.6.32
> configure arguments: --conf-path=/etc/nginx/nginx.conf
> --error-log-path=/var/log/nginx/error.log
> --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock
> --http-log-path=/var/log/nginx/access.log
> --http-client-body-temp-path=/var/lib/nginx/body
> --http-proxy-temp-path=/var/lib/nginx/proxy
> --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --with-debug
> --with-http_stub_status_module --with-http_flv_module
> --with-http_ssl_module --with-http_dav_module
>
>

--with-ipv6

and then set the appropriate configuration

--
O< ascii ribbon campaign - stop html mail - www.asciiribbon.org


Re: ngx_xss: Native support for cross-site scripting in an nginx

On Sat, Jan 30, 2010 at 4:05 AM, W-Mark Kubacki
<wmark+nginx@hurrikane.de> wrote:
> Therefore drizzle and rds_json module (btw, see my issue on Github)

I think I've fixed that compilation issue on x86_64 in the v0.04 release:

http://github.com/agentzh/rds-json-nginx-module/downloads

Could you please confirm the fix? Thanks for the report :)

> seem to me being the main parts. xss would cover the case where the
> blog's (2nd level) domain differs from the one to serve the JSON
> responses.

Yup, indeed :)

>
> Thanks for sharing!
>

You're very welcome :)

Cheers,
-agentzh


IPv6 support

I have a VPS that uses the tunnelbroker service provided by HE.net, and I
can 'wget -6' to retrieve IPv6 sites' content. Someone told me that he
cannot access my website served by nginx over the IPv6 network, while
he can access the 22/tcp SSH port and can ping6 my VPS. Is there any
compile option to enable IPv6 support?
My version is 0.6.32-3+lenny3 provided by Debian, and nginx -V reports:
~# nginx -V
nginx version: nginx/0.6.32
configure arguments: --conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error.log
--pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock
--http-log-path=/var/log/nginx/access.log
--http-client-body-temp-path=/var/lib/nginx/body
--http-proxy-temp-path=/var/lib/nginx/proxy
--http-fastcgi-temp-path=/var/lib/nginx/fastcgi --with-debug
--with-http_stub_status_module --with-http_flv_module
--with-http_ssl_module --with-http_dav_module


--
Ren Xiaolei


rewrite to lowercase?

Can it be done? I need case insensitivity on Ubuntu.
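One possible approach is a sketch like the following, assuming nginx was built with the embedded perl module (--with-http_perl_module); the $uri_lower variable name is invented here for illustration:

```nginx
http {
    # lowercase the request URI with embedded perl
    perl_set $uri_lower 'sub { my $r = shift; return lc($r->uri); }';

    server {
        location / {
            # redirect only when the URI actually contains uppercase
            if ($uri != $uri_lower) {
                rewrite ^ $uri_lower permanent;
            }
        }
    }
}
```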

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,48527,48527#msg-48527



Re: nginx performance test

2010/1/30 yong xue <ultraice@gmail.com>:
> hi, sysoev,
>     for proxy, can nginx give an new option, for example
> client_max_body_size_in_buffer,  it will be served synchronously from
> client
> if client body size is greater than client_max_body_size_in_buffer ?
>

With the current implementation, there is no easy way. ngx_proxy calls
the ngx_http_read_client_request_body function to read the request
body for it, and that function always buffers the input request before
creating the request for the remote upstream server.

Even though technically speaking we *could* do that, I'm afraid it
would make things even worse if the backend server blocks a thread or
a process on slow request processing (as in the Apache prefork MPM).

So I don't think Igor Sysoev will do that *big* refactoring for
something that often has little gain in the real world ;)

Cheers,
-agentzh

QQ 279005114 *grin*


Re: proxy_cache ramdisk

Could you please share your configuration with us?

Thanks,
NextHop

On Mon, Feb 1, 2010 at 11:05 AM, Ryan Malayter <malayter@gmail.com> wrote:
On Friday, January 29, 2010, AMP Admin <admin@ampprod.com> wrote:
> So I was thinking of creating a ramdisk and then pointing
> proxy_cache at the ramdisk… do you think that will be a good combo?

Works fine with the cache dir in /tmp on Ubuntu Linux. Tmpfs is a ram
disk solution.
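As a sketch (the mount point and sizes are illustrative, not from this thread), a dedicated tmpfs cache might look like:

```nginx
# assumes a tmpfs mounted beforehand, e.g.:
#   mount -t tmpfs -o size=256m tmpfs /var/cache/nginx
# keep max_size safely below the tmpfs size
proxy_cache_path /var/cache/nginx levels=1:2
                 keys_zone=ramcache:10m max_size=200m inactive=1h;
```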

> If so, to the people that use proxy_cache, how much space is
> it using on average so I can make it the right size?

Sizing depends entirely on the sites and applications you are proxying.
As a start, look at your sites' log files to see which files are hit
frequently, then add up their sizes.


--
RPM


Re: Question on Proxy_Cache_Path

does temp and cache use the same amount of space?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,1313,48485#msg-48485



Re: proxy_cache ramdisk

On Friday, January 29, 2010, AMP Admin <admin@ampprod.com> wrote:
> So I was thinking of creating a ramdisk and then pointing
> proxy_cache at the ramdisk… do you think that will be a good combo?

Works fine with the cache dir in /tmp on Ubuntu Linux. Tmpfs is a ram
disk solution.

> If so, to the people that use proxy_cache, how much space is
> it using on average so I can make it the right size?

Sizing depends entirely on the sites and applications you are proxying.
As a start, look at your sites' log files to see which files are hit
frequently, then add up their sizes.


--
RPM


Re: using nginx load balance. how to change the http header attribute sort order?

Edit ngx_http_header_filter_module.c, specifically the function

ngx_http_header_filter(ngx_http_request_t *r)



On Mon, Feb 1, 2010 at 09:44, 任晓磊 <julyclyde@gmail.com> wrote:
The order of the headers does NOT matter.

2010/1/31 Taixiang Shi <ealpha@gmail.com>:
> HI, all
>
>    i'm  use nginx load balance.
>
>    the response http header ,  The "Content-Length" is the last line on the
>
>    how to change the http header attribute sort order? like this ?



--
Ren Xiaolei


Re: using nginx load balance. how to change the http header attribute sort order?

The order of the headers does NOT matter.

2010/1/31 Taixiang Shi <ealpha@gmail.com>:
> HI, all
>
>    i'm  use nginx load balance.
>
>    the response http header ,  The "Content-Length" is the last line on the
>
>    how to change the http header attribute sort order? like this ?

--
Ren Xiaolei


RE: proxy_cache ramdisk

I'm able to cache PHP pages with the following, but I can't seem to cache static images with proxy_cache.

This works:

location ~ \.php$ {
    fastcgi_index                index.php;
    fastcgi_pass                 127.0.0.1:9000;
    fastcgi_cache                cachephp;
    fastcgi_cache_key            127.0.0.1:9000$request_uri;
    fastcgi_cache_valid          200 1h;
    include                      fastcgi_params;
    fastcgi_intercept_errors     on;
    fastcgi_ignore_client_abort  on;
    fastcgi_buffer_size          128k;
    fastcgi_buffers              4 128k;
}

 

This does not work:

location ~* \.(jpg|jpeg|gif|css|png|js|ico|tif)$ {
    access_log         off;
    expires            30d;
    proxy_pass         http://127.0.0.1;
    proxy_cache_key    $scheme$host$request_uri;
    proxy_cache        cachestatic;
    proxy_cache_valid  200 1h;
    proxy_cache_valid  404 5m;
    break;
}

 

Using:

fastcgi_temp_path   /etc/nginx/temp_cache;
fastcgi_cache_path  /etc/nginx/cache
                    levels=1:2
                    keys_zone=cachephp:10m
                    inactive=7d
                    max_size=128m;

proxy_temp_path     /etc/nginx/temp_cache;
proxy_cache_path    /etc/nginx/cache
                    levels=1:2
                    keys_zone=cachestatic:10m
                    inactive=7d
                    max_size=128m;

Support for hexadecimal or octal ip format for the proxy_pass module

Hello,
I'd like the proxy_pass module of nginx to support hexadecimal and octal IP formats.
For example, here are the different forms of the nginx forum's IP:
IP address: http://174.36.94.16
Hexadecimal: http://0xae245e10
Octal (dotted): http://0256.044.0136.020
Decimal (undotted): http://2921618960

As you can see, they all work fine in any "civilized" browser (Firefox, etc.).
Thanks in advance for your help.
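These notations are all the same 32-bit address; a short Python sketch (not part of the request, added for illustration) shows the conversions:

```python
import socket
import struct

ip = "174.36.94.16"

# the dotted quad as a 32-bit big-endian integer
n = struct.unpack("!I", socket.inet_aton(ip))[0]

print(hex(n))  # 0xae245e10 -> hexadecimal URL form
print(n)       # 2921618960 -> undotted decimal URL form
print(".".join("0%o" % int(p) for p in ip.split(".")))  # 0256.044.0136.020

# and back: both integer forms map to the same dotted quad
assert socket.inet_ntoa(struct.pack("!I", 0xAE245E10)) == ip
assert socket.inet_ntoa(struct.pack("!I", 2921618960)) == ip
```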

Re: hotlink protection with rewrite

Samfingcul, I did it as other members here advised me. Try it...

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,39501,48098#msg-48098



Re: Protection against massiv requests from single server / ip

2010/1/31 <adk1601@gmx.de>:
>
> What are your setups against a lot of request from single servers?

For larger installations, firewalls or properly configured routers
in front of any servers.

For tiny, home, and experimental setups, iptables [1] with rules such as:
-A INPUT -s 300.300.300.0/24 -j ACCEPT
-A INPUT -m recent --rcheck --seconds 120 --name ATTACKER --rsource -j DROP
-A INPUT -p tcp -m tcp --tcp-flags SYN,RST,ACK SYN -j syn-flood
-A syn-flood -m limit --limit 14/sec --limit-burst 30 -j RETURN
-A syn-flood -j LOG --log-prefix "Firewall: SYN-flood "
-A syn-flood -m recent --set --name ATTACKER --rsource
-A syn-flood -j DROP
... where lots of requests equal a SYN flood.
But beware: someone could exploit these rules by forging source
IPs (see source address validation [2]), and your server still does
work discarding those request packets, so it could become
unresponsive if the request volume is very high (at least take a look
at syncookies [3]).

--
W-Mark Kubacki
http://mark.ossdl.de/

[1] http://www.netfilter.org/
[2] http://tools.ietf.org/wg/savi/
[3] http://en.wikipedia.org/wiki/SYN_cookies


Saturday, January 30, 2010

Re: Protection against massiv requests from single server / ip

On 1/31/10 2:36 AM, adk1601@gmx.de wrote:
> Hello Nginx community,
>
> what is the best way protecting my nginx webserver against massiv request from single server/ips? I made some tests with openload and see one server with openload can fill the whole 100Mbit connection to my server.
>

http://wiki.nginx.org/NginxHttpLimitReqModule

http://wiki.nginx.org/NginxHttpLimitZoneModule

These should do the trick for you.
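A minimal sketch of the two modules together (zone names and thresholds are illustrative, not recommendations):

```nginx
http {
    # at most 10 requests/second per client IP, with a small burst
    limit_req_zone $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
    # track concurrent connections per client IP
    limit_zone conn_per_ip $binary_remote_addr 10m;

    server {
        location / {
            limit_req  zone=req_per_ip burst=20;
            limit_conn conn_per_ip 10;
        }
    }
}
```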


--
Jim Ohlstein


Protection against massiv requests from single server / ip

Hello Nginx community,

what is the best way to protect my nginx webserver against massive requests from a single server/IP? I made some tests with openload and saw that one server running openload can fill the whole 100Mbit connection to my server.

What are your setups against lots of requests from single servers?

Thanks for your help.

Kind regards.
--
Jetzt kostenlos herunterladen: Internet Explorer 8 und Mozilla Firefox 3.5 -
sicherer, schneller und einfacher! http://portal.gmx.net/de/go/atbrowser



using nginx load balance. how to change the http header attribute sort order?

Hi, all

   I'm using nginx for load balancing.

   In the response HTTP headers, "Content-Length" is the last line of the header; see below:

HTTP/1.1 200 OK
Server: nginx/0.8.30
Date: Sun, 31 Jan 2010 04:20:08 GMT
Content-Type: text/html;charset=UTF-8
Connection: keep-alive
Content-Length: 46

   How can I change the sort order of the HTTP header attributes? Like this:

HTTP/1.1 200 OK
Server: nginx/0.8.30
Connection: keep-alive 
Content-Length: 46
Date: Sun, 31 Jan 2010 04:20:08 GMT
Content-Type: text/html;charset=UTF-8

Re: Switching backends based on a cookie

Hi,

> The only option then for sticky sessions is ip_hash, not cookies.

No, it's also possible to direct traffic to particular backend servers using cookies too.

In fact there are more ways of directing traffic to backends/clusters with Nginx than with HAProxy - in the sense of the number of ways of choosing a cluster (which could be just one server) - but AFAIK there are currently fewer ways of hashing/distributing over the servers within a particular cluster of backends in Nginx than in HAProxy (even including the non-core modules).

If you did want high redundancy as well as sticky sessions, though, then you'd probably want to store your key application data in something like memcached and have your backend application query that.
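One way to sketch cookie-based selection (the "backend" cookie name and upstream names are invented, and this assumes an nginx with the map module and $cookie_* variables available):

```nginx
# pick an upstream pool from the "backend" cookie; absent or unknown
# values fall through to the default pool
map $cookie_backend $pool {
    default  app_default;
    app1     app_a;
    app2     app_b;
}

upstream app_default { server 10.0.0.1:8080; server 10.0.0.2:8080; }
upstream app_a       { server 10.0.0.1:8080; }
upstream app_b       { server 10.0.0.2:8080; }

server {
    location / {
        proxy_pass http://$pool;
    }
}
```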

Marcus.

RE: if statement for Content-Length

Mainly I just don't want to cache them.

The site is hit by a massive number of small files and some larger ones; due to the setup I can't separate them (customers are involved). So I want to cache only files under, say, 5-10MB and just let everything else proxy_pass normally.
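Later nginx releases added a proxy_no_cache directive and regular-expression support in map; assuming such a version, the size cut-off could be sketched roughly as follows (the zone name, backend name, and digit threshold are illustrative):

```nginx
# responses whose Content-Length has 8+ digits (>= 10,000,000 bytes,
# roughly 10MB) are served through the proxy but not stored in the cache
map $upstream_http_content_length $skip_cache {
    default        0;
    "~^\d{8,}$"    1;
}

server {
    location / {
        proxy_pass     http://backend;
        proxy_cache    cachezone;
        proxy_no_cache $skip_cache;
    }
}
```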

Kingsley


-----Original Message-----
From: Jérôme Loyet [mailto:jerome@loyet.net]
Sent: Sunday, 31 January 2010 1:56 AM
To: nginx@nginx.org
Subject: Re: if statement for Content-Length

2010/1/30 Kingsley Foreman <kingsley@internode.com.au>
>
> Hi Guys,
>
> I've been trying to work out if this can be done or not with not much luck
>
> I want to see if I can  do something like this
>
> If (Content-Length < 1024){
>        Return 403;
> }
>
> Im guessing this can't be done, however it would get around a caching issue I am having with large files.

What is the caching issue you're having with large files?

>
> Kingsley
>



Random SSL Handshake Errors

We're currently trying to get an nginx proxy connecting to an apache
backend with end-to-end SSL up and running.

Unfortunately, we're randomly receiving 502 Bad Gateway errors from nginx
(I'd say about 10% of the time). We traced it back to a bad SSL
handshake where the nginx server sends back a TLS alert 21 (Decrypt
Error) to the apache server.

Nginx is currently running version 0.8.29 with OpenSSL 0.9.8g, and the
Apache backend is using Apache 1.3.41 and OpenSSL 0.9.8k.

Any help would be greatly appreciated.

Thanks!
--
Posted via http://www.ruby-forum.com/.


Re: Switching backends based on a cookie

2010/1/29 Marcus Clyne <ngx.eugaia@gmail.com>:
> Laurence Rowe wrote:
>>
>> I would take a look at HAProxy which has better support for this use
>> case, allowing for requests to be retried against another server if
>> their associated backend is down.
>>
>
> I would agree that if you're just wanting to do proxying, then HAProxy is
> probably a better way to go, however the above is also possible in Nginx
> using upstreams.

The only option then for sticky sessions is ip_hash, not cookies.

Laurence


RE: proxy_cache ramdisk

Are there any examples of a proxy_cache config?

I see the wiki, but I would like to see some working examples that people are using successfully.

Right now I'm just using php-fpm, nginx, and xCache. Not sure how to get the most out of proxy_cache.

Re: NginxHttpUploadProgressModule constantly hangs

UPDATE: This may not be an nginx issue at all.

I did an experiment using a fixed X-Progress-ID so I could monitor the /progress URL from another computer as I uploaded from my main computer.

It seems the hang-up occurs within the AJAX request.
The AJAX request fails every Xth request, so the progress meter in the browser hangs.
However, if I just use an HTTP meta refresh so that the JavaScript call is made anew every time, then I can get the progress.

Not sure what is hanging up the JavaScript when polling for progress in a loop, but I can work around it.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,47648,47739#msg-47739



Re: hotlink protection with rewrite

@vicky007 - hey, can you please tell me how you did it? Thanks.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,39501,47724#msg-47724



Re: if statement for Content-Length

2010/1/30 Kingsley Foreman <kingsley@internode.com.au>
>
> Hi Guys,
>
> I've been trying to work out if this can be done or not with not much luck
>
> I want to see if I can  do something like this
>
> If (Content-Length < 1024){
>        Return 403;
> }
>
> Im guessing this can't be done, however it would get around a caching issue I am having with large files.

What is the caching issue you're having with large files?

>
> Kingsley
>


Re: NginxHttpUploadProgressModule constantly hangs

PS: The upload does indeed finish successfully, but the progress bar gets stuck after 1.3 to 1.4 megs are uploaded.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,47648,47681#msg-47681



if statement for Content-Length

Hi Guys,

I've been trying to work out whether this can be done, without much luck.

I want to see if I can do something like this:

if (Content-Length < 1024) {
    return 403;
}

I'm guessing this can't be done; however, it would get around a caching issue I'm having with large files.

Kingsley


NginxHttpUploadProgressModule constantly hangs

Hello,

I've been trying to get this module to work for days now.
It works, sort of.

It consistently makes it about 1.3 megs and then freezes.
I've looked at hundreds of nginx.conf files and have set every parameter known to man in that file and in php.ini.
I'm aware of just about every issue out there, I think, at this point.

But the progress bar still freezes at around 1.3 megs. It has made it as far as 3 megs. Sometimes it freezes at 500K.

The other peculiar thing is, when the progress bar freezes, it still seems to be uploading.
Then if I click on a bookmark in Firefox to go to Google, the progress bar makes a sudden jump, correcting itself to its REAL position just before the page changes. The same freeze occurs in IE.

I even downgraded nginx to version 0.6.30, which was the latest version tested with NginxHttpUploadProgressModule.
Still no difference. I'm using FastCGI to PHP 5.3.1.

I have set all the timeouts high, all the max_client_body_size-type settings BIG, etc.

Obviously, since it does work for a bit, most of the params have to be right. But it gets stuck.

I have run out of ideas, but I need to upload 30 meg video files WITH a progress bar.
Otherwise I will have to switch to Apache, and I really don't want to do that.

Heeeeeelp... I'm pulling all my hair out after many days of searching Google for clues and looking at hundreds of nginx conf files.

Any ideas? Anything else I can try?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,47648,47648#msg-47648



Re: nginx performance test

hi, sysoev,

    for proxy, could nginx add a new option, for example
client_max_body_size_in_buffer, so that the body is relayed synchronously from
the client if the client body size is greater than client_max_body_size_in_buffer?

2010/1/30 Dennis J. <dennisml@conversis.de>
Is there a page about performance optimizations in the wiki? If not I think it would be useful to create one so this and other performance related information can be collected there.

Regards,
 Dennis


On 01/29/2010 04:44 PM, 任晓磊 wrote:
Good!
One thing we must do is tuning. Your experience on temp files is useful for me.

2010/1/29 yong xue<ultraice@gmail.com>:
hi,


    last week, I did an nginx performance test.
    Yes, with no surprise, nginx is perfect.
    First, I proxied 15 web hosts behind nginx; the CPU utilization and disk
IO were a little high. This was caused by file downloads and the access log.
After I disabled the access log and made downloads synchronous by setting
proxy_max_temp_file_size to zero, nginx ran with little CPU consumption.
    So I put more web hosts behind nginx; the disk IO became somewhat high
again. This was caused by uploading, so I changed client_body_temp_path
to a tmpfs, and the disk IO disappeared; the bottleneck then became memory capacity.
    Finally, with 50+ web hosts proxied, the nginx host's CPU utilization is
about 30%, client_body_temp_path occupies 4-6G, and the peak throughput
of each network adapter was 400-450M.
    It is a good result. Thank you, sysoev.
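The tuning steps described amount to roughly this configuration (paths are illustrative, not from the message):

```nginx
# drop access logging to cut disk IO
access_log off;

# never spool proxied downloads to temp files on disk
proxy_max_temp_file_size 0;

# keep uploaded request bodies on a tmpfs instead of disk
client_body_temp_path /dev/shm/nginx_body 1 2;
```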


--
Best Regards,

peterxue

QQ:312200

e-mail:ultraice@gmail.com



--
Best Regards,

薛 勇

QQ:312200

e-mail:ultraice@gmail.com
MSN:it@easy-boarding.com

Re: Nginx rewrite help: SEO/Permalink

I think it is a better idea to send all requests into one PHP file:

fastcgi_param SCRIPT_FILENAME /www/xxx/index.php;

and parse the URL with PHP. Use $_SERVER['REQUEST_URI'] to get the URL.

Sorry for my poor English.
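In nginx terms, that suggestion is a catch-all location handing every request to one front controller (the socket address and index.php path are taken from this thread's examples and may need adjusting):

```nginx
location / {
    include       fastcgi_params;
    fastcgi_pass  127.0.0.1:9000;
    # route everything to a single entry point; PHP can then parse
    # $_SERVER['REQUEST_URI'] (e.g. /results/myquery/keyB-keyD-keyK)
    fastcgi_param SCRIPT_FILENAME /www/xxx/index.php;
}
```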

On Sat, Jan 30, 2010 at 12:22 PM, Harish Sundararaj <tuxtoti@gmail.com> wrote:
Hi All,

I need to implement an SEO/permalink-related rewrite rule on nginx. I'm not sure how I would do it.
This is what I want:

I have a list which is something like this:

keyA : a1,a2
keyB : b1,b2
keyC : c1, c2
.
.
.
.
KeyZ : z1,z2

Now the access URL will be  like this: --->   http://example.com/results/myquery/keyB-keyD-keyK
This should translate to :--->  /results?q=myquery&keyvals=b1,b2,d1,d2,k1,k2
The point here to note is I will not know the number of "key" terms that'll be present in the URL . In the above example I have 3 terms (keyB,keyD and keyK) ..But there could be any number of them.

I have a couple of questions now:
1) I think i should use "set" to define the key list initially. Is this the right way? or should I be doing something else?
2) If I have a predefined number of keyterms in the URL i can use the $1,$2 ...to match it. But if I don't know the number of terms that'll be present what should i be doing?

It'll be great if someone can help me with this.

Regards
Harish




--
Regards.

Nginx rewrite help: SEO/Permalink

Hi All,

I need to implement an SEO/permalink-related rewrite rule on nginx. I'm not sure how I would do it.
This is what I want:

I have a list which is something like this:

keyA : a1,a2
keyB : b1,b2
keyC : c1, c2
.
.
.
.
KeyZ : z1,z2

Now the access URL will be  like this: --->   http://example.com/results/myquery/keyB-keyD-keyK
This should translate to :--->  /results?q=myquery&keyvals=b1,b2,d1,d2,k1,k2
The point here to note is I will not know the number of "key" terms that'll be present in the URL . In the above example I have 3 terms (keyB,keyD and keyK) ..But there could be any number of them.

I have a couple of questions now:
1) I think I should use "set" to define the key list initially. Is this the right way, or should I be doing something else?
2) If I had a predefined number of key terms in the URL, I could use $1, $2, ... to match them. But if I don't know how many terms there will be, what should I be doing?

It'll be great if someone can help me with this.

Regards
Harish


Re: Lighttpd to Nginx: environment variable and include_shell

Hello!

On Sat, Jan 30, 2010 at 03:22:20PM +0700, Huy Phan wrote:

> Hi all,
> I'm working on migrating my site from Lighttpd to Nginx.
> Currently in my lighty configuration, I can get environment
> variables using "env.<variable_name>", and also read configuration
> from output of a command by "include_shell".
> I wonder if we can do the same things in Nginx or not ?

Not.

Maxim Dounin


Lighttpd to Nginx: environment variable and include_shell

Hi all,
I'm working on migrating my site from Lighttpd to Nginx.
Currently in my lighty configuration, I can get environment variables
using "env.<variable_name>", and also read configuration from output of
a command by "include_shell".
I wonder if we can do the same things in nginx or not?


Friday, January 29, 2010

Re: panic: MUTEX_LOCK (22) [op.c:352].

On Fri, Jan 29, 2010 at 11:56 AM, Pavel Pragin
<Pavel.Pragin@solutionset.com> wrote:
> I am getting these in the nginx error log. The app freezes up and needs to be
> restarted. Please help.
>
>
>
> Error:
>
> panic: MUTEX_LOCK (22) [op.c:352].
>
> panic: MUTEX_LOCK (22) [op.c:352].
>
>
>
> Info:
>
> [root@upload1:/home/ppragin/nginx-0.7.64] /usr/local/nginx/sbin/nginx -V
>
> nginx version: nginx/0.7.64
>
> built by gcc 4.1.2 20071124 (Red Hat 4.1.2-42)
>
> configure arguments: --without-http_upstream_ip_hash_module
>
>
>
> Pavel Pragin
> solutionset
> P: 650.328.3900   F: 650.328.3901  M: 408.806.8621   275 Alma Street, Palo
> Alto, CA 94301
> ppragin@solutionset.com
>
> This message is intended for the addressee(s) only and may contain
> confidential or privileged
> information. Any use of this information by persons other than addressee(s)
> is prohibited. If you
> have received this message in error, please reply to the sender and delete
> or destroy all copies.
>
>
>

As you were told in the channel:

$ pwd | tail -c 19 && find . | grep op\.c | wc -l
nginx/nginx-0.8.32
0

And a minute of searching tells me why:
http://forum.nginx.org/read.php?2,3811

Apparently it is a problem with perl being threaded. Either build
perl non-threaded or build nginx without perl.

-- Merlin


Re: fastcgi_cache_use_stale http_500 http_502 http_503 http_504

Sorry, I meant 50x as opposed to 500 specifically. Either way, there
are scenarios that can cause 50x errors when there is not an error in
your PHP.

On Sat, Jan 30, 2010 at 9:43 AM, Piotr Sikora <piotr.sikora@frickle.com> wrote:
>> Not necessarily.
>>
>> During traffic spikes mysql may become overloaded and when this
>> happens php-fpm can time out returning a 500 error.
>
> 500 and 503 are supported error codes, we were talking about 502 and 504.
>
> Best regards,
> Piotr Sikora < piotr.sikora@frickle.com >
>
>


Re: Switching backends based on a cookie

Laurence Rowe wrote:
> I would take a look at HAProxy which has better support for this use
> case, allowing for requests to be retried against another server if
> their associated backend is down.
>
I would agree that if you're just wanting to do proxying, then HAProxy
is probably a better way to go, however the above is also possible in
Nginx using upstreams.

Marcus.


Re: RE: How to solve the problem of "405 not allowed"?

Hmm, it's very bad to have to proxy. I manually applied the patch that was posted, and finally it works.
One file I needed to edit manually, but now Igor's patch is working.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,2414,47475#msg-47475



Re: fastcgi_cache_use_stale http_500 http_502 http_503 http_504

> Not necessarily.
>
> During traffic spikes mysql may become overloaded and when this
> happens php-fpm can time out returning a 500 error.

500 and 503 are supported error codes, we were talking about 502 and 504.

Best regards,
Piotr Sikora < piotr.sikora@frickle.com >



Re: fastcgi_cache_use_stale http_500 http_502 http_503 http_504

Thanks Merlin :)

On Sat, Jan 30, 2010 at 3:06 AM, merlin corey <merlincorey@dc949.org> wrote:
> On Thu, Jan 28, 2010 at 3:26 PM, Hone Watson <hone@codingstore.com> wrote:
>> I get an error with this:
>>
>> fastcgi_cache_use_stale error timeout invalid_header http_500 http_502
>> http_503 http_504;
>>
>> Are http_502 http_503 http_504 not yet available for fastcgi_cache_use_stale?
>>
>
> You can use "fastcgi_intercept_errors on" and then set an error_page for the 500s.
>
> -- Merlin
>


Re: fastcgi_cache_use_stale http_500 http_502 http_503 http_504

On Fri, Jan 29, 2010 at 6:48 PM, Piotr Sikora <piotr.sikora@frickle.com> wrote:
>> it be broken ;)
>
> it must be broken*

Not necessarily.

During traffic spikes mysql may become overloaded and when this
happens php-fpm can time out returning a 500 error.

>
> Best regards,
> Piotr Sikora < piotr.sikora@frickle.com >
>
>


Re: How to solve the problem of "405 not allowed"?

Hello!

On Fri, Jan 29, 2010 at 11:41:43AM -0500, kleinchris wrote:

> Can't edit my post...
> Here is a debug log, when i do it like this:
> error_page 405 =200 @405;
> location = @405 {
> root /var/www/vhosts/soulreafer;
> }

This will return the internal 405 error page, because:

1. you are serving the request with the static module again;

2. error_page to a named location doesn't change the request method.

Try this instead:

error_page 405 = $uri;

This way the request method will be changed to GET, and the same URI
will be used to serve it.
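Put together in a location block, the suggestion would look roughly like this (the root path is taken from the earlier snippet; treat the rest as a sketch):

```nginx
location / {
    root /var/www/vhosts/soulreafer;
    # A POST to a static file makes the static module return 405;
    # this converts the request to a GET for the same URI.
    error_page 405 = $uri;
}
```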

> http://nopaste.info/5cf44ab3b1.html

This log doesn't show any problems, and it's not even for a POST
request. Instead it shows a perfectly OK GET request (returning 304
Not Modified, as the request includes If-Modified-Since).

Maxim Dounin


Re: How to solve the problem of "405 not allowed"?

Hello!

On Fri, Jan 29, 2010 at 01:30:16PM -0600, Nick Pearson wrote:

> I know 'if' is evil, and in general shouldn't be used inside a
> location block, but I needed this ability as well and have been using
> the following without any trouble for a couple years.
>
> upstream app_servers {
> server localhost:3000;
> }
>
> server {
>
> # set proxy settings here (not allowed in 'if')
> proxy_set_header X-Real-IP $remote_addr;
>
> location / {
> if ($request_method = POST) {
> proxy_pass http://app_servers;
> break;
> }
> try_files $uri @app;
> }
>
> location @app {
> proxy_pass http://app_servers;
> }
>
> }
>
> If anyone has any better ideas, I'd love to hear them. So far, I
> haven't been able to find any without having to patch the source.

The above configuration will work, but expect problems once you
add another if. I personally suggest something like:

location / {
error_page 405 = @app;
try_files $uri @app;
}

location @app {
proxy_pass http://app_servers;
}

As the static module will return 405 for POST requests, this is
mostly identical to what you currently have (though it will also
pass to the app servers other methods unknown to the static module,
e.g. PUT).

> While we're on the topic, I know there's been talk of allowing POST
> requests to static files, but I don't remember a clear behavior being
> defined. When added to nginx, will this simply serve the static file
> as though a GET request was made? Ideally, one would be able to
> specify that POST requests should always be proxied to an upstream
> (which is what my config above does).
>
> Maybe something like this in the config:
>
> # handle just like a GET request
> allow_static_post on;
>
> # proxy to upstream
> allow_static_post proxy_pass http://app_servers;
>
> I don't use FCGI or PHP, so I'm not sure how the config would look for
> those, but you get the idea.

I see no problem using error_page to handle this.

Maxim Dounin


Re: ngx_xss: Native support for cross-site scripting in an nginx

2010/1/29 agentzh <agentzh@gmail.com>:
> On Fri, Jan 29, 2010 at 5:11 AM, Tobia Conforto
> <tobia.conforto@gmail.com> wrote:
>>
>> Am I the only one wondering what's the use of this module?
>
> The initial motivation of writing this module is to build a
> full-fledged blog app that is powered completely by nginx.conf and
> client-side JavaScript. I already have something runnable now. Here's
> the nginx.conf that I've got so far if you're interested:
>
>    http://agentzh.org/misc/nginx.conf

You can set "document.domain" in JS and then have a domain
blog.xyz.com, say static pages with header and footer, for example on
a CDN, and a domain api.xyz.com which does your actual magic.

Therefore the drizzle and rds_json modules (btw, see my issue on
GitHub) seem to me to be the main parts. xss would cover the case
where the blog's (second-level) domain differs from the one serving
the JSON responses.

Thanks for sharing!

--
Mark

[1] http://wiki.nginx.org/Nginx3rdPartyModules#RDS_JSON_Module
[2] http://wiki.nginx.org/Nginx3rdPartyModules#Drizzle_Module


Re: PHP-FPM and concurrency

Hey that was going to be my suggestion :)


Sent from my iPhone

On Jan 29, 2010, at 6:04 AM, Kiril Angov <kupokomapa@gmail.com> wrote:

> Right on! I switched to session in the database and no more
> problems. Thank you very much for your time!
>
> Regards,
> Kiril
>
> On Jan 28, 2010, at 4:02 PM, Patrick J. Walsh wrote:
>
>> If this is PHP and you are using sessions, I would guess that your
>> sessions are blocking. With sessions enabled, each PHP client has
>> a write lock on the sessions file and concurrent requests are
>> blocked to wait for the session to be available for an exclusive
>> lock. As soon as you are done making changes to a session, close
>> it for writing and other requests will be handled. See this page
>> for details:
>>
>> http://php.net/session_write_close
>>
>> ..Patrick
>>
>>
>>
>> On Jan 28, 2010, at 6:21 AM, Reinis Rozitis wrote:
>>
>>> Static files are most likely served instantly rather than keeping
>>> a connection hanging for a minute (to check something different
>>> than php you can try a perl script with just sleep(60); in it).
>>> You can also look if nginx gets the second request (if not then
>>> its still the browser problem and not webserver) just by checking
>>> the access and errorlog (in case there is some fastcgi backend
>>> timeout).
>>>
>>> Of course it might be a problem with php/fpm config. How many php
>>> childs do you spawn? Could it be possible that all childs are
>>> taken at the moment for processing your ~1min scripts?
>>>
>>> rr
>>>
>>>
>>> ----- Original Message ----- From: "Kiril Angov" <kupokomapa@gmail.com
>>> >
>>> To: <nginx@nginx.org>
>>> Sent: Thursday, January 28, 2010 3:00 PM
>>> Subject: Re: PHP-FPM and concurrency
>>>
>>>
>>>> Hello,
>>>>
>>>> thanks but for the reason of browser configuration, I checked to
>>>> see if I can open other resources from the same domain from the
>>>> same browser and it work for static files. Also, browser limits
>>>> would be per tab or at least 6 requests per second, not really 6
>>>> concurrent connections.
>>>>
>>>> Any other suggestions?
>>>>
>>>> On Jan 28, 2010, at 12:08 PM, Reinis Rozitis wrote:
>>>>
>>>>> It is probably more related to how many connections at max a
>>>>> single browser instance keeps open to a single hostname.
>>>>>
>>>>> For Firefox for example usually the default value is only 2.
>>>>> ( can search google for network.http.max-connections-per-server )
>>>>> IE has 6 at least (but seems you are not using that).
>>>>>
>>>>> Increase those and see if it helps.
>>>>>
>>>>> rr
>>>
>>>
>>


panic: MUTEX_LOCK (22) [op.c:352].

I am getting these in the nginx error log. The app freezes up and needs to be restarted. Please help.

 

Error:

panic: MUTEX_LOCK (22) [op.c:352].

panic: MUTEX_LOCK (22) [op.c:352].

 

Info:

[root@upload1:/home/ppragin/nginx-0.7.64] /usr/local/nginx/sbin/nginx -V

nginx version: nginx/0.7.64

built by gcc 4.1.2 20071124 (Red Hat 4.1.2-42)

configure arguments: --without-http_upstream_ip_hash_module

 

Pavel Pragin
solutionset
P: 650.328.3900   F: 650.328.3901  M: 408.806.8621   275 Alma Street, Palo Alto, CA 94301
ppragin@solutionset.com


 

Re: ngx_xss: Native support for cross-site scripting in an nginx

agentzh wrote:
> The initial motivation of writing this module is to build a full-fledged blog app that is powered completely by nginx.conf and client-side JavaScript. I already have something runnable now.

Wow!
This sounds very cool.

>> Can't you do that
>> on the client side, if the response is to be parsed by some client-side javascript?
>
> This is the classic cross-site GET trick for JavaScript programmers.

I guess this is the part I'm not clear about... I usually just fetch stuff with jQuery and then process it on the client-side as I see fit. Also, looking up xss on Google only gives results about browser vulnerabilities.

Tobia

Re: How to solve the problem of "405 not allowed"?

I know 'if' is evil, and in general shouldn't be used inside a
location block, but I needed this ability as well and have been using
the following without any trouble for a couple years.

upstream app_servers {
server localhost:3000;
}

server {

# set proxy settings here (not allowed in 'if')
proxy_set_header X-Real-IP $remote_addr;

location / {
if ($request_method = POST) {
proxy_pass http://app_servers;
break;
}
try_files $uri @app;
}

location @app {
proxy_pass http://app_servers;
}

}

If anyone has any better ideas, I'd love to hear them. So far, I
haven't been able to find any without having to patch the source.

While we're on the topic, I know there's been talk of allowing POST
requests to static files, but I don't remember a clear behavior being
defined. When added to nginx, will this simply serve the static file
as though a GET request was made? Ideally, one would be able to
specify that POST requests should always be proxied to an upstream
(which is what my config above does).

Maybe something like this in the config:

# handle just like a GET request
allow_static_post on;

# proxy to upstream
allow_static_post proxy_pass http://app_servers;

I don't use FCGI or PHP, so I'm not sure how the config would look for
those, but you get the idea.

Nick

On Fri, Jan 29, 2010 at 10:41 AM, kleinchris <nginx-forum@nginx.us> wrote:
> Can't edit my post...
> Here is a debug log, when i do it like this:
> error_page 405 =200 @405;
>        location = @405 {
>                root /var/www/vhosts/soulreafer;
>        }
>
> http://nopaste.info/5cf44ab3b1.html
>
> nginx version: 0.8.32
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,2414,47301#msg-47301
>
>


Re: Switching backends based on a cookie

I would take a look at HAProxy which has better support for this use
case, allowing for requests to be retried against another server if
their associated backend is down.

Laurence

2010/1/29 Marcus Clyne <ngx.eugaia@gmail.com>:
> Hi,
>
> saltyflorida wrote:
>>
>> Is it possible to switch backend clusters of servers based on a cookie?
>>
>> I would like to set a cookie named "env" and do something like this:
>>
>>        if ($http_cookie ~* "env=testing(;|$)") {
>>            proxy_pass http://backend_testing;
>>        }
>>        if ($http_cookie ~* "env=staging(;|$)") {
>>            proxy_pass http://backend_staging;
>>        }
>>        if ($http_cookie ~* "env=production(;|$)") {
>>            proxy_pass http://backend_production;
>>        }
>>
>> However the "proxy_pass" directive is not allowed inside an "if". Is there
>> another way I can approach this?
>>
>>
>
> Take a look at the map module :
>
> http://wiki.nginx.org/NginxHttpMapModule
>
> One possibility would be :
>
> http {
>
> map  $cookie_env  $backend {
>
>   testing      http://backend_testing;
>   staging      http://backend_staging;
>   production   http://backend_production;
> }
>
> server {
>   ...
>   proxy_pass   $backend;
>
> }
>
> }
>
> Marcus.
>
>


Re: Is this even possible? grab metadata from mp3 from the beginningof an upload!

> As the ID3 tag of an mp3 file is at the end

ID3v1 is located at the end of the file; ID3v2 is located at the beginning
of the file and has been in use for over a decade now, so Saimon shouldn't
have problems with that. Just keep in mind that neither tag is required.

Best regards,
Piotr Sikora < piotr.sikora@frickle.com >



Re: php-cgi constantly recycles every couple of minutes

On Fri, Jan 29, 2010 at 9:04 AM, mindfrost82 <nginx-forum@nginx.us> wrote:
> I'm running Red Hat Enterprise using PHP 5.2.10 that was compiled using EasyApache (before the switch to nginx).
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,45516,47310#msg-47310
>
>

You could try updating PHP and whatever you're spawning it with.

-- Merlin


Re: Switching backends based on a cookie

On Thu, Jan 28, 2010 at 4:49 PM, Marcus Clyne <ngx.eugaia@gmail.com> wrote:
> Hi,
>
> merlin corey wrote:
>>
>> Doesn't it make more sense to have production, static, and dev as
>> separate server blocks entirely with their own hostnames?  This is, at
>> the least, traditional :).
>>
>
> Yes, I would agree with this (and it should perform a little better too).
>
> Marcus.
>

And the configuration will be simpler and easier to understand six
months from now :O without any ifs or rewrites ;). Also, the servers
can be moved to separate hardware (another tradition)!

> We are serving many domains with one server cluster and wanted to be able to test using the production domain names.

Use the power of NginX at your disposal! *TELL* Wordpress MU what the
domain name is. ;)

fastcgi_param SERVER_NAME myawesomeproductiondomain.com;

-- Merlin


Re: php-cgi constantly recycles every couple of minutes

I'm running Red Hat Enterprise using PHP 5.2.10 that was compiled using EasyApache (before the switch to nginx).

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,45516,47310#msg-47310



Re: php-cgi constantly recycles every couple of minutes

Right now you have to compile PHP separately for PHP-FPM. Which distro are you using? It has finally been merged into core and may be released with PHP 5.3.2 or possibly 5.3.3. You can read more at http://php-fpm.org/download/

On Jan 29, 2010, at 8:38 AM, mindfrost82 wrote:

> So I have tried a few more different things and php-cgi is still recycling way too often in my opinion.
>
> I tried the earlier suggestion of using 16 children (instead of 4) with 4096 max requests. It does appear that the max requests is getting ignored completely, as Rob said. With 16 children, the site is MUCH faster, but php-cgi is getting recycled about every minute still. The reason I think its getting ignored is because I can set it to 1 million and it will still recycle just as often....unless there's something else causing this.
>
> I also tried changing from TCP to a socket with the same result. Which do you guys suggest anyway?
>
> Whenever php-cgi recycles itself, while using TCP, I'll get these in the nginx logs:
> recv() failed (104: Connection reset by peer) while reading response header from upstream
> connect() failed (111: Connection refused) while connecting to upstream
>
> I might just have to switch to php-fpm, but if there's another underlying cause here, I'm not sure if it'll matter. Do you still have to recompile PHP to get php-fpm to work, or is there an easier way?
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,45516,47247#msg-47247
>
>



Re: nginx performance test

Is there a page about performance optimizations in the wiki? If not, I think
it would be useful to create one so this and other performance-related
information can be collected there.

Regards,
Dennis

On 01/29/2010 04:44 PM, 任晓磊 wrote:
> Good!
> One thing we must do is tuning. Your experience on temp files is useful for me.
>
> 2010/1/29 yong xue<ultraice@gmail.com>:
>> hi,
>>
>>
>> last week, I did a nginx performance test.
>> Yes, with no surprise, nginx is perfect.
>> First, I proxyed 15 web hosts after nginx, the cpu utilization and disk
>> IO were a little high, this was cause by the file download and access log,
>> after I closed the access log, and changed the download to
>> synchronization by set proxy_max_temp_file_size to zero, nginx run with
>> little CPU consumption.
>> So I turned more web hosts after nginx, the disk IO became some high
>> again, this was caused by uploading, so I changed the client_body_tmp_path
>> to a tmpfs,
>> and disk IO disappear, and the bottleneck was the memory capacity.
>> Finally with 50+ web host proxyed, the nginx host's CPU utilization is
>> about 30%, and the client_body_tmp_path occupied 4-6G, the peak throughput
>> of each network
>> adater was 400-450M.
>> It is a good result. Thanks you, sysoev.
>>
>>
>> --
>> Best Regards,
>>
>> peterxue
>>
>> QQ:312200
>>
>> e-mail:ultraice@gmail.com
>>
>>
>>
>
>
>
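The tuning steps described in the quoted test can be sketched as a config fragment (the tmpfs path is an assumption; the directives themselves are the ones named in the message):

```nginx
http {
    access_log  off;                    # logging was a disk-IO hot spot
    proxy_max_temp_file_size  0;        # don't buffer proxied downloads to disk
    client_body_temp_path  /dev/shm/nginx_body;  # assumed tmpfs mount for uploads
}
```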



proxy_cache ramdisk

So I was thinking of creating a ramdisk and then pointing proxy_cache at the ramdisk. Do you think that would be a good combo?

 

If so, to the people that use proxy_cache, how much space is it using on average so I can make it the right size?
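One way to sketch the ramdisk idea, assuming a tmpfs mount at /var/cache/nginx-ram sized to your working set (all paths and sizes here are assumptions):

```nginx
# e.g. mount -t tmpfs -o size=2g tmpfs /var/cache/nginx-ram   (assumed mount)
proxy_cache_path  /var/cache/nginx-ram  levels=1:2
                  keys_zone=ramcache:50m  max_size=2g  inactive=1h;
```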

 

Re: How to solve the problem of "405 not allowed"?

Can't edit my post...
Here is a debug log, when i do it like this:
error_page 405 =200 @405;
location = @405 {
root /var/www/vhosts/soulreafer;
}

http://nopaste.info/5cf44ab3b1.html

nginx version: 0.8.32

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,2414,47301#msg-47301



Re: RE: How to solve the problem of "405 not allowed"?

Is there a fix so that I can POST to static files in nginx 0.8.32? I need this.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,2414,47294#msg-47294



Re: Switching backends based on a cookie

saltyflorida wrote:
> saltyflorida Wrote:
> -------------------------------------------------------
>
>> Eugaia Wrote:
>> -------------------------------------------------------
>>
>>> saltyflorida wrote:
>>>> I forgot to mention that I am using caching with
>>>> the HTTP Proxy module and that I only want to
>>>> cache responses from the production servers. When
>>>> I have the cookie set to "testing" or "staging",
>>>> I'd like to bypass the cache and talk directly to
>>>> the backend. Does this sound feasible?
>>>
>>> Sure. Do a rewrite using your $backend variable
>>> under the 'location /' block to one of three other
>>> blocks, which have the different definitions of
>>> your proxy_pass, proxy_cache_valid...
>>>
>>> e.g.
>>>
>>> map $cookie_ $backend {
>>>
>>> default production;
>>> test test;
>>> ...
>>> }
>>>
>>> location / {
>>> rewrite ^(.*)$ /$backend/$1;
>>> }
>>>
>>> location /production/ {
>>> proxy_pass
>>> http://backend_production;
>>> proxy_cache_valid ...
>>> }
>>>
>>> location /test/ {
>>> proxy_pass
>>> # no proxy_cache_valid
>>> ...
>>> }
>>>
>>> Note, you'll need some way to catch the case of no
>>> cookie variable, so it's unwise to put $cookie_
>>> directly in the rewrite result (you'll get an
>>> infinite loop on such results).
>>>
>>> Marcus.
>>>
>> Marcus,
>> Thank you for your help. I had wondered if I could use a
>> rewrite, but I don't understand how this works. I tried to
>> implement your suggestion, but I am being redirected to
>> /testing/ or /production/. These show up as part of the URL
>> in the browser. Also, trying to visit pages other than the
>> root returns a 404 error. Here is my configuration. Can you
>> point out what I'm doing wrong?
>>
>> http {
>> upstream backend_testing {
>> ip_hash;
>> server ...
>> }
>> upstream backend_staging {
>> ip_hash;
>> server ...
>> }
>> upstream backend_production {
>> ip_hash;
>> server ...
>> }
>> proxy_cache_path /mnt/nginx_cache levels=1:2
>> keys_zone=one:100m
>> inactive=7d max_size=10g;
>> proxy_temp_path /var/www/nginx_temp;
>>
>> map $cookie_uslnn_env $backend {
>> default http://backend_production;
>> testing http://backend_testing;
>> staging http://backend_staging;
>> production http://backend_production;
>> }
>>
>> server {
>> location / {
>> rewrite ^(.*)$ /$backend/$1;
>> }
>> location /testing/ {
>> proxy_pass http://backend_testing;
>> }
>> location /staging/ {
>> proxy_pass http://backend_staging;
>> }
>> location /production/ {
>> proxy_pass http://backend_production;
>> proxy_cache one;
>> proxy_cache_key $my_cache_key;
>> proxy_cache_valid 200 302 304 10m;
>> proxy_cache_valid 301 1h;
>> proxy_cache_valid any 1m;
>> proxy_cache_use_stale updating error
>> timeout invalid_header http_500 http_502 http_503
>> http_504;
>> }
>> location /wp-admin {
>> proxy_pass http://backend_production;
>> proxy_read_timeout 300;
>> }
>> }
>> }
>>
>> Thanks,
>> Eliot
>>
>
> Correction:
> The configuration I tried looks like this:
>
> http {
> upstream backend_testing {
> ip_hash;
> server ...
> }
> upstream backend_staging {
> ip_hash;
> server ...
> }
> upstream backend_production {
> ip_hash;
> server ...
> }
> proxy_cache_path /mnt/nginx_cache levels=1:2
> keys_zone=one:100m
> inactive=7d max_size=10g;
> proxy_temp_path /var/www/nginx_temp;
>
> map $cookie_uslnn_env $backend {
> default production;
> production production;
> testing testing;
> staging staging;
> }
>
> server {
> location / {
> rewrite ^(.*)$ /$backend/$1;
> }
> location /testing/ {
> proxy_pass http://backend_testing;
> }
> location /staging/ {
> proxy_pass http://backend_staging;
> }
> location /production/ {
> proxy_pass http://backend_production;
> proxy_cache one;
> proxy_cache_key $my_cache_key;
> proxy_cache_valid 200 302 304 10m;
> proxy_cache_valid 301 1h;
> proxy_cache_valid any 1m;
> proxy_cache_use_stale updating error timeout invalid_header http_500 http_502 http_503 http_504;
> }
> location /wp-admin {
> proxy_pass http://backend_production;
> proxy_read_timeout 300;
> }
> }
> }
>
Sorry, my fault. That should have read 'proxy_pass
http://backend_production/;'. The final slash 'deletes' the first part
of the location that's passed.

Note that you will want to add the slash for the /production/,
/testing/... blocks, but not for the /wp-admin block.
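The trailing-slash behaviour in a stripped-down form (upstream names as in the thread):

```nginx
location /production/ {
    # URI part ("/") on proxy_pass: the matched location prefix is
    # replaced, so /production/foo is requested upstream as /foo.
    proxy_pass http://backend_production/;
}

location /wp-admin {
    # No URI part: the request URI is passed upstream unchanged.
    proxy_pass http://backend_production;
}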

Marcus.

