httpd's People

Contributors

hrkfdn, jaminh, lpereira, reyk, semarie

httpd's Issues

Downloading large files hogs the memory

Hey,

I have a VPS with 1 GB of RAM and 2 GB of swap, running OpenBSD 5.7-stable and httpd compiled from -CURRENT.

When I download a 1.8 GB file, httpd slowly takes up all the main memory and then starts eating into swap, at which point the transfer begins to stall. Eventually the system becomes unresponsive to the point where only a reboot helps. This is reproducible.

I'm assuming httpd streams the file in some way, but maybe the buffering is too aggressive?

-- Henrik

Choice of name

I'm curious about the choice of name for httpd.

From the perspective of searching for documentation about this application, it would have been better if it had a very distinctive name. I already get confused and lost when searching Google, because there appear to be a number of web servers with the same or similar names.

If it were named "puffer" or something similar, it would be much easier to filter out which web server is actually the topic in question.

Prioritize ciphers

In some situations it is more secure, or faster, to let the server choose the cipher rather than simply picking the first supported cipher from the list provided by the client. I suggest adding an option like nginx's ssl_prefer_server_ciphers / Apache's SSLHonorCipherOrder, or perhaps even making it the default behaviour.

This unfortunately requires a change to libtls to allow it to set the SSL_OP_CIPHER_SERVER_PREFERENCE option:
SSL_CTX_set_options(ctx->ssl_ctx, SSL_OP_CIPHER_SERVER_PREFERENCE);

I simply added the line above to lib/libtls/tls_server.c:tls_configure_server() to make it the default on my test system.

TLS clause reordering bug

This configuration works:

# /etc/httpd.conf

domain="yegortimoschenko.com"
ext_addr="egress"

server $domain {
        listen on $ext_addr port 80
        block return 301 "https://$domain$REQUEST_URI"
}

server $domain {
        listen on $ext_addr port 8080
        listen on $ext_addr tls port 443

        location "/downloads/*" {
                directory auto index
        }
}

$ sudo httpd -d
startup

If I swap the listen on $ext_addr port 8080 and listen on $ext_addr tls port 443 clauses in the second server block, it no longer starts. It segfaults instead:

# /etc/httpd.conf

domain="yegortimoschenko.com"
ext_addr="egress"

server $domain {
        listen on $ext_addr port 80
        block return 301 "https://$domain$REQUEST_URI"
}

server $domain {
        listen on $ext_addr tls port 443
        listen on $ext_addr port 8080

        location "/downloads/*" {
                directory auto index
        }
}

$ sudo httpd -d
startup
logger exiting, pid 10917
Segmentation fault (core dumped)
$ server exiting, pid 13664
server exiting, pid 12732
server exiting, pid 15152

The version I'm using is bundled with OpenBSD 5.7 release.

POST requests not working in FastCGI - with potential fix

When a POST request is sent to httpd, none of the Content-* headers are passed through FastCGI, and I'm not sure the content itself is sent either. This effectively rules out form submissions, which are a very important part of any web application.

log options in httpd.conf

From [email protected]:

Struggling with the behavior of the log options in httpd.conf on 5.6-stable.

I'm trying to get different virtual domains to log to their own files, but no
matter what option I've tried after reading the man page, I get odd results.

Using the configuration below, ALL access gets logged to the default access.log,
even requests for the other servers listed. The per-domain domain-access.log
files contain only errors, and nothing appears in the domain-error.log files.

Can anyone look at the config below and help me understand why it might be
logging that way and how to fix it?

Cheers,

-Clint

Example config on misc@
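
The config posted on misc@ is not reproduced here. For reference, a minimal sketch of per-server logging of the kind being attempted, using hypothetical server names and log file names (log paths are relative to the logs directory inside the chroot):

    server "one.example.com" {
            listen on egress port 80
            root "/htdocs/one"
            log access "one-access.log"
            log error "one-error.log"
    }

    server "two.example.com" {
            listen on egress port 80
            root "/htdocs/two"
            log access "two-access.log"
            log error "two-error.log"
    }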

charsets

Support a configurable charset for the directory auto index and for served files.

slowcgi(8) doesn't clean up /run/slowcgi.sock (5.7-current MP#899)

Not sure if this is the right repo to open this in, but in the latest 5.7-current, MP#899, it seems that slowcgi is not cleaning up the socket file after the process is stopped.

root@j5:/var/www/run # /etc/rc.d/slowcgi start
slowcgi(ok)
root@j5:/var/www/run # ls -la
total 8
drwxr-xr-x   2 root  daemon  512 Apr  1 14:37 .
drwxr-xr-x  13 root  daemon  512 Mar 31 04:40 ..
srw-rw-rw-   1 root  daemon    0 Apr  1 14:23 bgpd.rsock
srw-rw----   1 www   www       0 Apr  1 14:36 php-fpm.sock
srw-rw----   1 www   www       0 Apr  1 14:37 slowcgi.sock
root@j5:/var/www/run # /etc/rc.d/slowcgi stop
slowcgi(ok)
root@j5:/var/www/run # ls -al
total 8
drwxr-xr-x   2 root  daemon  512 Apr  1 14:37 .
drwxr-xr-x  13 root  daemon  512 Mar 31 04:40 ..
srw-rw-rw-   1 root  daemon    0 Apr  1 14:23 bgpd.rsock
srw-rw----   1 www   www       0 Apr  1 14:36 php-fpm.sock
srw-rw----   1 www   www       0 Apr  1 14:37 slowcgi.sock

I found this while trying to figure out why calling /cgi-bin/bgplg wasn't working. I am getting the following error when slowcgi is running.

default <ip> - - [01/Apr/2015:14:41:41 +0200] "GET /cgi-bin/bgplg HTTP/1.1" 500 0
server default, client 1 (1 active), <ip>:50829 -> <ip>, empty stdout (500 Internal Server Error)

Using the following httpd.conf:

server "default" {
        listen on egress port 80
        location "/cgi-bin/*" {
                fastcgi
                root "/"
        }
        location "*.php" {
                fastcgi socket "/run/php-fpm.sock"
        }
        root "/htdocs/www"
        directory { no auto index }
}

rc.conf.local

bgpd_flags=""
httpd_flags=""
slowcgi_flags=""
pkg_scripts="php_fpm"

Let me know if more information or tests are needed.

http auth

We don't have a satisfying implementation for authentication yet, but it is needed and will be done, starting with basic auth.

SCGI

From http://fossil-scm.org/xfer/doc/trunk/www/server.wiki:


There are basically four ways to set up a Fossil server:

  • A stand-alone server
  • Using inetd or xinetd or stunnel
  • CGI
  • SCGI (a.k.a. SimpleCGI)

Each of these can serve either a single repository, or a directory hierarchy containing many repositories with names ending in ".fossil".

So should httpd support SCGI?

server aliases

Server aliases and a few grammar restrictions: individual server blocks can
currently have only one name and one listen statement. This will be fixed in
the parser later. To avoid repeating too much configuration, I currently use
includes:

    server "www.example.com" {
            listen on $ip4_addr port 80
            include "/etc/httpd/example.com.inc"
    }
    server "www.example.com" {
            listen on $ip6_addr port 80
            include "/etc/httpd/example.com.inc"
    }
    server "www.example.com" {
            listen on $ip4_addr tls port 443
            include "/etc/httpd/example.com.ssl"
            include "/etc/httpd/example.com.inc"
    }
    server "www.example.com" {
            listen on $ip6_addr tls port 443
            include "/etc/httpd/example.com.ssl"
            include "/etc/httpd/example.com.inc"
    }
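
For comparison, a sketch of what a relaxed grammar with aliases and multiple listen statements might look like; the alias keyword and the multiple listen lines per block are hypothetical here and not supported syntax at the time of this issue:

    server "example.com" {
            alias "www.example.com"
            listen on $ip4_addr port 80
            listen on $ip6_addr port 80
            listen on $ip4_addr tls port 443
            listen on $ip6_addr tls port 443
            include "/etc/httpd/example.com.ssl"
            include "/etc/httpd/example.com.inc"
    }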

missing config variables

This is a feature request (httpd is much better than nginx, so far), but a few important things are missing, such as config variables (see http://nginx.org/en/docs/varindex.html). I have a few very complicated configs in nginx and want to move them to the new httpd; that would make my life easier, and I think not only mine.
The default fastcgi_params are also missing.

root strip

Finish httpd URI stripping by Christopher Zimmermann.

    location "/download/*" {
            root { strip 1, "/htdocs/pub" }
    }
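
With a rule like this, a request for /download/file.tgz would presumably be served from /htdocs/pub/file.tgz inside the chroot: the first component of the request path is stripped before the remainder is appended to the configured root.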

Issues with example ownCloud config

Hi,
I recently wrote a blog post about configuring a -CURRENT OpenBSD system with an ownCloud instance. Your wiki page was a great resource for it but two issues were uncovered while testing the configuration.

The first issue, spotted by Bryan Steele on Twitter, is an outdated workaround involving /forbidden, which can now be dropped in favour of the newly introduced 'block' rule.

The second issue is a bit more serious. The original nginx rule for blocking was:

    location ~ ^/(data|config|\.ht|db_structure\.xml|README) {
            deny all;
    }

This rule can be translated as: anything starting with

  • data
  • config
  • .ht
  • db_structure.xml
  • README

should be denied access with a 403.

The corresponding example config blocks for 'data' and 'config' are incorrect:

    location "*/data" {
            root "/forbidden"
    }
    location "*/config" {
            root "/forbidden"
    }

They still allow direct access by name to files in the data or config folders, for example:

http://localhost/data/owncloud.db

which results in a file download. This issue is also detected by ownCloud's self-test on the admin page, but only when running on port 80 (plain HTTP).

The issue can be remedied by altering the data and config rules to:

  • "/data"
  • "/config"

which correctly blocks downloads from both folders.
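
A sketch of how the corrected rules might look on -CURRENT, combining the adjusted patterns with the newly introduced block rule mentioned above (the exact patterns from the blog post are not reproduced here):

    location "/data*" {
            block
    }
    location "/config*" {
            block
    }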

Support for multiple "index" files

Hello,

would it be possible to extend the functionality of "directory index" to allow specifying multiple index file names? It would be very useful when serving both dynamic and static pages from one server.

Thank you
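
As a sketch, the requested grammar might look something like the following; this multiple-argument form of directory index is hypothetical and not supported at the time of this issue:

    server "example.com" {
            listen on * port 80
            directory index "index.php" "index.html"
            root "/htdocs/example.com"
    }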

block/deny

We cannot deny access to specific locations but the current workaround
is to set a non-accessible root:

    location "*/.*" {
            # mkdir -m 0 /var/www/forbidden
            root "/forbidden"
    }

MOVE method not allowed

I am getting a 405 Method not allowed error when trying to use the WebDAV MOVE method. It appears it was not included in the switch statement handling the HTTP methods in server_http.c (line 379).

httpd + php_fpm configuration

Hi there,

Would you mind providing some quick help to get php_fpm + httpd working here?

This is my /etc/httpd.conf:

server "example.org" {
        listen on * port 80
        directory { no index, index app.php }
        location "*.php" {
                fastcgi socket "/var/www/run/php-fpm.sock"
                log style combined
        }
        root "/var/www/htdocs/example.org/web"
}
types {
        include "/usr/share/misc/mime.types"
}

Checking the configuration:

$ sudo httpd -n       
configuration OK

Checking php_fpm status:

$ sudo /etc/rc.d/php_fpm check  
php_fpm(ok)

When I hit example.org I'm getting:
404 Not Found

When I hit example.org/app.php I'm getting:
500 Internal Server Error
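
One likely culprit, not confirmed in this report: in httpd.conf the fastcgi socket and root paths are interpreted inside httpd's /var/www chroot, so "/var/www/run/php-fpm.sock" and "/var/www/htdocs/..." point to the wrong places, and "no index, index app.php" is contradictory. A sketch of an adjusted configuration under those assumptions:

server "example.org" {
        listen on * port 80
        directory index "app.php"
        location "*.php" {
                fastcgi socket "/run/php-fpm.sock"
                log style combined
        }
        root "/htdocs/example.org/web"
}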

httpd/slowcgi FastCGI parameter mismatch

It seems to me that httpd sets the FastCGI variables in a way that slowcgi does not expect. Specifically, httpd will set SCRIPT_NAME and SCRIPT_FILENAME to the document root (typically a directory), whilst slowcgi will take these variables to indicate the CGI script to run. Now, as a hack, it is possible to point root at a specific CGI script in the httpd configuration file, but if you want to run e.g. all .sh files with slowcgi, you are in trouble. For example, this configuration should work:

server "default" {
  listen on eggsml.dk port 4242
  root "/htdocs/eggsml.dk"

  location "*.sh" {
    fastcgi socket "/run/eggsmlcgi.sock"
    root "/cgiroot"
  }
}

With httpd running in a chroot in /var/www and a request for eggsml.dk:4242/index.sh, this will set both SCRIPT_NAME and SCRIPT_FILENAME to /var/www/htdocs/eggsml.dk, which slowcgi will try to execute (and fail). Note that slowcgi is not running in a chroot (or at least not in /var/www); it is in essence used to punch a hole in the httpd chroot. If I do not have the root stanza in the location block, SCRIPT_NAME and SCRIPT_FILENAME are set more sensibly to /htdocs/eggsml.dk/index.sh, that is, they include the requested URL.

Support disabling client-side SSL renegotiation

Letting clients initiate SSL renegotiation is a known vector for CPU-based DoS attacks.

relayd has configuration syntax and code to remediate this problem by allowing administrators to disable client-initiated renegotiation entirely (see the patch against relayd at http://article.gmane.org/gmane.os.openbsd.tech/37341):

- ssl [no] client-renegotiation
  -> allows the interception of ("secure") client initiated
     renegotiations, which are considered a risk in DDoS scenarios
     because many CPU cycles can be burned this way on a single TCP
     connection without an obvious way for the administrator to
     immediately know what's happening.

httpd should ideally support the same feature to mitigate this type of attack. Perhaps renegotiation should be off by default, as applications using client-side certificates for authentication are a minority.

regex

Could you support regex for matching and rewriting?

HTTPS multiple listen directive problem with FastCGI

When a server is configured with two listen options, one for port 80 and the other for tls port 443, there is a problem with the HTTPS=on flag that is passed through FastCGI.

Using the following configuration:

listen on $ext_ip tls port 443
listen on $ext_ip port 80

The HTTPS=on flag is sent through FastCGI regardless of whether you connect to the server via HTTP or HTTPS.

Using the following configuration (reordered from above):

listen on $ext_ip port 80
listen on $ext_ip tls port 443

The HTTPS=on flag is never sent through FastCGI, even if you connect using HTTPS.

I suspect the HTTPS=on flag is being set based on what is first in the configuration instead of the connection type.

Rewrites

Have rewrites been considered? Is it something that devs would consider "featuritis"?

return

Redirects, return 301, etc.: this can be done without regex by using a few built-in variables. The current workaround is to either do it in the FastCGI backend or with, ahem, an HTML refresh. By the way, nginx's "return 444;" is such an ugly workaround...
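
For the simple redirect case, a sketch of how this can be expressed once block return and the built-in variables are available, in the same syntax already used in other issues in this list:

server "example.com" {
        listen on egress port 80
        block return 301 "https://example.com$REQUEST_URI"
}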

httpd directory listing not properly encoded.

From [email protected]:

I think that the directory listing generated by httpd doesn't properly encode
strings that are taken from C-variables. (function "server_file_index" in
server_file.c)

e.g., filenames with spaces, or odd characters, produce non-functioning links.

I used the following directory structure to test:

    mkdir test
    echo "TEST" > test/test.txt
    echo "Test" > test/.hidden.txt
    echo "test ONCE more" > test/"test once more".txt
    mkdir test/"<b> \\test\"&amp;\"&"
    echo "&amp; test" > test/"<b> \\test\"&amp;\"&"/"&lt;b&gt;".txt

What I see:

  • the file test/"test once more".txt cannot be displayed because it has spaces
    in its name (this doesn't appear to be a problem on CURRENT, though).
  • the directory test/"<b> \test"&amp;"&" cannot be displayed either, and it
    will also cause subsequent characters to be printed in bold because of the
    "<b>" character sequence.
  • the filename part of test/"<b> \test"&amp;"&"/"&lt;b&gt;".txt is possibly
    rendered as just .txt.

AFAICT the C-variables need to be "HTML encoded", or both "URI percent encoded"
and "HTML encoded" in the case of URIs in hrefs.

I spotted the url_decode function, but I don't immediately see any encoding
functions, so I'm wondering: do encoding functions exist?

Just in case it's of interest, I ripped a "URI percent encode" and an "HTML
encode" function from a hobby project of mine and used them to encode the
C-variables. This appears to solve the issues described. Though I don't know
for sure that my code is anywhere near perfect, I can produce a diff if that's
appreciated.

FAQ

The web server needs some more FAQ-style documentation in addition to
our excellent man pages and examples. Examples for each CMS would go
beyond the scope of those, and probably don't fit into the OpenBSD FAQ.
So I'm thinking about putting something on http://bsd.plumbing/. Or GitHub?

http/2 support planned?

Is support for HTTP/2 planned for httpd? Browser adoption seems to be rather quick, so there could be a fast transition that leaves HTTP/1.1 as legacy soon.

Unneeded escaping in $REQUEST_URI variable

I want to serve my site from https://example.com and redirect other requests to this location. I made the following httpd.conf:

server "default" {
        listen on 1.2.3.4 port 80
        block return 301 "https://vbezhenar.com$REQUEST_URI"
}

server "vbezhenar.com" {
        listen on 1.2.3.4 tls port 443
        tls certificate "/etc/ssl/example.com.crt"
        tls key "/etc/ssl/private/example.com.key"
}

Now I issue a request to the URL http://example.com/path?a=b&c=d:

GET /path?a=b&c=d HTTP/1.1
Host: example.com

and receive a redirect to https://example.com/path%3Fa=b%26c=d:

HTTP/1.0 301 Moved Permanently
Date: Sat, 26 Sep 2015 15:12:44 GMT
Server: OpenBSD httpd
Connection: close
Content-Type: text/html
Content-Length: 374
Location: https://example.com/path%3Fa=b%26c=d

This is a wrong redirect: it URI-encodes "?" and "&", corrupting the initial request.

There should be some way to preserve request URI.

authentication with client-certificates

  1. Is there support for client-side certificates on a per-location basis?
     This would be a good alternative for improving the security of access to
     administrative parts of a website, rather than relying solely on password
     authentication.

httpd ENOTDIR

On Tue, Jul 15, 2014 at 07:46:08PM +0200, Stefan Sperling wrote:

If I append path components to a file that exists in the htdocs dir
(e.g. http://server.example.com/index.html/moo/moo) then I see
a 500 error. I would expect 404.

The error page itself works just fine in either case :)

Fix:

Index: server_file.c
===================================================================
RCS file: /cvs/src/usr.sbin/httpd/server_file.c,v 
retrieving revision 1.3
diff -u -p -r1.3 server_file.c
--- server_file.c     13 Jul 2014 15:07:50 -0000      1.3
+++ server_file.c     15 Jul 2014 17:43:22 -0000
@@ -78,6 +78,7 @@ server_response(struct httpd *env, struc
             case EACCES:
                     server_abort_http(clt, 403, path);
                     break;    
+            case ENOTDIR:
             case ENOENT:
                     server_abort_http(clt, 404, path);
                     break;

php_admin_value

Is it feasible to set php_admin_value directives per server in httpd.conf?

httpd 5.6-stable crashed when scanned with nmap

From email:

I had httpd go down when an nmap scan came in recently, on a snapshot from
September 10th. This was reproducible with zenmap on OpenBSD set to the default
"Intense Scan" against 127.0.0.1, which I conveniently also had on my laptop
for testing.

500 httpd error with owncloud

From misc@

Hello everyone,

I installed the owncloud server from ports, and tried to get it running with
the new httpd. Unfortunately, I get a "500 Internal Server Error" once I log
in. However, the login page is shown perfectly fine.

Here is the server log, when I run the server in debug/verbose mode without
daemonizing:

    default 192.168.178.18 - - [28/Dec/2014:10:29:52 +0100] "GET /owncloud/index.php/apps/files/ HTTP/1.1" 500 0
    server default, client 5 (1 active), 192.168.178.18:49545 -> 192.168.178.49, /owncloud/index.php/apps/files/ (500 Internal Server Error)

IMHO, neither the server error log nor the ownCloud log provides any evidence
to locate the error. Since I am not a developer, I would appreciate any help
you could give me to solve this error.

The ownCloud installation is standard, using /owncloud-data as the data directory
and sqlite3 as the database. I also tried installing ownCloud by downloading it
directly from owncloud.org; however, the error remained unchanged.

Thanks in advance,
Clemens

gzip

Does the new httpd support gzip compression?
Planned?

Relaxing httpd.conf(5) list parsing

Via email:

At present httpd.conf(5) permits:

    ssl { certificate "/etc/ssl/my.crt", key "/etc/ssl/private/my.key" }

or

    ssl {   certificate "/etc/ssl/my.crt"
            key "/etc/ssl/private/my.key" }

but not:

    ssl {
            certificate "/etc/ssl/my.crt"
            key "/etc/ssl/private/my.key"
    }

Would you consider relaxing the parser to permit that?

Confusing parse.y variable behaviour inside redirects

Consider the following configuration file:

domain="yourdomain.com"
ext_if="egress"

server $domain {
        listen on $ext_if port 80
        block return 301 "https://$domain$REQUEST_URI"
}

server $domain {
        listen on $ext_if tls port 443
}

Currently it doesn't substitute $domain inside the URI string, so it ends up redirecting to the literal "$domain$REQUEST_URI" instead of "yourdomain.com$REQUEST_URI".

I've checked both OpenBSD 5.7 release version of httpd(8) and the current one.

Is this the intended behaviour?
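
A workaround sketch, assuming parse.y macros are simply not expanded inside quoted strings: write the host literally in the redirect target and keep the macro only where it is expanded, e.g. in server names and listen addresses:

server $domain {
        listen on $ext_if port 80
        block return 301 "https://yourdomain.com$REQUEST_URI"
}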

Support for Content-Length in FastCGI responses

A comment around line 590 of httpd/server_fcgi.c asks whether Content-Length should be handled properly, rather than stripped from the response in favour of chunked encoding. This Content-Length header is often needed by the application, and it can be useful for output to httpd's access.log (which will otherwise show length=0).

Persistent connection issue in FastCGI - with potential fix

When using "Connection: keep-alive" on a FastCGI request, the first request is handled correctly through FastCGI, and the response is set to chunked encoding and decodes correctly at the client.

However, subsequent requests on the same connection are showing the following problems:

  1. The request is sent through to the FastCGI handler using the headers from the original request, not the current request
  2. The response headers sent back to the client are started with a chunk size indicator, instead of the raw header block itself. It is behaving as if the response headers have already been read and processed, which is not the case here.

I suspect that one or more state variables are not being reset after the original request is handled, which messes up every subsequent request.

authentication (ldap, ...)

It would be nice to have authentication and controlled access with the most common features:

  • bsdauth
  • passwd file (classical)
  • ldap
  • whatever ?

(I believe it's already planned, but I really look forward to seeing this feature, so I'm filing an issue to follow it.)

Thanks for the great work already done.
