chunkqueue_peek_data() experiment to mmap temporary files
(currently disabled in code: it made no measurable difference in
throughput during a specific microbenchmark load test, though it did
reduce CPU use by ~10% in the same microbenchmark)
Enabling this may cause large spikes in the RSS memory usage reported
by the system, due to the read-only memory maps of the temporary files,
but this is nothing to be alarmed about: since the maps are file-backed
and read-only, they add minimally to memory pressure.
allow up to 32k of data frames per stream per round
(previously limited to a single max_frame_size (default 16k))
For 8 streams, 32k*8 is 256k, which is the current lighttpd
MAX_WRITE_LIMIT, so each stream still gets a chance to write data
(unless the write queue was not emptied on the previous attempt,
which reduces the add limit this round)
cap size of data framed for HTTP/2 response until more data sent to
client
make sure to reschedule the connection in the job queue if max_bytes
was reached and the entire con->write_queue was then flushed to the
network; otherwise, the request may not get rescheduled (and will then
time out) if the request is completed from the backend and there is
no other traffic or streams to trigger connection processing.
(check for con->write_queue > 8k rather than empty from the last round,
since small frames such as the connection preface may have been added
this round while processing con->read_queue)
ignore SIGINT, SIGUSR1 in fcgi-responder if HAVE_SIGNAL is defined
(must be defined separately since config.h is not included)
Not required for the test framework. Added as an example in the code,
e.g. if the code is reused with lighttpd and graceful shutdown or restart.
(backend will be sent SIGTERM when server is ready to restart)
disable streaming the response while processing "authorizer" mode,
until the 200 OK "authorizer" response from the backend is complete
(thx jefftharris)
x-ref:
"FastCGI authorizer hang with server.stream-response-body"
https://redmine.lighttpd.net/boards/2/topics/9969
"FastCGI authorizer hang with server.stream-response-body"
https://redmine.lighttpd.net/issues/3106
r->gw_dechunk->b is not a candidate for using generic chunk buffers.
Chunked headers are generally smaller and fit in the default 64-byte
allocation. Also, lighttpd limits the chunked header to 1k.
Avoid unneeded optimization since HTTP/1.1 use is likely to diminish
over time in favor of HTTP/2 or HTTP/3 or later.
fix edge case for initial chunked data
(bug introduced in lighttpd 1.4.56)
If a chunked header is received without data before the response headers
are sent, then the initial chunked data might be sent to the client
without its chunked header, if the client made an HTTP/1.1 request, the
response is Transfer-Encoding: chunked, and lighttpd is configured to
stream the response (non-zero value for server.stream-response-body).
This might occur if the lighttpd backend is connected via a unix domain
socket and the initial chunk is large and coming from a temporary file.
It may be sent in a separate packet since lighttpd does not use TCP_CORK
on unix domain sockets.
x-ref:
"Failure on second request in http proxy backend"
https://redmine.lighttpd.net/issues/3046
"Socket errors after update to version 1.4.56"
https://redmine.lighttpd.net/issues/3044
splice() data from backends to tempfiles (where splice() is available);
reduce copying data to userspace when writing data to tempfiles
Note: splice() on Linux returns EINVAL if the target file has O_APPEND
set, so lighttpd uses pwrite() (where available) when writing to
tempfiles (instead of lseek() + write(), or O_APPEND and write())
Note: under _WIN32 there is a serious limitation in the Windows APIs:
select() and WSAPoll() operate only on sockets (not pipes)
(this directly affects mod_cgi; not currently handled)
remove redundant checks for tempfile chunk reuse
c->file.is_temp is only set if c->type == FILE_CHUNK is also true
The test for (0 == c->offset) is historical. Before the temporary files
were opened O_APPEND (or written to using pwrite()), the file offset may
have changed via lseek() if lighttpd had started reading the file to
send to the client. To avoid this, the (0 == c->offset) check was used
as a quick check to avoid continuing to write to a temporary file that
lighttpd had begun to read.
rename chunkqueue_get_append_tempfile()
-> chunkqueue_get_append_newtempfile()
pull some code from chunkqueue_append_mem_to_tempfile()
into smaller func for (new func) chunkqueue_get_append_tempfile(),
which might call into chunkqueue_get_append_newtempfile()
pull some code from chunkqueue_append_mem_to_tempfile()
into smaller func chunkqueue_append_tempfile_err()
to handle write errors with respect to removing empty chunk
and stepping to next configured tempdir
server.feature-flags += ("server.errorlog-high-precision" => "enable")
Note: if using syslog() for errorlog, modern syslog implementations are
configured separately (by an admin) for high precision timestamps;
server.feature-flags has no effect on syslog-generated timestamps
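A lighttpd.conf sketch of the flag as given above:

```
# enable high precision timestamps in the errorlog
server.feature-flags += ("server.errorlog-high-precision" => "enable")
# no effect when the errorlog goes to syslog(); configure syslog
# timestamp precision separately
```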
restructure some of log.c into smaller internal routines
keep a file-scoped global log_stderrh to write to STDERR_FILENO
so that an errh handle is always available for logging errors
fix missing space between timestamp and filename in errorlog output
(missing space in lighttpd 1.4.58 and lighttpd 1.4.59) (fixes #3105)
x-ref:
"missing ( in log lines from mod_auth"
https://redmine.lighttpd.net/issues/3105
separate internal control for backend max_per_read
When not streaming, large reads are flushed to temp files on disk, so
attempt to read enough to empty the kernel socket bufs
(e.g. MAX_READ_LIMIT 256k).
When streaming, use a smaller buffer to help reduce memory usage.
When writing to sockets (or pipes), attempt to fill the kernel socket
bufs (e.g. MAX_WRITE_LIMIT 256k).
file names tend to be much shorter than chunk_buf_sz,
so using a separate pool saves memory for large request and
response bodies where many temporary files are collected
HTTP/2 send GOAWAY soon after client timeout, before potentially
reading new stream requests, which will then have to be reset.
x-ref:
"Chrome gives random net::ERR_HTTP2_PROTOCOL_ERROR"
https://redmine.lighttpd.net/issues/3102
default backend "connect-timeout" to 8 seconds
Though this is a behavior change where there previously was no
timeout, it is configurable by the lighttpd.conf admin, and having a
default connection timeout of a fairly large value (8 seconds) puts
a (default) limit on resource usage while waiting for socket connect().
x-ref:
"sockets disabled, out-of-fds with proxy module"
https://redmine.lighttpd.net/issues/3086
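A lighttpd.conf sketch of overriding the default described above; the mod_proxy context and host/port values are illustrative assumptions:

```
proxy.server = ( "" => ((
  "host" => "127.0.0.1",
  "port" => 10000,
  "connect-timeout" => 8    # seconds; previously no timeout
)))
```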
HTTP/2 send GOAWAY soon after keep-alive timeout, before potentially
reading new stream requests, which will then have to be reset.
x-ref:
"Chrome gives random net::ERR_HTTP2_PROTOCOL_ERROR"
https://redmine.lighttpd.net/issues/3102
reduce oversized memory allocations when reading from backends:
avoid an extra power-2 allocation for 1 byte ('\0') when the amount
of data available to read is exactly a power of 2
(detect if client erroneously reuses stream id for a different request)
x-ref:
"Chrome gives random net::ERR_HTTP2_PROTOCOL_ERROR"
https://redmine.lighttpd.net/issues/3102
refuse excess streams only if they would block DATA frames for
active streams
(for excess streams received on initial connect, prior to receiving
SETTINGS ACK from client)
(thx flynn)
x-ref:
"Random TLS errors on established connections"
https://redmine.lighttpd.net/issues/3100
"Chrome 92, HTTP/2, fcgi, mutiple puts no response"
https://redmine.lighttpd.net/issues/3093
use shared temp buffer for preparing error log entries
(each error log entry is flushed to error log;
there is no persistent data buffering for error logs)
prefer per-request r->tmp_buf to per-module p->tmp_buf
to marginally increase buf reuse during each request.
(currently, r->tmp_buf == srv->tmp_buf)
(avoid some persistent memory allocations per-module,
as those are not currently cleared/released periodically)
(thx flynn)
clear buffer after backend dechunk if not sending chunked to client
x-ref:
"Memory fragmentation with HTTP/2 enabled"
https://redmine.lighttpd.net/issues/3084