lighttpd 1.4.x https://www.lighttpd.net/

510 lines | 18 KiB

[core] open fd when appending file to cq (fixes #2655)

http_chunk_append_file() opens fd when appending file to chunkqueue. Defers calculation of content length until response is finished. This reduces race conditions pertaining to stat() and then (later) open(), when the result of the stat() was used for Content-Length or to generate chunked headers.

Note: this does not change how lighttpd handles files that are modified in-place by another process after having been opened by lighttpd -- don't do that. This *does* improve handling of files that are frequently modified via a temporary file and then atomically renamed into place.

mod_fastcgi has been modified to use http_chunk_append_file_range() with X-Sendfile2 and will open the target file multiple times if there are multiple ranges.

Note: (future todo) not implemented for chunk.[ch] interfaces used by range requests in mod_staticfile or by mod_ssi. Those uses could lead to too many open fds. For mod_staticfile, limits should be put in place for the max number of ranges accepted by mod_staticfile. For mod_ssi, limits would need to be placed on the maximum number of includes, and for a primary SSI file split across lots of SSI directives, either the pieces should be copied, or perhaps chunk.h could be extended to allow an open fd to be shared across multiple chunks. Doing either of these would improve the performance of SSI since it would replace many file opens on the pieces of the SSI file around the SSI directives.

x-ref:
  "Serving a file that is getting updated can cause an empty response or incorrect content-length error"
  https://redmine.lighttpd.net/issues/2655

github: Closes #49
5 years ago
fix buffer, chunk and http_chunk API

* remove unused structs and functions (buffer_array, read_buffer)
* change return type from int to void for many functions, as the return value (indicating error/success) was never checked, and the function would only fail on programming errors and not on invalid input; changed functions to use force_assert instead of returning an error.
* all "len" parameters now are the real size of the memory to be read. the length of strings is given always without the terminating 0.
* the "buffer" struct still counts the terminating 0 in ->used; provide buffer_string_length() to get the length of a string in a buffer. unset config "strings" have used == 0, which is used in some places to distinguish unset values from "" (empty string) values.
* most buffer usages should now use it as string container.
* optimise some buffer copying by "moving" data to other buffers
* use (u)intmax_t for generic int-to-string functions
* remove unused enum values: UNUSED_CHUNK, ENCODING_UNSET
* converted BUFFER_APPEND_SLASH to inline function (no macro feature needed)
* refactor: create chunkqueue_steal: moving (partial) chunks into another queue
* http_chunk: added separate function to terminate chunked body instead of magic handling in http_chunk_append_mem(). http_chunk_append_* now handle empty chunks, and never terminate the chunked body.

From: Stefan Bühler <stbuehler@web.de>

git-svn-id: svn://svn.lighttpd.net/lighttpd/branches/lighttpd-1.4.x@2975 152afb58-edef-0310-8abb-c4023f1b3aa9
7 years ago
/*
 * http_chunk - append response to chunkqueue, possibly in "chunked" encoding
 *
 * Fully-rewritten from original
 * Copyright(c) 2019 Glenn Strauss gstrauss()gluelogic.com  All rights reserved
 * License: BSD 3-clause (same as lighttpd)
 */
#include "first.h"

#include "http_chunk.h"
#include "base.h"
#include "chunk.h"
#include "stat_cache.h"
#include "fdevent.h"
#include "log.h"

#include <sys/types.h>
#include <sys/stat.h>

#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
static void http_chunk_len_append (chunkqueue * const cq, uintmax_t len) {
    char buf[24]; /* 64-bit (8 bytes) is 16 hex chars (+2 \r\n, +1 \0 = 19) */
  #if 0
    buffer b = { buf, 0, sizeof(buf) };
    buffer_append_uint_hex(&b, len);
    buffer_append_string_len(&b, CONST_STR_LEN("\r\n"));
    chunkqueue_append_mem(cq, b.ptr, b.used-1);
  #else
    int i = (int)(sizeof(buf));
    buf[--i] = '\n';
    buf[--i] = '\r';
    do { buf[--i] = "0123456789abcdef"[len & 0x0F]; } while (len >>= 4);
    chunkqueue_append_mem(cq, buf+i, sizeof(buf)-i);
  #endif
}

static int http_chunk_len_append_tempfile (chunkqueue * const cq, uintmax_t len, log_error_st * const errh) {
    char buf[24]; /* 64-bit (8 bytes) is 16 hex chars (+2 \r\n, +1 \0 = 19) */
  #if 0
    buffer b = { buf, 0, sizeof(buf) };
    buffer_append_uint_hex(&b, len);
    buffer_append_string_len(&b, CONST_STR_LEN("\r\n"));
    return chunkqueue_append_mem_to_tempfile(cq, b.ptr, b.used-1, errh);
  #else
    int i = (int)(sizeof(buf));
    buf[--i] = '\n';
    buf[--i] = '\r';
    do { buf[--i] = "0123456789abcdef"[len & 0x0F]; } while (len >>= 4);
    return chunkqueue_append_mem_to_tempfile(cq, buf+i, sizeof(buf)-i, errh);
  #endif
}
static int http_chunk_append_file_open_fstat (const request_st * const r, const buffer * const fn, struct stat * const st) {
    return
      (r->conf.follow_symlink
       || !stat_cache_path_contains_symlink(fn, r->conf.errh))
        ? stat_cache_open_rdonly_fstat(fn, st, r->conf.follow_symlink)
        : -1;
}
static int http_chunk_append_read_fd_range (request_st * const r, const buffer * const fn, const int fd, off_t offset, off_t len) {
    /* note: this routine should not be used for range requests
     * unless the total size of ranges requested is small */
    /* note: future: could read into existing MEM_CHUNK in cq->last if
     * there is sufficient space, but would need to adjust for existing
     * offset in for cq->bytes_in in chunkqueue_append_buffer_commit() */
    UNUSED(fn);
    chunkqueue * const cq = &r->write_queue;
    if (r->resp_send_chunked)
        http_chunk_len_append(cq, (uintmax_t)len);
    if (0 != offset && -1 == lseek(fd, offset, SEEK_SET)) return -1;
    buffer * const b = chunkqueue_append_buffer_open_sz(cq, len+2+1);
    ssize_t rd;
    offset = 0;
    do {
        rd = read(fd, b->ptr+offset, len-offset);
    } while (rd > 0 ? (offset += rd, len -= rd) : errno == EINTR);
    buffer_commit(b, offset);
    if (r->resp_send_chunked)
        buffer_append_string_len(b, CONST_STR_LEN("\r\n"));
    chunkqueue_append_buffer_commit(cq);
    return (rd >= 0) ? 0 : -1;
}

void http_chunk_append_file_ref_range (request_st * const r, stat_cache_entry * const sce, const off_t offset, const off_t len) {
    chunkqueue * const cq = &r->write_queue;
    if (r->resp_send_chunked)
        http_chunk_len_append(cq, (uintmax_t)len);
    const buffer * const fn = &sce->name;
    const int fd = sce->fd;
    chunkqueue_append_file_fd(cq, fn, fd, offset, len);
    if (fd >= 0) {
        chunk * const d = cq->last;
        d->file.ref = sce;
        d->file.refchg = stat_cache_entry_refchg;
        stat_cache_entry_refchg(sce, 1);
    }
    if (r->resp_send_chunked)
        chunkqueue_append_mem(cq, CONST_STR_LEN("\r\n"));
}

void http_chunk_append_file_fd_range (request_st * const r, const buffer * const fn, const int fd, const off_t offset, const off_t len) {
    chunkqueue * const cq = &r->write_queue;
    if (r->resp_send_chunked)
        http_chunk_len_append(cq, (uintmax_t)len);
    chunkqueue_append_file_fd(cq, fn, fd, offset, len);
    if (r->resp_send_chunked)
        chunkqueue_append_mem(cq, CONST_STR_LEN("\r\n"));
}

int http_chunk_append_file_range (request_st * const r, const buffer * const fn, const off_t offset, off_t len) {
    struct stat st;
    const int fd = http_chunk_append_file_open_fstat(r, fn, &st);
    if (fd < 0) return -1;

    if (-1 == len) {
        if (offset >= st.st_size) {
            close(fd);
            return (offset == st.st_size) ? 0 : -1;
        }
        len = st.st_size - offset;
    }
    else if (st.st_size - offset < len) {
        close(fd);
        return -1;
    }

    http_chunk_append_file_fd_range(r, fn, fd, offset, len);
    return 0;
}

int http_chunk_append_file (request_st * const r, const buffer * const fn) {
    struct stat st;
    const int fd = http_chunk_append_file_open_fstat(r, fn, &st);
    if (fd < 0) return -1;
    http_chunk_append_file_fd(r, fn, fd, st.st_size);
    return 0;
}

int http_chunk_append_file_fd (request_st * const r, const buffer * const fn, const int fd, const off_t sz) {
    if (sz > 32768 || !r->resp_send_chunked) {
        http_chunk_append_file_fd_range(r, fn, fd, 0, sz);
        return 0;
    }

    /*(read small files into memory)*/
    int rc = (0 != sz) ? http_chunk_append_read_fd_range(r, fn, fd, 0, sz) : 0;
    close(fd);
    return rc;
}

int http_chunk_append_file_ref (request_st * const r, stat_cache_entry * const sce) {
    const off_t sz = sce->st.st_size;
    if (sz > 32768 || !r->resp_send_chunked) {
        http_chunk_append_file_ref_range(r, sce, 0, sz);
        return 0;
    }

    /*(read small files into memory)*/
    const buffer * const fn = &sce->name;
    const int fd = sce->fd;
    int rc = (0 != sz) ? http_chunk_append_read_fd_range(r, fn, fd, 0, sz) : 0;
    return rc;
}

static int http_chunk_append_to_tempfile (request_st * const r, const char * const mem, const size_t len) {
    chunkqueue * const cq = &r->write_queue;
    log_error_st * const errh = r->conf.errh;

    if (r->resp_send_chunked
        && 0 != http_chunk_len_append_tempfile(cq, len, errh))
        return -1;

    if (0 != chunkqueue_append_mem_to_tempfile(cq, mem, len, errh))
        return -1;

    if (r->resp_send_chunked
        && 0 != chunkqueue_append_mem_to_tempfile(cq, CONST_STR_LEN("\r\n"), errh))
        return -1;

    return 0;
}

static int http_chunk_append_cq_to_tempfile (request_st * const r, chunkqueue * const src, const size_t len) {
    chunkqueue * const cq = &r->write_queue;
    log_error_st * const errh = r->conf.errh;

    if (r->resp_send_chunked
        && 0 != http_chunk_len_append_tempfile(cq, len, errh))
        return -1;

    if (0 != chunkqueue_steal_with_tempfiles(cq, src, len, errh))
        return -1;

    if (r->resp_send_chunked
        && 0 != chunkqueue_append_mem_to_tempfile(cq, CONST_STR_LEN("\r\n"), errh))
        return -1;

    return 0;
}

__attribute_pure__
static int http_chunk_uses_tempfile (const request_st * const r, const chunkqueue * const cq, const size_t len) {
    /* current usage does not append_mem or append_buffer after appending
     * file, so not checking if users of this interface have appended large
     * (references to) files to chunkqueue, which would not be in memory
     * (but included in calculation for whether or not to use temp file) */

    /*(allow slightly larger mem use if FDEVENT_STREAM_RESPONSE_BUFMIN
     * to reduce creation of temp files when backend producer will be
     * blocked until more data is sent to network to client)*/

    const chunk * const c = cq->last;
    return
      ((c && c->type == FILE_CHUNK && c->file.is_temp)
       || chunkqueue_length(cq) + len
          > ((r->conf.stream_response_body & FDEVENT_STREAM_RESPONSE_BUFMIN)
              ? 128*1024
              : 64*1024));
}
int http_chunk_append_buffer (request_st * const r, buffer * const mem) {
    size_t len = buffer_string_length(mem);
    if (0 == len) return 0;

    chunkqueue * const cq = &r->write_queue;

    if (http_chunk_uses_tempfile(r, cq, len))
        return http_chunk_append_to_tempfile(r, mem->ptr, len);

    if (r->resp_send_chunked)
        http_chunk_len_append(cq, len);

    /*(chunkqueue_append_buffer() might steal buffer contents)*/
    chunkqueue_append_buffer(cq, mem);

    if (r->resp_send_chunked)
        chunkqueue_append_mem(cq, CONST_STR_LEN("\r\n"));

    return 0;
}

int http_chunk_append_mem (request_st * const r, const char * const mem, const size_t len) {
    if (0 == len) return 0;
    force_assert(NULL != mem);

    chunkqueue * const cq = &r->write_queue;

    if (http_chunk_uses_tempfile(r, cq, len))
        return http_chunk_append_to_tempfile(r, mem, len);

    if (r->resp_send_chunked)
        http_chunk_len_append(cq, len);

    chunkqueue_append_mem(cq, mem, len);

    if (r->resp_send_chunked)
        chunkqueue_append_mem(cq, CONST_STR_LEN("\r\n"));

    return 0;
}

int http_chunk_transfer_cqlen (request_st * const r, chunkqueue * const src, const size_t len) {
    if (0 == len) return 0;

    chunkqueue * const cq = &r->write_queue;

    if (http_chunk_uses_tempfile(r, cq, len))
        return http_chunk_append_cq_to_tempfile(r, src, len);

    if (r->resp_send_chunked)
        http_chunk_len_append(cq, len);

    chunkqueue_steal(cq, src, len);

    if (r->resp_send_chunked)
        chunkqueue_append_mem(cq, CONST_STR_LEN("\r\n"));

    return 0;
}

void http_chunk_close (request_st * const r) {
    if (!r->resp_send_chunked) return;

    if (r->gw_dechunk && !buffer_string_is_empty(&r->gw_dechunk->b)) {
        /* XXX: trailers passed through; no sanity check currently done */
        chunkqueue_append_buffer(&r->write_queue, &r->gw_dechunk->b);
        if (!r->gw_dechunk->done)
            r->keep_alive = 0;
    }
    else
        chunkqueue_append_mem(&r->write_queue, CONST_STR_LEN("0\r\n\r\n"));
}

static int
http_chunk_decode_append_data (request_st * const r, const char *mem, off_t len)
{
    /*(silently discard data, if any, after final \r\n)*/
    if (r->gw_dechunk->done) return 0;

    buffer * const h = &r->gw_dechunk->b;
    off_t te_chunked = r->gw_dechunk->gw_chunked;
    while (len) {
        if (0 == te_chunked) {
            const char *p = strchr(mem, '\n');
            /*(likely better ways to handle chunked header crossing chunkqueue
             * chunks, but this situation is not expected to occur frequently)*/
            if (NULL == p) { /* incomplete HTTP chunked header line */
                uint32_t hlen = buffer_string_length(h);
                if ((off_t)(1024 - hlen) < len) {
                    log_error(r->conf.errh, __FILE__, __LINE__,
                              "chunked header line too long");
                    return -1;
                }
                buffer_append_string_len(h, mem, len);
                break;
            }
            off_t hsz = ++p - mem;
            unsigned char *s = (unsigned char *)mem;
            if (!buffer_string_is_empty(h)) {
                uint32_t hlen = buffer_string_length(h);
                if (NULL == memchr(h->ptr, '\n', hlen)) {
                    if ((off_t)(1024 - hlen) < hsz) {
                        log_error(r->conf.errh, __FILE__, __LINE__,
                                  "chunked header line too long");
                        return -1;
                    }
                    buffer_append_string_len(h, mem, hsz);
                }
                s = (unsigned char *)h->ptr;
            }

            for (unsigned char u; (u=(unsigned char)hex2int(*s))!=0xFF; ++s) {
                if (te_chunked > (off_t)(1uLL<<(8*sizeof(off_t)-5))-1) {
                    log_error(r->conf.errh, __FILE__, __LINE__,
                              "chunked data size too large");
                    return -1;
                }
                te_chunked <<= 4;
                te_chunked |= u;
            }
            if ((char *)s == mem || (char *)s == h->ptr) return -1; /*(no hex)*/
            while (*s == ' ' || *s == '\t') ++s;
            if (*s != '\r' && *s != ';') { /*(not strictly checking \r\n)*/
                log_error(r->conf.errh, __FILE__, __LINE__,
                          "chunked header invalid chars");
                return -1;
            }

            if (0 == te_chunked) {
                /* do not consume final chunked header until
                 * (optional) trailers received along with
                 * request-ending blank line "\r\n" */
                if (len - hsz == 2 && p[0] == '\r' && p[1] == '\n') {
                    /* common case with no trailers; final \r\n received */
                    /*(silently discard data, if any, after final \r\n)*/
                  #if 0 /*(avoid allocation for common case; users must check)*/
                    if (buffer_is_empty(h))
                        buffer_copy_string_len(h, CONST_STR_LEN("0\r\n\r\n"));
                  #else
                    buffer_clear(h);
                  #endif
                    r->gw_dechunk->done = r->http_status;
                    break;
                }

                /* accumulate trailers and check for end of trailers */
                /* XXX: reuse r->conf.max_request_field_size
                 *      or have separate limit? */
                uint32_t hlen = buffer_string_length(h);
                if ((off_t)(r->conf.max_request_field_size - hlen) < hsz) {
                    /* truncate excessively long trailers */
                    r->gw_dechunk->done = r->http_status;
                    hsz = (off_t)(r->conf.max_request_field_size - hlen);
                    buffer_append_string_len(h, mem, hsz);
                    p = strrchr(h->ptr, '\n');
                    if (NULL != p)
                        buffer_string_set_length(h, p + 1 - h->ptr);
                    else { /*(should not happen)*/
                        buffer_clear(h);
                        buffer_append_string_len(h, CONST_STR_LEN("0\r\n"));
                    }
                    buffer_append_string_len(h, CONST_STR_LEN("\r\n"));
                    break;
                }
                buffer_append_string_len(h, mem, hsz);
                hlen += (uint32_t)hsz; /* uint32_t fits in (buffer *) */
                if (hlen < 4) break;
                p = h->ptr + hlen - 4;
                if (p[0]=='\r' && p[1]=='\n' && p[2]=='\r' && p[3]=='\n')
                    r->gw_dechunk->done = r->http_status;
                else if ((p = strstr(h->ptr, "\r\n\r\n"))) {
                    r->gw_dechunk->done = r->http_status;
                    /*(silently discard data, if any, after final \r\n)*/
                    buffer_string_set_length(h, (uint32_t)(p+4-h->ptr));
                }
                break;
            }

            mem += hsz;
            len -= hsz;

            if (te_chunked > (off_t)(1uLL<<(8*sizeof(off_t)-5))-1-2) {
                log_error(r->conf.errh, __FILE__, __LINE__,
                          "chunked data size too large");
                return -1;
            }
            te_chunked += 2; /*(for trailing "\r\n" after chunked data)*/
        }

        if (te_chunked >= 2) {
            off_t clen = te_chunked - 2;
            if (clen > len) clen = len;
            if (0 != http_chunk_append_mem(r, mem, clen))
                return -1;
            mem += clen;
            len -= clen;
            te_chunked -= clen;
            if (te_chunked == 2) {
                if (len >= 2) {
                    if (mem[0] != '\r' || mem[1] != '\n') return -1;
                    mem += 2;
                    len -= 2;
                    te_chunked = 0;
                }
                else if (len == 1 && mem[0] != '\r') return -1;
            }
        }
        else if (1 == te_chunked) {
            /* finish reading chunk block "\r\n" */
            if (mem[0] != '\n') return -1;
            ++mem;
            --len;
            te_chunked = 0;
        }
    }
    r->gw_dechunk->gw_chunked = te_chunked;
    return 0;
}
int http_chunk_decode_append_buffer (request_st * const r, buffer * const mem)
{
    /*(called by funcs receiving data from backends, which might be chunked)*/
    /*(separate from http_chunk_append_buffer() called by numerous others)*/
    if (!r->resp_decode_chunked)
        return http_chunk_append_buffer(r, mem);

    /* no need to decode chunked to immediately re-encode chunked,
     * though would be more robust to still validate chunk lengths sent
     * (or else we might wait for keep-alive while client waits for final chunk)
     * Before finishing response/stream, we *are not* checking if we got final
     * chunk of chunked encoding from backend.  If we were, we could consider
     * closing HTTP/1.0 and HTTP/1.1 connections (no keep-alive), and in HTTP/2
     * we could consider sending RST_STREAM error.  http_chunk_close() would
     * only handle case of streaming chunked to client */
    if (r->resp_send_chunked) {
        r->resp_send_chunked = 0;
        int rc = http_chunk_append_buffer(r, mem); /* might append to tmpfile */
        r->resp_send_chunked = 1;
        return rc;
    }

    /* might avoid copy by transferring buffer if buffer is all data that is
     * part of large chunked block, but choosing to *not* expand that out here*/
    return http_chunk_decode_append_data(r, CONST_BUF_LEN(mem));
}

int http_chunk_decode_append_mem (request_st * const r, const char * const mem, const size_t len)
{
    /*(called by funcs receiving data from backends, which might be chunked)*/
    /*(separate from http_chunk_append_mem() called by numerous others)*/
    if (!r->resp_decode_chunked)
        return http_chunk_append_mem(r, mem, len);

    /* no need to decode chunked to immediately re-encode chunked,
     * though would be more robust to still validate chunk lengths sent
     * (or else we might wait for keep-alive while client waits for final chunk)
     * Before finishing response/stream, we *are not* checking if we got final
     * chunk of chunked encoding from backend.  If we were, we could consider
     * closing HTTP/1.0 and HTTP/1.1 connections (no keep-alive), and in HTTP/2
     * we could consider sending RST_STREAM error.  http_chunk_close() would
     * only handle case of streaming chunked to client */
    if (r->resp_send_chunked) {
        r->resp_send_chunked = 0;
        int rc = http_chunk_append_mem(r, mem, len); /*might append to tmpfile*/
        r->resp_send_chunked = 1;
        return rc;
    }

    return http_chunk_decode_append_data(r, mem, (off_t)len);
}