@@ -513,7 +513,7 @@ C<EV_WRITE> to C<POLLOUT | POLLERR | POLLHUP>.
=item C<EVBACKEND_EPOLL> (value 4, Linux)
-Use the linux-specific epoll(7) interface (for both pre- and post-2.6.9
+Use the Linux-specific epoll(7) interface (for both pre- and post-2.6.9
For few fds, this backend is a bit slower than poll and select, but
@@ -576,14 +576,14 @@ C<EVBACKEND_POLL>.
=item C<EVBACKEND_LINUXAIO> (value 64, Linux)
-Use the linux-specific linux aio (I<not> C<< aio(7) >> but C<<
+Use the Linux-specific Linux AIO (I<not> C<< aio(7) >> but C<<
io_submit(2) >>) event interface available in post-4.18 kernels (but libev
only tries to use it in 4.19+).
-This is another linux trainwreck of an event interface.
+This is another Linux train wreck of an event interface.
If this backend works for you (as of this writing, it was very
-experimental), it is the best event interface available on linux and might
+experimental), it is the best event interface available on Linux and might
be well worth enabling it - if it isn't available in your kernel this will
be detected and this backend will be skipped.
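(Editorial note for reviewers: the interface in question is the raw io_submit(2) family, not POSIX aio(7). As a rough sketch of how a poll request for it is laid out, assuming the <linux/aio_abi.h> definitions from a 4.18+ kernel — `prepare_poll_iocb` is a hypothetical helper, not a libev function:)

```c
/* Sketch only: how an IOCB_CMD_POLL request for io_submit(2) is laid out.
 * Assumes <linux/aio_abi.h> from 4.18+ kernel headers; prepare_poll_iocb
 * is a made-up name for illustration, not part of libev. */
#include <linux/aio_abi.h>
#include <poll.h>
#include <string.h>

static void
prepare_poll_iocb (struct iocb *io, int fd, unsigned events)
{
  memset (io, 0, sizeof *io);
  io->aio_lio_opcode = IOCB_CMD_POLL;
  io->aio_fildes     = fd;
  io->aio_buf        = events; /* the POLLIN/POLLOUT mask travels in aio_buf */
}
```

An array of pointers to such iocbs is then handed to io_submit(2); completions come back via io_getevents(2) or the user-space ring buffer.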
@@ -591,24 +591,24 @@ This backend can batch oneshot requests and supports a user-space ring
buffer to receive events. It also doesn't suffer from most of the design
problems of epoll (such as not being able to remove event sources from
the epoll set), and generally sounds too good to be true. Because, this
-being the linux kernel, of course it suffers from a whole new set of
+being the Linux kernel, of course it suffers from a whole new set of
limitations, forcing you to fall back to epoll, inheriting all its design
issues.
For one, it is not easily embeddable (but probably could be done using
an event fd at some extra overhead). It also is subject to a system wide
-limit that can be configured in F</proc/sys/fs/aio-max-nr>. If no aio
+limit that can be configured in F</proc/sys/fs/aio-max-nr>. If no AIO
requests are left, this backend will be skipped during initialisation, and
will switch to epoll when the loop is active.
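(Editorial note: the limit is just a number in procfs, so it is cheap to inspect. A minimal sketch — `aio_max_nr` is a made-up helper name; it returns -1 when the file cannot be read, e.g. on non-Linux systems:)

```c
/* Sketch: read the system-wide AIO request limit from procfs.
 * aio_max_nr is a hypothetical helper, not a libev function;
 * returns -1 if /proc/sys/fs/aio-max-nr cannot be read. */
#include <stdio.h>

static long
aio_max_nr (void)
{
  long limit = -1;
  FILE *f = fopen ("/proc/sys/fs/aio-max-nr", "r");

  if (f)
    {
      if (fscanf (f, "%ld", &limit) != 1)
        limit = -1;

      fclose (f);
    }

  return limit;
}
```

Raising the limit (as root) is a matter of writing a larger number back to the same file.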
Most problematic in practice, however, is that not all file descriptors
-work with it. For example, in linux 5.1, tcp sockets, pipes, event fds,
-files, F</dev/null> and a few others are supported, but ttys do not work
+work with it. For example, in Linux 5.1, TCP sockets, pipes, event fds,
+files, F</dev/null> and many others are supported, but ttys do not work
properly (a known bug that the kernel developers don't care about, see
L<https://lore.kernel.org/patchwork/patch/1047453/>), so this is not
(yet?) a generic event polling interface.
-Overall, it seems the linux developers just don't want it to have a
+Overall, it seems the Linux developers just don't want it to have a
generic event handling mechanism other than C<select> or C<poll>.
To work around all these problems, the current version of libev uses its
@@ -639,7 +639,7 @@ kernel is more efficient (which says nothing about its actual speed, of
course). While stopping, setting and starting an I/O watcher never
causes an extra system call as with C<EVBACKEND_EPOLL>, it still adds up to
two event changes per incident. Support for C<fork ()> is very bad (you
-might have to leak fd's on fork, but it's more sane than epoll) and it
+might have to leak fds on fork, but it's more sane than epoll) and it
drops fds silently in similarly hard-to-detect cases.
This backend usually performs well under most conditions.
@@ -58,7 +58,7 @@
* but at least the fallback can be slow, because these are
* exceptional cases, right?
* d) hmm, you have to tell the kernel the maximum number of watchers
-* you want to queue when initialiasing the aio context. but of
+* you want to queue when initialising the aio context. but of
* course the real limit is magically calculated in the kernel, and
* is often higher than we asked for. so we just have to destroy
* the aio context and re-create it a bit larger if we hit the limit.
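(Editorial note: the destroy-and-re-create dance from point (d) can be sketched with the raw syscalls, since glibc does not wrap io_setup(2)/io_destroy(2). `grow_ctx` below is illustrative only, not libev's actual code:)

```c
/* Sketch of point (d): destroy the aio context and re-create it with a
 * larger queue depth, backing off if the kernel refuses the new size.
 * my_io_setup/my_io_destroy/grow_ctx are illustrative names only. */
#include <linux/aio_abi.h>
#include <sys/syscall.h>
#include <unistd.h>

static long
my_io_setup (unsigned nr_events, aio_context_t *ctx)
{
  return syscall (SYS_io_setup, nr_events, ctx);
}

static long
my_io_destroy (aio_context_t ctx)
{
  return syscall (SYS_io_destroy, ctx);
}

static int
grow_ctx (aio_context_t *ctx, unsigned *nr)
{
  my_io_destroy (*ctx);

  /* double the depth, halving again until the kernel accepts */
  for (*nr *= 2; *nr; *nr /= 2)
    {
      *ctx = 0; /* must be zeroed before io_setup */

      if (my_io_setup (*nr, ctx) == 0)
        return 0;
    }

  return -1;
}
```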
@@ -70,18 +70,18 @@
* of event handling we have to switch to 100% epoll polling. and
* that better is as fast as normal epoll polling, so you practically
* have to use the normal epoll backend with all its quirks.
-* f) end result of this trainwreck: it inherits all the disadvantages
+* f) end result of this train wreck: it inherits all the disadvantages
* from epoll, while adding a number on its own. why even bother to use
* it? because if conditions are right and your fds are supported and you
* don't hit a limit, this backend is actually faster, doesn't gamble with
* your fds, batches watchers and events and doesn't require costly state
* recreates. well, until it does.
* g) all of this makes this backend use almost twice as much code as epoll.
-* which in turn uses twice as much code as poll. and thats not counting
+* which in turn uses twice as much code as poll. and that's not counting
* the fact that this backend also depends on the epoll backend, making
* it three times as much code as poll, or kqueue.
* h) bleah. why can't linux just do kqueue. sure kqueue is ugly, but by now
-* it's clear that whetaver linux comes up with is far, far, far worse.
+* it's clear that whatever linux comes up with is far, far, far worse.
#include <sys/time.h> /* actually linux/time.h, but we must assume they are compatible */
@@ -192,7 +192,7 @@ linuxaio_nr_events (EV_P)
-/* we use out own wrapper structure in acse we ever want to do something "clever" */
+/* we use our own wrapper structure in case we ever want to do something "clever" */
typedef struct aniocb
struct iocb io;
@@ -205,7 +205,7 @@ linuxaio_array_needsize_iocbp (ANIOCBP *base, int offset, int count)
-/* TODO: quite the overhead to allocate every iocb separately, maybe use our own alocator? */
+/* TODO: quite the overhead to allocate every iocb separately, maybe use our own allocator? */
ANIOCBP iocb = (ANIOCBP)ev_malloc (sizeof (*iocb));
/* full zero initialise is probably not required at the moment, but
@@ -240,7 +240,7 @@ linuxaio_modify (EV_P_ int fd, int oev, int nev)
if (iocb->io.aio_reqprio < 0)
/* we handed this fd over to epoll, so undo this first */
-/* we do it manually becvause the optimisations on epoll_modfy won't do us any good */
+/* we do it manually because the optimisations on epoll_modify won't do us any good */
epoll_ctl (backend_fd, EPOLL_CTL_DEL, fd, 0);
iocb->io.aio_reqprio = 0;
@@ -303,7 +303,7 @@ linuxaio_parse_events (EV_P_ struct io_event *ev, int nr)
-/* get any events from ringbuffer, return true if any were handled */
+/* get any events from ring buffer, return true if any were handled */
@@ -399,7 +399,7 @@ linuxaio_poll (EV_P_ ev_tstamp timeout)
/* io_submit might return less than the requested number of iocbs */
/* this is, afaics, only because of errors, but we go by the book and use a loop, */
-/* which allows us to pinpoint the errornous iocb */
+/* which allows us to pinpoint the erroneous iocb */
for (submitted = 0; submitted < linuxaio_submitcnt; )
int res = evsys_io_submit (linuxaio_ctx, linuxaio_submitcnt - submitted, linuxaio_submits + submitted);
@@ -423,7 +423,7 @@ linuxaio_poll (EV_P_ ev_tstamp timeout)
else if (errno == EAGAIN)
/* This happens when the ring buffer is full, or some other shit we
-* dont' know and isn't documented. Most likely because we have too
+* don't know and isn't documented. Most likely because we have too
* many requests and linux aio can't be assed to handle them.
* In this case, we try to allocate a larger ring buffer, freeing
* ours first. This might fail, in which case we have to fall back to 100%
@@ -482,7 +482,7 @@ linuxaio_init (EV_P_ int flags)
/* would be great to have a nice test for IOCB_CMD_POLL instead */
/* also: test some semi-common fd types, such as files and ttys in recommended_backends */
-/* 4.18 introduced IOCB_CMD_POLL, 4.19 made epoll work */
+/* 4.18 introduced IOCB_CMD_POLL, 4.19 made epoll work, and we need that */
if (ev_linux_version () < 0x041300)
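(Editorial note: the comparison value packs major/minor/micro into one byte each, so 0x041300 encodes 4.19.0. A sketch of that encoding — `linux_version_code` is a made-up name; libev's own ev_linux_version derives the value from uname(2):)

```c
/* Sketch: pack a kernel release string into the 0xMMmmpp form used by
 * checks like `ev_linux_version () < 0x041300` (0x041300 == 4.19.0).
 * linux_version_code is a hypothetical helper, not part of libev;
 * version components are assumed to fit in one byte each. */
#include <stdio.h>

static unsigned int
linux_version_code (const char *release) /* e.g. "4.19.0-21-generic" */
{
  unsigned int major = 0, minor = 0, micro = 0;

  sscanf (release, "%u.%u.%u", &major, &minor, &micro);

  return (major << 16) | (minor << 8) | micro;
}
```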