|
|
|
@ -1,4 +1,4 @@
|
|
|
|
|
.\" Automatically generated by Pod::Man 2.22 (Pod::Simple 3.07) |
|
|
|
|
.\" Automatically generated by Pod::Man 2.23 (Pod::Simple 3.14) |
|
|
|
|
.\" |
|
|
|
|
.\" Standard preamble: |
|
|
|
|
.\" ======================================================================== |
|
|
|
@ -124,7 +124,7 @@
|
|
|
|
|
.\" ======================================================================== |
|
|
|
|
.\" |
|
|
|
|
.IX Title "LIBEV 3" |
|
|
|
|
.TH LIBEV 3 "2011-02-16" "libev-4.04" "libev - high performance full featured event loop" |
|
|
|
|
.TH LIBEV 3 "2012-02-04" "libev-4.11" "libev - high performance full featured event loop" |
|
|
|
|
.\" For nroff, turn off justification. Always turn off hyphenation; it makes |
|
|
|
|
.\" way too many mistakes in technical documents. |
|
|
|
|
.if n .ad l |
|
|
|
@ -246,7 +246,7 @@ loop mechanism itself (\f(CW\*(C`ev_idle\*(C'\fR, \f(CW\*(C`ev_embed\*(C'\fR, \f
|
|
|
|
|
limited support for fork events (\f(CW\*(C`ev_fork\*(C'\fR). |
|
|
|
|
.PP |
|
|
|
|
It also is quite fast (see this |
|
|
|
|
<benchmark> comparing it to libevent |
|
|
|
|
benchmark <http://libev.schmorp.de/bench.html> comparing it to libevent |
|
|
|
|
for example). |
|
|
|
|
.SS "\s-1CONVENTIONS\s0" |
|
|
|
|
.IX Subsection "CONVENTIONS" |
|
|
|
@ -296,12 +296,18 @@ library in any way.
|
|
|
|
|
Returns the current time as libev would use it. Please note that the |
|
|
|
|
\&\f(CW\*(C`ev_now\*(C'\fR function is usually faster and also often returns the timestamp |
|
|
|
|
you actually want to know. Also interesting is the combination of |
|
|
|
|
\&\f(CW\*(C`ev_update_now\*(C'\fR and \f(CW\*(C`ev_now\*(C'\fR. |
|
|
|
|
\&\f(CW\*(C`ev_now_update\*(C'\fR and \f(CW\*(C`ev_now\*(C'\fR. |
|
|
|
|
.IP "ev_sleep (ev_tstamp interval)" 4 |
|
|
|
|
.IX Item "ev_sleep (ev_tstamp interval)" |
|
|
|
|
Sleep for the given interval: The current thread will be blocked until |
|
|
|
|
either it is interrupted or the given time interval has passed. Basically |
|
|
|
|
this is a sub-second-resolution \f(CW\*(C`sleep ()\*(C'\fR. |
|
|
|
|
Sleep for the given interval: The current thread will be blocked |
|
|
|
|
until either it is interrupted or the given time interval has |
|
|
|
|
passed (approximately \- it might return a bit earlier even if not |
|
|
|
|
interrupted). Returns immediately if \f(CW\*(C`interval <= 0\*(C'\fR. |
|
|
|
|
.Sp |
|
|
|
|
Basically this is a sub-second-resolution \f(CW\*(C`sleep ()\*(C'\fR. |
|
|
|
|
.Sp |
|
|
|
|
The range of the \f(CW\*(C`interval\*(C'\fR is limited \- libev only guarantees to work |
|
|
|
|
with sleep times of up to one day (\f(CW\*(C`interval <= 86400\*(C'\fR). |
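.Sp
A minimal sketch of using it as a portable sub-second retry delay (the
\&\f(CW\*(C`try_connect\*(C'\fR helper is made up for illustration):
.Sp
.Vb 3
\& // back off for roughly a quarter second between attempts
\& while (!try_connect ())
\&   ev_sleep (0.25);
.Ve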
|
|
|
|
.IP "int ev_version_major ()" 4 |
|
|
|
|
.IX Item "int ev_version_major ()" |
|
|
|
|
.PD 0 |
|
|
|
@ -555,7 +561,7 @@ example) that can't properly initialise their signal masks.
|
|
|
|
|
.el .IP "\f(CWEVFLAG_NOSIGMASK\fR" 4 |
|
|
|
|
.IX Item "EVFLAG_NOSIGMASK" |
|
|
|
|
When this flag is specified, then libev will avoid modifying the signal
|
|
|
|
mask. Specifically, this means you ahve to make sure signals are unblocked |
|
|
|
|
mask. Specifically, this means you have to make sure signals are unblocked |
|
|
|
|
when you want to receive them. |
|
|
|
|
.Sp |
|
|
|
|
This behaviour is useful when you want to do your own signal handling, or |
|
|
|
@ -603,10 +609,10 @@ This backend maps \f(CW\*(C`EV_READ\*(C'\fR to \f(CW\*(C`POLLIN | POLLERR | POLL
|
|
|
|
|
Use the linux-specific \fIepoll\fR\|(7) interface (for both pre\- and post\-2.6.9 |
|
|
|
|
kernels). |
|
|
|
|
.Sp |
|
|
|
|
For few fds, this backend is a bit little slower than poll and select, |
|
|
|
|
but it scales phenomenally better. While poll and select usually scale |
|
|
|
|
like O(total_fds) where n is the total number of fds (or the highest fd), |
|
|
|
|
epoll scales either O(1) or O(active_fds). |
|
|
|
|
For few fds, this backend is a bit slower than poll and select, but
|
|
|
|
it scales phenomenally better. While poll and select usually scale like |
|
|
|
|
O(total_fds) where total_fds is the total number of fds (or the highest |
|
|
|
|
fd), epoll scales either O(1) or O(active_fds). |
|
|
|
|
.Sp |
|
|
|
|
The epoll mechanism deserves honorable mention as the most misdesigned |
|
|
|
|
of the more advanced event mechanisms: mere annoyances include silently |
|
|
|
@ -619,19 +625,22 @@ forks then \fIboth\fR parent and child process have to recreate the epoll
|
|
|
|
|
set, which can take considerable time (one syscall per file descriptor) |
|
|
|
|
and is of course hard to detect. |
|
|
|
|
.Sp |
|
|
|
|
Epoll is also notoriously buggy \- embedding epoll fds \fIshould\fR work, but |
|
|
|
|
of course \fIdoesn't\fR, and epoll just loves to report events for totally |
|
|
|
|
\&\fIdifferent\fR file descriptors (even already closed ones, so one cannot |
|
|
|
|
even remove them from the set) than registered in the set (especially |
|
|
|
|
on \s-1SMP\s0 systems). Libev tries to counter these spurious notifications by |
|
|
|
|
employing an additional generation counter and comparing that against the |
|
|
|
|
events to filter out spurious ones, recreating the set when required. Last |
|
|
|
|
Epoll is also notoriously buggy \- embedding epoll fds \fIshould\fR work, |
|
|
|
|
but of course \fIdoesn't\fR, and epoll just loves to report events for |
|
|
|
|
totally \fIdifferent\fR file descriptors (even already closed ones, so |
|
|
|
|
one cannot even remove them from the set) than registered in the set |
|
|
|
|
(especially on \s-1SMP\s0 systems). Libev tries to counter these spurious |
|
|
|
|
notifications by employing an additional generation counter and comparing |
|
|
|
|
that against the events to filter out spurious ones, recreating the set |
|
|
|
|
when required. Epoll also erroneously rounds down timeouts, but gives you |
|
|
|
|
no way to know when and by how much, so sometimes you have to busy-wait |
|
|
|
|
because epoll returns immediately despite a nonzero timeout. And last |
|
|
|
|
but not least, it also refuses to work with some file descriptors which work
|
|
|
|
perfectly fine with \f(CW\*(C`select\*(C'\fR (files, many character devices...). |
|
|
|
|
.Sp |
|
|
|
|
Epoll is truly the train wreck analog among event poll mechanisms, |
|
|
|
|
a frankenpoll, cobbled together in a hurry, no thought to design or |
|
|
|
|
interaction with others. |
|
|
|
|
Epoll is truly the train wreck among event poll mechanisms, a frankenpoll, |
|
|
|
|
cobbled together in a hurry, no thought to design or interaction with |
|
|
|
|
others. Oh, the pain, will it ever stop... |
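.Sp
One possible way (a minimal sketch, not a recommendation) to request this
backend explicitly and fall back to the automatic choice when it is
unavailable:
.Sp
.Vb 4
\& // try epoll first, otherwise let libev pick a backend itself
\& struct ev_loop *loop = ev_loop_new (EVBACKEND_EPOLL);
\& if (!loop)
\&   loop = ev_loop_new (EVFLAG_AUTO);
.Ve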
|
|
|
|
.Sp |
|
|
|
|
While stopping, setting and starting an I/O watcher in the same iteration |
|
|
|
|
will result in some caching, there is still a system call per such |
|
|
|
@ -719,11 +728,11 @@ hacks).
|
|
|
|
|
.Sp |
|
|
|
|
On the negative side, the interface is \fIbizarre\fR \- so bizarre that |
|
|
|
|
even sun itself gets it wrong in their code examples: The event polling |
|
|
|
|
function sometimes returning events to the caller even though an error |
|
|
|
|
function sometimes returns events to the caller even though an error |
|
|
|
|
occurred, but with no indication whether it has done so or not (yes, it's |
|
|
|
|
even documented that way) \- deadly for edge-triggered interfaces where |
|
|
|
|
you absolutely have to know whether an event occurred or not because you |
|
|
|
|
have to re-arm the watcher. |
|
|
|
|
even documented that way) \- deadly for edge-triggered interfaces where you |
|
|
|
|
absolutely have to know whether an event occurred or not because you have |
|
|
|
|
to re-arm the watcher. |
|
|
|
|
.Sp |
|
|
|
|
Fortunately libev seems to be able to work around these idiocies. |
|
|
|
|
.Sp |
|
|
|
@ -944,7 +953,9 @@ with something not expressible using other libev watchers (i.e. "roll your
|
|
|
|
|
own \f(CW\*(C`ev_run\*(C'\fR"). However, a pair of \f(CW\*(C`ev_prepare\*(C'\fR/\f(CW\*(C`ev_check\*(C'\fR watchers is |
|
|
|
|
usually a better approach for this kind of thing. |
|
|
|
|
.Sp |
|
|
|
|
Here are the gory details of what \f(CW\*(C`ev_run\*(C'\fR does: |
|
|
|
|
Here are the gory details of what \f(CW\*(C`ev_run\*(C'\fR does (this is for your |
|
|
|
|
understanding, not a guarantee that things will work exactly like this in |
|
|
|
|
future versions): |
|
|
|
|
.Sp |
|
|
|
|
.Vb 10 |
|
|
|
|
\& \- Increment loop depth. |
|
|
|
@ -1069,10 +1080,11 @@ overhead for the actual polling but can deliver many events at once.
|
|
|
|
|
By setting a higher \fIio collect interval\fR you allow libev to spend more |
|
|
|
|
time collecting I/O events, so you can handle more events per iteration, |
|
|
|
|
at the cost of increasing latency. Timeouts (both \f(CW\*(C`ev_periodic\*(C'\fR and |
|
|
|
|
\&\f(CW\*(C`ev_timer\*(C'\fR) will be not affected. Setting this to a non-null value will |
|
|
|
|
\&\f(CW\*(C`ev_timer\*(C'\fR) will not be affected. Setting this to a non-null value will |
|
|
|
|
introduce an additional \f(CW\*(C`ev_sleep ()\*(C'\fR call into most loop iterations. The |
|
|
|
|
sleep time ensures that libev will not poll for I/O events more often than
|
|
|
|
once per this interval, on average. |
|
|
|
|
once per this interval, on average (as long as the host time resolution is |
|
|
|
|
good enough). |
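.Sp
As a rough sketch (the value is only an example and should be tuned to your
workload), a server willing to trade up to 10ms of extra latency for larger
event batches might do:
.Sp
.Vb 2
\& // poll for I/O at most about once every 10ms, on average
\& ev_set_io_collect_interval (loop, 0.01);
.Ve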
|
|
|
|
.Sp |
|
|
|
|
Likewise, by setting a higher \fItimeout collect interval\fR you allow libev |
|
|
|
|
to spend more time collecting timeouts, at the expense of increased |
|
|
|
@ -1134,7 +1146,7 @@ each call to a libev function.
|
|
|
|
|
.Sp |
|
|
|
|
However, \f(CW\*(C`ev_run\*(C'\fR can run an indefinite time, so it is not feasible |
|
|
|
|
to wait for it to return. One way around this is to wake up the event |
|
|
|
|
loop via \f(CW\*(C`ev_break\*(C'\fR and \f(CW\*(C`av_async_send\*(C'\fR, another way is to set these |
|
|
|
|
loop via \f(CW\*(C`ev_break\*(C'\fR and \f(CW\*(C`ev_async_send\*(C'\fR, another way is to set these |
|
|
|
|
\&\fIrelease\fR and \fIacquire\fR callbacks on the loop. |
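.Sp
A minimal sketch of such callbacks, assuming the loop is protected by a
single pthreads mutex (a real program would more likely keep the mutex in
the loop userdata):
.Sp
.Vb 7
\& static pthread_mutex_t loop_lock = PTHREAD_MUTEX_INITIALIZER;
\&
\& static void release_cb (EV_P) { pthread_mutex_unlock (&loop_lock); }
\& static void acquire_cb (EV_P) { pthread_mutex_lock   (&loop_lock); }
\&
\& // let other threads take loop_lock while ev_run waits in the kernel
\& ev_set_loop_release_cb (loop, release_cb, acquire_cb);
.Ve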
|
|
|
|
.Sp |
|
|
|
|
When set, then \f(CW\*(C`release\*(C'\fR will be called just before the thread is |
|
|
|
@ -1491,7 +1503,7 @@ transition between them will be described in more detail \- and while these
|
|
|
|
|
rules might look complicated, they usually do \*(L"the right thing\*(R". |
|
|
|
|
.IP "initialiased" 4 |
|
|
|
|
.IX Item "initialiased" |
|
|
|
|
Before a watcher can be registered with the event looop it has to be |
|
|
|
|
Before a watcher can be registered with the event loop it has to be |
|
|
|
|
initialised. This can be done with a call to \f(CW\*(C`ev_TYPE_init\*(C'\fR, or calls to |
|
|
|
|
\&\f(CW\*(C`ev_init\*(C'\fR followed by the watcher-specific \f(CW\*(C`ev_TYPE_set\*(C'\fR function. |
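.Sp
For example (a sketch, with \f(CW\*(C`my_cb\*(C'\fR and \f(CW\*(C`fd\*(C'\fR standing for your own
callback and file descriptor), both of the following initialise an \f(CW\*(C`ev_io\*(C'\fR
watcher for reading:
.Sp
.Vb 5
\& ev_io w;
\& ev_io_init (&w, my_cb, fd, EV_READ);  // one-step form
\&
\& ev_init (&w, my_cb);                  // two-step form
\& ev_io_set (&w, fd, EV_READ);
.Ve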
|
|
|
|
.Sp |
|
|
|
@ -1873,10 +1885,11 @@ monotonic clock option helps a lot here).
|
|
|
|
|
.PP |
|
|
|
|
The callback is guaranteed to be invoked only \fIafter\fR its timeout has |
|
|
|
|
passed (not \fIat\fR, so on systems with very low-resolution clocks this |
|
|
|
|
might introduce a small delay). If multiple timers become ready during the |
|
|
|
|
same loop iteration then the ones with earlier time-out values are invoked |
|
|
|
|
before ones of the same priority with later time-out values (but this is |
|
|
|
|
no longer true when a callback calls \f(CW\*(C`ev_run\*(C'\fR recursively). |
|
|
|
|
might introduce a small delay, see \*(L"the special problem of being too |
|
|
|
|
early\*(R", below). If multiple timers become ready during the same loop |
|
|
|
|
iteration then the ones with earlier time-out values are invoked before |
|
|
|
|
ones of the same priority with later time-out values (but this is no |
|
|
|
|
longer true when a callback calls \f(CW\*(C`ev_run\*(C'\fR recursively). |
|
|
|
|
.PP |
|
|
|
|
\fIBe smart about timeouts\fR |
|
|
|
|
.IX Subsection "Be smart about timeouts" |
|
|
|
@ -1968,68 +1981,84 @@ In this case, it would be more efficient to leave the \f(CW\*(C`ev_timer\*(C'\fR
|
|
|
|
|
but remember the time of last activity, and check for a real timeout only |
|
|
|
|
within the callback: |
|
|
|
|
.Sp |
|
|
|
|
.Vb 1 |
|
|
|
|
.Vb 3 |
|
|
|
|
\& ev_tstamp timeout = 60.; |
|
|
|
|
\& ev_tstamp last_activity; // time of last activity |
|
|
|
|
\& ev_timer timer; |
|
|
|
|
\& |
|
|
|
|
\& static void |
|
|
|
|
\& callback (EV_P_ ev_timer *w, int revents) |
|
|
|
|
\& { |
|
|
|
|
\& ev_tstamp now = ev_now (EV_A); |
|
|
|
|
\& ev_tstamp timeout = last_activity + 60.; |
|
|
|
|
\& // calculate when the timeout would happen |
|
|
|
|
\& ev_tstamp after = last_activity \- ev_now (EV_A) + timeout; |
|
|
|
|
\& |
|
|
|
|
\& // if last_activity + 60. is older than now, we did time out |
|
|
|
|
\& if (timeout < now) |
|
|
|
|
\& // if negative, it means the timeout already occurred
|
|
|
|
\& if (after < 0.) |
|
|
|
|
\& { |
|
|
|
|
\& // timeout occurred, take action |
|
|
|
|
\& } |
|
|
|
|
\& else |
|
|
|
|
\& { |
|
|
|
|
\& // callback was invoked, but there was some activity, re\-arm |
|
|
|
|
\& // the watcher to fire in last_activity + 60, which is |
|
|
|
|
\& // guaranteed to be in the future, so "again" is positive: |
|
|
|
|
\& w\->repeat = timeout \- now; |
|
|
|
|
\& ev_timer_again (EV_A_ w); |
|
|
|
|
\& // callback was invoked, but there was some recent |
|
|
|
|
\& // activity. simply restart the timer to time out |
|
|
|
|
\& // after "after" seconds, which is the earliest time |
|
|
|
|
\& // the timeout can occur. |
|
|
|
|
\& ev_timer_set (w, after, 0.); |
|
|
|
|
\& ev_timer_start (EV_A_ w); |
|
|
|
|
\& } |
|
|
|
|
\& } |
|
|
|
|
.Ve |
|
|
|
|
.Sp |
|
|
|
|
To summarise the callback: first calculate the real timeout (defined |
|
|
|
|
as \*(L"60 seconds after the last activity\*(R"), then check if that time has |
|
|
|
|
been reached, which means something \fIdid\fR, in fact, time out. Otherwise |
|
|
|
|
the callback was invoked too early (\f(CW\*(C`timeout\*(C'\fR is in the future), so |
|
|
|
|
re-schedule the timer to fire at that future time, to see if maybe we have |
|
|
|
|
a timeout then. |
|
|
|
|
To summarise the callback: first calculate in how many seconds the |
|
|
|
|
timeout will occur (by calculating the absolute time when it would occur, |
|
|
|
|
\&\f(CW\*(C`last_activity + timeout\*(C'\fR, and subtracting the current time, \f(CW\*(C`ev_now |
|
|
|
|
(EV_A)\*(C'\fR from that). |
|
|
|
|
.Sp |
|
|
|
|
Note how \f(CW\*(C`ev_timer_again\*(C'\fR is used, taking advantage of the |
|
|
|
|
\&\f(CW\*(C`ev_timer_again\*(C'\fR optimisation when the timer is already running. |
|
|
|
|
If this value is negative, then we are already past the timeout, i.e. we |
|
|
|
|
timed out, and need to do whatever is needed in this case. |
|
|
|
|
.Sp |
|
|
|
|
Otherwise, we now know the earliest time at which the timeout would trigger,
|
|
|
|
and simply start the timer with this timeout value. |
|
|
|
|
.Sp |
|
|
|
|
In other words, each time the callback is invoked it will check whether |
|
|
|
|
the timeout occurred. If not, it will simply reschedule itself to check
|
|
|
|
again at the earliest time it could time out. Rinse. Repeat. |
|
|
|
|
.Sp |
|
|
|
|
This scheme causes more callback invocations (about one every 60 seconds |
|
|
|
|
minus half the average time between activity), but virtually no calls to |
|
|
|
|
libev to change the timeout. |
|
|
|
|
.Sp |
|
|
|
|
To start the timer, simply initialise the watcher and set \f(CW\*(C`last_activity\*(C'\fR |
|
|
|
|
to the current time (meaning we just have some activity :), then call the |
|
|
|
|
callback, which will \*(L"do the right thing\*(R" and start the timer: |
|
|
|
|
To start the machinery, simply initialise the watcher and set |
|
|
|
|
\&\f(CW\*(C`last_activity\*(C'\fR to the current time (meaning there was some activity just |
|
|
|
|
now), then call the callback, which will \*(L"do the right thing\*(R" and start |
|
|
|
|
the timer: |
|
|
|
|
.Sp |
|
|
|
|
.Vb 3 |
|
|
|
|
\& ev_init (timer, callback); |
|
|
|
|
\& last_activity = ev_now (loop); |
|
|
|
|
\& callback (loop, timer, EV_TIMER); |
|
|
|
|
\& last_activity = ev_now (EV_A); |
|
|
|
|
\& ev_init (&timer, callback); |
|
|
|
|
\& callback (EV_A_ &timer, 0); |
|
|
|
|
.Ve |
|
|
|
|
.Sp |
|
|
|
|
And when there is some activity, simply store the current time in |
|
|
|
|
When there is some activity, simply store the current time in |
|
|
|
|
\&\f(CW\*(C`last_activity\*(C'\fR, no libev calls at all: |
|
|
|
|
.Sp |
|
|
|
|
.Vb 1 |
|
|
|
|
\& last_activity = ev_now (loop); |
|
|
|
|
.Vb 2 |
|
|
|
|
\& if (activity detected) |
|
|
|
|
\& last_activity = ev_now (EV_A); |
|
|
|
|
.Ve |
|
|
|
|
.Sp |
|
|
|
|
When your timeout value changes, then the timeout can be changed by simply |
|
|
|
|
providing a new value, stopping the timer and calling the callback, which |
|
|
|
|
will again do the right thing (for example, time out immediately :).
|
|
|
|
.Sp |
|
|
|
|
.Vb 3 |
|
|
|
|
\& timeout = new_value; |
|
|
|
|
\& ev_timer_stop (EV_A_ &timer); |
|
|
|
|
\& callback (EV_A_ &timer, 0); |
|
|
|
|
.Ve |
|
|
|
|
.Sp |
|
|
|
|
This technique is slightly more complex, but in most cases where the |
|
|
|
|
time-out is unlikely to be triggered, much more efficient. |
|
|
|
|
.Sp |
|
|
|
|
Changing the timeout is trivial as well (if it isn't hard-coded in the |
|
|
|
|
callback :) \- just change the timeout and invoke the callback, which will |
|
|
|
|
fix things for you. |
|
|
|
|
.IP "4. Wee, just use a double-linked list for your timeouts." 4 |
|
|
|
|
.IX Item "4. Wee, just use a double-linked list for your timeouts." |
|
|
|
|
If there is not one request, but many thousands (millions...), all |
|
|
|
@ -2063,11 +2092,49 @@ rather complicated, but extremely efficient, something that really pays
|
|
|
|
|
off after the first million or so of active timers, i.e. it's usually |
|
|
|
|
overkill :) |
|
|
|
|
.PP |
|
|
|
|
\fIThe special problem of being too early\fR |
|
|
|
|
.IX Subsection "The special problem of being too early" |
|
|
|
|
.PP |
|
|
|
|
If you ask a timer to call your callback after three seconds, then |
|
|
|
|
you expect it to be invoked after three seconds \- but of course, this |
|
|
|
|
cannot be guaranteed to infinite precision. Less obviously, it cannot be |
|
|
|
|
guaranteed to any precision by libev \- imagine somebody suspending the |
|
|
|
|
process with a \s-1STOP\s0 signal for a few hours for example. |
|
|
|
|
.PP |
|
|
|
|
So, libev tries to invoke your callback as soon as possible \fIafter\fR the |
|
|
|
|
delay has occurred, but cannot guarantee this. |
|
|
|
|
.PP |
|
|
|
|
A less obvious failure mode is calling your callback too early: many event |
|
|
|
|
loops compare timestamps with an \*(L"elapsed delay >= requested delay\*(R", but
|
|
|
|
this can cause your callback to be invoked much earlier than you would |
|
|
|
|
expect. |
|
|
|
|
.PP |
|
|
|
|
To see why, imagine a system with a clock that only offers full second |
|
|
|
|
resolution (think windows if you can't come up with a broken enough \s-1OS\s0 |
|
|
|
|
yourself). If you schedule a one-second timer at the time 500.9, then the |
|
|
|
|
event loop will schedule your timeout to elapse at a system time of 500 |
|
|
|
|
(500.9 truncated to the resolution) + 1, or 501. |
|
|
|
|
.PP |
|
|
|
|
If an event library looks at the timeout 0.1s later, it will see \*(L"501 >= |
|
|
|
|
501\*(R" and invoke the callback 0.1s after it was started, even though a |
|
|
|
|
one-second delay was requested \- this is being \*(L"too early\*(R", despite best |
|
|
|
|
intentions. |
|
|
|
|
.PP |
|
|
|
|
This is the reason why libev will never invoke the callback if the elapsed |
|
|
|
|
delay equals the requested delay, but only when the elapsed delay is |
|
|
|
|
larger than the requested delay. In the example above, libev would only invoke |
|
|
|
|
the callback at system time 502, or 1.1s after the timer was started. |
|
|
|
|
.PP |
|
|
|
|
So, while libev cannot guarantee that your callback will be invoked |
|
|
|
|
exactly when requested, it \fIcan\fR and \fIdoes\fR guarantee that the requested |
|
|
|
|
delay has actually elapsed, or in other words, it always errs on the \*(L"too |
|
|
|
|
late\*(R" side of things. |
|
|
|
|
.PP |
|
|
|
|
\fIThe special problem of time updates\fR |
|
|
|
|
.IX Subsection "The special problem of time updates" |
|
|
|
|
.PP |
|
|
|
|
Establishing the current time is a costly operation (it usually takes at |
|
|
|
|
least two system calls): \s-1EV\s0 therefore updates its idea of the current |
|
|
|
|
Establishing the current time is a costly operation (it usually takes |
|
|
|
|
at least one system call): \s-1EV\s0 therefore updates its idea of the current |
|
|
|
|
time only before and after \f(CW\*(C`ev_run\*(C'\fR collects new events, which causes a |
|
|
|
|
growing difference between \f(CW\*(C`ev_now ()\*(C'\fR and \f(CW\*(C`ev_time ()\*(C'\fR when handling |
|
|
|
|
lots of events in one iteration. |
|
|
|
@ -2086,6 +2153,40 @@ If the event loop is suspended for a long time, you can also force an
|
|
|
|
|
update of the time returned by \f(CW\*(C`ev_now ()\*(C'\fR by calling \f(CW\*(C`ev_now_update |
|
|
|
|
()\*(C'\fR. |
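.PP
As a brief sketch (the \f(CW\*(C`keepalive_timer\*(C'\fR is only an example of a watcher
you might want to touch afterwards), after resuming from a long suspension
one would first refresh the time and only then re-arm timers:
.PP
.Vb 2
\& ev_now_update (loop);             // resync ev_now () with the real time
\& ev_timer_again (loop, &keepalive_timer);
.Ve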
|
|
|
|
.PP |
|
|
|
|
\fIThe special problem of unsynchronised clocks\fR |
|
|
|
|
.IX Subsection "The special problem of unsynchronised clocks" |
|
|
|
|
.PP |
|
|
|
|
Modern systems have a variety of clocks \- libev itself uses the normal |
|
|
|
|
\&\*(L"wall clock\*(R" clock and, if available, the monotonic clock (to avoid time |
|
|
|
|
jumps). |
|
|
|
|
.PP |
|
|
|
|
Neither of these clocks is synchronised with each other or any other clock |
|
|
|
|
on the system, so \f(CW\*(C`ev_time ()\*(C'\fR might return a considerably different time |
|
|
|
|
than \f(CW\*(C`gettimeofday ()\*(C'\fR or \f(CW\*(C`time ()\*(C'\fR. On a GNU/Linux system, for example, |
|
|
|
|
a call to \f(CW\*(C`gettimeofday\*(C'\fR might return a second count that is one higher |
|
|
|
|
than a directly following call to \f(CW\*(C`time\*(C'\fR. |
|
|
|
|
.PP |
|
|
|
|
The moral of this is to only compare libev-related timestamps with |
|
|
|
|
\&\f(CW\*(C`ev_time ()\*(C'\fR and \f(CW\*(C`ev_now ()\*(C'\fR, at least if you want better precision than |
|
|
|
|
a second or so. |
|
|
|
|
.PP |
|
|
|
|
One more problem arises due to this lack of synchronisation: if libev uses |
|
|
|
|
the system monotonic clock and you compare timestamps from \f(CW\*(C`ev_time\*(C'\fR |
|
|
|
|
or \f(CW\*(C`ev_now\*(C'\fR from when you started your timer and when your callback is |
|
|
|
|
invoked, you will find that sometimes the callback is a bit \*(L"early\*(R". |
|
|
|
|
.PP |
|
|
|
|
This is because \f(CW\*(C`ev_timer\*(C'\fRs work in real time, not wall clock time, so |
|
|
|
|
libev makes sure your callback is not invoked before the delay happened, |
|
|
|
|
\&\fImeasured according to the real time\fR, not the system clock. |
|
|
|
|
.PP |
|
|
|
|
If your timeouts are based on a physical timescale (e.g. \*(L"time out this |
|
|
|
|
connection after 100 seconds\*(R") then this shouldn't bother you as it is |
|
|
|
|
exactly the right behaviour. |
|
|
|
|
.PP |
|
|
|
|
If you want to compare wall clock/system timestamps to your timers, then |
|
|
|
|
you need to use \f(CW\*(C`ev_periodic\*(C'\fRs, as these are based on the wall clock |
|
|
|
|
time, where your comparisons will always generate correct results. |
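.PP
A small sketch of the difference (names and the \f(CW\*(C`deadline\*(C'\fR value are made
up): to trigger at an absolute wall clock time, start an \f(CW\*(C`ev_periodic\*(C'\fR
with that timestamp directly instead of converting it into a relative
\&\f(CW\*(C`ev_timer\*(C'\fR delay:
.PP
.Vb 3
\& // fire once at the absolute (wall clock) time stored in "deadline"
\& ev_periodic_init (&w, deadline_cb, deadline, 0., 0);
\& ev_periodic_start (loop, &w);
.Ve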
|
|
|
|
.PP |
|
|
|
|
\fIThe special problems of suspended animation\fR |
|
|
|
|
.IX Subsection "The special problems of suspended animation" |
|
|
|
|
.PP |
|
|
|
@ -2138,18 +2239,28 @@ keep up with the timer (because it takes longer than those 10 seconds to
|
|
|
|
|
do stuff) the timer will not fire more than once per event loop iteration. |
|
|
|
|
.IP "ev_timer_again (loop, ev_timer *)" 4 |
|
|
|
|
.IX Item "ev_timer_again (loop, ev_timer *)" |
|
|
|
|
This will act as if the timer timed out and restart it again if it is |
|
|
|
|
repeating. The exact semantics are: |
|
|
|
|
.Sp |
|
|
|
|
If the timer is pending, its pending status is cleared. |
|
|
|
|
.Sp |
|
|
|
|
If the timer is started but non-repeating, stop it (as if it timed out). |
|
|
|
|
This will act as if the timer timed out, and restarts it again if it is |
|
|
|
|
repeating. It basically works like calling \f(CW\*(C`ev_timer_stop\*(C'\fR, updating the |
|
|
|
|
timeout to the \f(CW\*(C`repeat\*(C'\fR value and calling \f(CW\*(C`ev_timer_start\*(C'\fR. |
|
|
|
|
.Sp |
|
|
|
|
If the timer is repeating, either start it if necessary (with the |
|
|
|
|
\&\f(CW\*(C`repeat\*(C'\fR value), or reset the running timer to the \f(CW\*(C`repeat\*(C'\fR value. |
|
|
|
|
The exact semantics are as in the following rules, all of which will be |
|
|
|
|
applied to the watcher: |
|
|
|
|
.RS 4 |
|
|
|
|
.IP "If the timer is pending, the pending status is always cleared." 4 |
|
|
|
|
.IX Item "If the timer is pending, the pending status is always cleared." |
|
|
|
|
.PD 0 |
|
|
|
|
.IP "If the timer is started but non-repeating, stop it (as if it timed out, without invoking it)." 4 |
|
|
|
|
.IX Item "If the timer is started but non-repeating, stop it (as if it timed out, without invoking it)." |
|
|
|
|
.ie n .IP "If the timer is repeating, make the ""repeat"" value the new timeout and start the timer, if necessary." 4 |
|
|
|
|
.el .IP "If the timer is repeating, make the \f(CWrepeat\fR value the new timeout and start the timer, if necessary." 4 |
|
|
|
|
.IX Item "If the timer is repeating, make the repeat value the new timeout and start the timer, if necessary." |
|
|
|
|
.RE |
|
|
|
|
.RS 4 |
|
|
|
|
.PD |
|
|
|
|
.Sp |
|
|
|
|
This sounds a bit complicated, see \*(L"Be smart about timeouts\*(R", above, for a |
|
|
|
|
usage example. |
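.Sp
In short, the idle-timeout pattern described there boils down to this
condensed sketch (\f(CW\*(C`idle_cb\*(C'\fR being your own callback):
.Sp
.Vb 5
\& ev_timer_init (&timer, idle_cb, 0., 60.);  // "repeat" = 60s of inactivity
\& ev_timer_again (loop, &timer);             // start (or restart) it
\&
\& // on any activity, simply push the timeout back again:
\& ev_timer_again (loop, &timer);
.Ve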
|
|
|
|
.RE |
|
|
|
|
.IP "ev_tstamp ev_timer_remaining (loop, ev_timer *)" 4 |
|
|
|
|
.IX Item "ev_tstamp ev_timer_remaining (loop, ev_timer *)" |
|
|
|
|
Returns the remaining time until a timer fires. If the timer is active, |
|
|
|
@ -2279,9 +2390,12 @@ Another way to think about it (for the mathematically inclined) is that
|
|
|
|
|
\&\f(CW\*(C`ev_periodic\*(C'\fR will try to run the callback in this mode at the next possible |
|
|
|
|
time where \f(CW\*(C`time = offset (mod interval)\*(C'\fR, regardless of any time jumps. |
|
|
|
|
.Sp |
|
|
|
|
For numerical stability it is preferable that the \f(CW\*(C`offset\*(C'\fR value is near |
|
|
|
|
\&\f(CW\*(C`ev_now ()\*(C'\fR (the current time), but there is no range requirement for |
|
|
|
|
this value, and in fact is often specified as zero. |
|
|
|
|
The \f(CW\*(C`interval\*(C'\fR \fI\s-1MUST\s0\fR be positive, and for numerical stability, the |
|
|
|
|
interval value should be higher than \f(CW\*(C`1/8192\*(C'\fR (which is around 100 |
|
|
|
|
microseconds) and \f(CW\*(C`offset\*(C'\fR should be higher than \f(CW0\fR and should have |
|
|
|
|
at most a similar magnitude as the current time (say, within a factor of |
|
|
|
|
ten). Typical values for offset are, in fact, \f(CW0\fR or something between |
|
|
|
|
\&\f(CW0\fR and \f(CW\*(C`interval\*(C'\fR, which is also the recommended range. |
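.Sp
For example (a minimal sketch, names invented), a periodic that triggers
whenever the system time is divisible by 3600, i.e. at every full hour of
wall clock time, regardless of how long the process has been running:
.Sp
.Vb 2
\& ev_periodic_init (&hourly, hourly_cb, 0., 3600., 0);
\& ev_periodic_start (loop, &hourly);
.Ve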
|
|
|
|
.Sp |
|
|
|
|
Note also that there is an upper limit to how often a timer can fire (\s-1CPU\s0 |
|
|
|
|
speed for example), so if \f(CW\*(C`interval\*(C'\fR is very small then timing stability |
|
|
|
@ -3333,9 +3447,6 @@ of \*(L"global async watchers\*(R" by using a watcher on an otherwise unused
|
|
|
|
|
signal, and \f(CW\*(C`ev_feed_signal\*(C'\fR to signal this watcher from another thread, |
|
|
|
|
even without knowing which loop owns the signal. |
|
|
|
|
.PP |
|
|
|
|
Unlike \f(CW\*(C`ev_signal\*(C'\fR watchers, \f(CW\*(C`ev_async\*(C'\fR works with any event loop, not |
|
|
|
|
just the default loop. |
|
|
|
|
.PP |
|
|
|
|
\fIQueueing\fR |
|
|
|
|
.IX Subsection "Queueing" |
|
|
|
|
.PP |
|
|
|
@ -3439,13 +3550,16 @@ signal or similar contexts (see the discussion of \f(CW\*(C`EV_ATOMIC_T\*(C'\fR
|
|
|
|
|
embedding section below on what exactly this means). |
|
|
|
|
.Sp |
|
|
|
|
Note that, as with other watchers in libev, multiple events might get |
|
|
|
|
compressed into a single callback invocation (another way to look at this |
|
|
|
|
is that \f(CW\*(C`ev_async\*(C'\fR watchers are level-triggered, set on \f(CW\*(C`ev_async_send\*(C'\fR, |
|
|
|
|
reset when the event loop detects that). |
|
|
|
|
.Sp |
|
|
|
|
This call incurs the overhead of a system call only once per event loop |
|
|
|
|
iteration, so while the overhead might be noticeable, it doesn't apply to |
|
|
|
|
repeated calls to \f(CW\*(C`ev_async_send\*(C'\fR for the same event loop. |
|
|
|
|
compressed into a single callback invocation (another way to look at |
|
|
|
|
this is that \f(CW\*(C`ev_async\*(C'\fR watchers are level-triggered: they are set on |
|
|
|
|
\&\f(CW\*(C`ev_async_send\*(C'\fR, reset when the event loop detects that). |
|
|
|
|
.Sp |
|
|
|
|
This call incurs the overhead of at most one extra system call per event |
|
|
|
|
loop iteration, if the event loop is blocked, and no syscall at all if |
|
|
|
|
the event loop (or your program) is processing events. That means that |
|
|
|
|
repeated calls are basically free (there is no need to avoid calls for |
|
|
|
|
performance reasons) and that the overhead becomes smaller (typically |
|
|
|
|
zero) under load. |
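.Sp
A minimal sketch of the typical use, waking the loop thread from another
thread (\f(CW\*(C`wakeup_cb\*(C'\fR being your own callback, error handling omitted):
.Sp
.Vb 8
\& static ev_async wakeup;
\&
\& // in the loop thread, once:
\& ev_async_init (&wakeup, wakeup_cb);
\& ev_async_start (loop, &wakeup);
\&
\& // in any other thread, as often as you like:
\& ev_async_send (loop, &wakeup);
.Ve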
|
|
|
|
.IP "bool = ev_async_pending (ev_async *)" 4 |
|
|
|
|
.IX Item "bool = ev_async_pending (ev_async *)" |
|
|
|
|
Returns a non-zero value when \f(CW\*(C`ev_async_send\*(C'\fR has been called on the |
|
|
|
@ -3503,7 +3617,7 @@ Example: wait up to ten seconds for data to appear on \s-1STDIN_FILENO\s0.
|
|
|
|
|
.IP "ev_feed_fd_event (loop, int fd, int revents)" 4 |
|
|
|
|
.IX Item "ev_feed_fd_event (loop, int fd, int revents)" |
|
|
|
|
Feed an event on the given fd, as if a file descriptor backend detected |
|
|
|
|
the given events it. |
|
|
|
|
the given events. |
|
|
|
|
.IP "ev_feed_signal_event (loop, int signum)" 4 |
|
|
|
|
.IX Item "ev_feed_signal_event (loop, int signum)" |
|
|
|
|
Feed an event as if the given signal occurred. See also \f(CW\*(C`ev_feed_signal\*(C'\fR, |
|
|
|
@ -3587,6 +3701,49 @@ real programmers):
|
|
|
|
|
\& (((char *)w) \- offsetof (struct my_biggy, t2)); |
|
|
|
|
\& } |
|
|
|
|
.Ve |
|
|
|
|
.SS "\s-1AVOIDING\s0 \s-1FINISHING\s0 \s-1BEFORE\s0 \s-1RETURNING\s0" |
|
|
|
|
.IX Subsection "AVOIDING FINISHING BEFORE RETURNING" |
|
|
|
|
Often you have structures like this in event-based programs: |
|
|
|
|
.PP |
|
|
|
|
.Vb 4 |
|
|
|
|
\& callback () |
|
|
|
|
\& { |
|
|
|
|
\& free (request); |
|
|
|
|
\& } |
|
|
|
|
\& |
|
|
|
|
\& request = start_new_request (..., callback); |
|
|
|
|
.Ve |
|
|
|
|
.PP |
|
|
|
|
The intent is to start some \*(L"lengthy\*(R" operation. The \f(CW\*(C`request\*(C'\fR could be |
|
|
|
|
used to cancel the operation, or do other things with it. |
|
|
|
|
.PP |
|
|
|
|
It's not uncommon to have code paths in \f(CW\*(C`start_new_request\*(C'\fR that |
|
|
|
|
immediately invoke the callback, for example, to report errors. Or you add |
|
|
|
|
some caching layer that finds that it can skip the lengthy aspects of the |
|
|
|
|
operation and simply invoke the callback with the result. |
|
|
|
|
.PP |
|
|
|
|
The problem here is that this will happen \fIbefore\fR \f(CW\*(C`start_new_request\*(C'\fR |
|
|
|
|
has returned, so \f(CW\*(C`request\*(C'\fR is not set. |
|
|
|
|
.PP |
|
|
|
|
Even if you pass the request by some safer means to the callback, you |
|
|
|
|
might want to do something to the request after starting it, such as |
|
|
|
|
canceling it, which probably won't work so well when the callback has
|
|
|
|
already been invoked. |
|
|
|
|
.PP |
|
|
|
|
A common way around all these issues is to make sure that |
|
|
|
|
\&\f(CW\*(C`start_new_request\*(C'\fR \fIalways\fR returns before the callback is invoked. If |
|
|
|
|
\&\f(CW\*(C`start_new_request\*(C'\fR immediately knows the result, it can artificially |
|
|
|
|
delay invoking the callback by using a \f(CW\*(C`prepare\*(C'\fR or \f(CW\*(C`idle\*(C'\fR watcher
|
|
|
|
for example, or more sneakily, by reusing an existing (stopped) watcher |
|
|
|
|
and pushing it into the pending queue: |
|
|
|
|
.PP |
|
|
|
|
.Vb 2 |
|
|
|
|
\& ev_set_cb (watcher, callback); |
|
|
|
|
\& ev_feed_event (EV_A_ watcher, 0); |
|
|
|
|
.Ve |
|
|
|
|
.PP |
|
|
|
|
This way, \f(CW\*(C`start_new_request\*(C'\fR can safely return before the callback is |
|
|
|
|
invoked, while not delaying callback invocation too much. |
|
|
|
|
.SS "\s-1MODEL/NESTED\s0 \s-1EVENT\s0 \s-1LOOP\s0 \s-1INVOCATIONS\s0 \s-1AND\s0 \s-1EXIT\s0 \s-1CONDITIONS\s0" |
|
|
|
|
.IX Subsection "MODEL/NESTED EVENT LOOP INVOCATIONS AND EXIT CONDITIONS" |
|
|
|
|
Often (especially in \s-1GUI\s0 toolkits) there are places where you have |
|
|
|
@ -3610,7 +3767,7 @@ triggered, using \f(CW\*(C`EVRUN_ONCE\*(C'\fR:
|
|
|
|
|
\& while (!exit_main_loop) |
|
|
|
|
\& ev_run (EV_DEFAULT_ EVRUN_ONCE); |
|
|
|
|
\& |
|
|
|
|
\& // in a model watcher |
|
|
|
|
\& // in a modal watcher |
|
|
|
|
\& int exit_nested_loop = 0; |
|
|
|
|
\& |
|
|
|
|
\& while (!exit_nested_loop) |
|
|
|
@ -3819,7 +3976,7 @@ called):
|
|
|
|
|
.PP |
|
|
|
|
That basically suspends the coroutine inside \f(CW\*(C`wait_for_event\*(C'\fR and |
|
|
|
|
continues the libev coroutine, which, when appropriate, switches back to |
|
|
|
|
this or any other coroutine. I am sure if you sue this your own :) |
|
|
|
|
this or any other coroutine. |
|
|
|
|
.PP |
|
|
|
|
You can do similar tricks if you have, say, threads with an event queue \- |
|
|
|
|
instead of storing a coroutine, you store the queue object and instead of |
|
|
|
@ -3917,7 +4074,7 @@ Aliases to the same types/functions as with the \f(CW\*(C`ev_\*(C'\fR prefix.
|
|
|
|
|
For each \f(CW\*(C`ev_TYPE\*(C'\fR watcher in \fIev.h\fR there is a corresponding class of |
|
|
|
|
the same name in the \f(CW\*(C`ev\*(C'\fR namespace, with the exception of \f(CW\*(C`ev_signal\*(C'\fR |
|
|
|
|
which is called \f(CW\*(C`ev::sig\*(C'\fR to avoid clashes with the \f(CW\*(C`signal\*(C'\fR macro |
|
|
|
|
defines by many implementations. |
|
|
|
|
defined by many implementations. |
|
|
|
|
.Sp |
|
|
|
|
All of those classes have these methods: |
|
|
|
|
.RS 4 |
|
|
|
@ -4058,7 +4215,7 @@ watchers in the constructor.
|
|
|
|
|
\& class myclass |
|
|
|
|
\& { |
|
|
|
|
\& ev::io io ; void io_cb (ev::io &w, int revents); |
|
|
|
|
\& ev::io2 io2 ; void io2_cb (ev::io &w, int revents); |
|
|
|
|
\& ev::io io2 ; void io2_cb (ev::io &w, int revents); |
|
|
|
|
\& ev::idle idle; void idle_cb (ev::idle &w, int revents); |
|
|
|
|
\& |
|
|
|
|
\& myclass (int fd) |
|
|
|
@ -4107,20 +4264,20 @@ makes rev work even on mingw.
|
|
|
|
|
.IP "Haskell" 4 |
|
|
|
|
.IX Item "Haskell" |
|
|
|
|
A haskell binding to libev is available at |
|
|
|
|
<http://hackage.haskell.org/cgi\-bin/hackage\-scripts/package/hlibev>. |
|
|
|
|
http://hackage.haskell.org/cgi\-bin/hackage\-scripts/package/hlibev <http://hackage.haskell.org/cgi-bin/hackage-scripts/package/hlibev>. |
|
|
|
|
.IP "D" 4 |
|
|
|
|
.IX Item "D" |
|
|
|
|
Leandro Lucarella has written a D language binding (\fIev.d\fR) for libev, to |
|
|
|
|
be found at <http://proj.llucax.com.ar/wiki/evd>. |
|
|
|
|
be found at <http://www.llucax.com.ar/proj/ev.d/index.html>. |
|
|
|
|
.IP "Ocaml" 4 |
|
|
|
|
.IX Item "Ocaml" |
|
|
|
|
Erkki Seppala has written Ocaml bindings for libev, to be found at |
|
|
|
|
<http://modeemi.cs.tut.fi/~flux/software/ocaml\-ev/>. |
|
|
|
|
http://modeemi.cs.tut.fi/~flux/software/ocaml\-ev/ <http://modeemi.cs.tut.fi/~flux/software/ocaml-ev/>. |
|
|
|
|
.IP "Lua" 4 |
|
|
|
|
.IX Item "Lua" |
|
|
|
|
Brian Maher has written a partial interface to libev for lua (at the |
|
|
|
|
time of this writing, only \f(CW\*(C`ev_io\*(C'\fR and \f(CW\*(C`ev_timer\*(C'\fR), to be found at |
|
|
|
|
<http://github.com/brimworks/lua\-ev>. |
|
|
|
|
http://github.com/brimworks/lua\-ev <http://github.com/brimworks/lua-ev>. |
|
|
|
|
.SH "MACRO MAGIC" |
|
|
|
|
.IX Header "MACRO MAGIC" |
|
|
|
|
Libev can be compiled with a variety of options, the most fundamental |
|
|
|
@ -4165,7 +4322,11 @@ suitable for use with \f(CW\*(C`EV_A\*(C'\fR.
|
|
|
|
|
.el .IP "\f(CWEV_DEFAULT\fR, \f(CWEV_DEFAULT_\fR" 4 |
|
|
|
|
.IX Item "EV_DEFAULT, EV_DEFAULT_" |
|
|
|
|
Similar to the other two macros, this gives you the value of the default |
|
|
|
|
loop, if multiple loops are supported (\*(L"ev loop default\*(R"). |
|
|
|
|
loop, if multiple loops are supported (\*(L"ev loop default\*(R"). The default loop |
|
|
|
|
will be initialised if it isn't already initialised. |
|
|
|
|
.Sp |
|
|
|
|
For non-multiplicity builds, these macros do nothing, so you always have |
|
|
|
|
to initialise the loop somewhere. |
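.Sp
A brief sketch (\f(CW\*(C`stdin_watcher\*(C'\fR is assumed to be an already-initialised
watcher of yours):
.Sp
.Vb 2
\& ev_io_start (EV_DEFAULT_ &stdin_watcher); // uses (and initialises) the default loop
\& ev_run (EV_DEFAULT_ 0);
.Ve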
|
|
|
|
.ie n .IP """EV_DEFAULT_UC"", ""EV_DEFAULT_UC_""" 4 |
|
|
|
|
.el .IP "\f(CWEV_DEFAULT_UC\fR, \f(CWEV_DEFAULT_UC_\fR" 4 |
|
|
|
|
.IX Item "EV_DEFAULT_UC, EV_DEFAULT_UC_" |
|
|
|
@ -4330,6 +4491,14 @@ supported). It will also not define any of the structs usually found in
|
|
|
|
|
.Sp |
|
|
|
|
In standalone mode, libev will still try to automatically deduce the |
|
|
|
|
configuration, but has to be more conservative. |
|
|
|
|
.IP "\s-1EV_USE_FLOOR\s0" 4 |
|
|
|
|
.IX Item "EV_USE_FLOOR" |
|
|
|
|
If defined to be \f(CW1\fR, libev will use the \f(CW\*(C`floor ()\*(C'\fR function for its |
|
|
|
|
periodic reschedule calculations, otherwise libev will fall back on a |
|
|
|
|
portable (slower) implementation. If you enable this, you usually have to |
|
|
|
|
link against libm or something equivalent. Enabling this when the \f(CW\*(C`floor\*(C'\fR |
|
|
|
|
function is not available will fail, so the safe default is to not enable |
|
|
|
|
this. |
|
|
|
|
.IP "\s-1EV_USE_MONOTONIC\s0" 4 |
|
|
|
|
.IX Item "EV_USE_MONOTONIC" |
|
|
|
|
If defined to be \f(CW1\fR, libev will try to detect the availability of the |
|
|
|
@ -4451,16 +4620,30 @@ If defined to be \f(CW1\fR, libev will compile in support for the Linux inotify
|
|
|
|
|
interface to speed up \f(CW\*(C`ev_stat\*(C'\fR watchers. Its actual availability will |
|
|
|
|
be detected at runtime. If undefined, it will be enabled if the headers |
|
|
|
|
indicate GNU/Linux + Glibc 2.4 or newer, otherwise disabled. |
|
|
|
|
.IP "\s-1EV_NO_SMP\s0" 4 |
|
|
|
|
.IX Item "EV_NO_SMP" |
|
|
|
|
If defined to be \f(CW1\fR, libev will assume that memory is always coherent |
|
|
|
|
between threads, that is, threads can be used, but threads never run on |
|
|
|
|
different cpus (or different cpu cores). This reduces dependencies |
|
|
|
|
and makes libev faster. |
|
|
|
|
.IP "\s-1EV_NO_THREADS\s0" 4 |
|
|
|
|
.IX Item "EV_NO_THREADS" |
|
|
|
|
If defined to be \f(CW1\fR, libev will assume that it will never be called |
|
|
|
|
from different threads, which is a stronger assumption than \f(CW\*(C`EV_NO_SMP\*(C'\fR, |
|
|
|
|
above. This reduces dependencies and makes libev faster. |
|
|
|
|
.IP "\s-1EV_ATOMIC_T\s0" 4 |
|
|
|
|
.IX Item "EV_ATOMIC_T" |
|
|
|
|
Libev requires an integer type (suitable for storing \f(CW0\fR or \f(CW1\fR) whose |
|
|
|
|
access is atomic with respect to other threads or signal contexts. No such |
|
|
|
|
type is easily found in the C language, so you can provide your own type |
|
|
|
|
that you know is safe for your purposes. It is used both for signal handler \*(L"locking\*(R" |
|
|
|
|
as well as for signal and thread safety in \f(CW\*(C`ev_async\*(C'\fR watchers. |
|
|
|
|
access is atomic and serialised with respect to other threads or signal |
|
|
|
|
contexts. No such type is easily found in the C language, so you can |
|
|
|
|
provide your own type that you know is safe for your purposes. It is used |
|
|
|
|
both for signal handler \*(L"locking\*(R" as well as for signal and thread safety |
|
|
|
|
in \f(CW\*(C`ev_async\*(C'\fR watchers. |
|
|
|
|
.Sp |
|
|
|
|
In the absence of this define, libev will use \f(CW\*(C`sig_atomic_t volatile\*(C'\fR |
|
|
|
|
(from \fIsignal.h\fR), which is usually good enough on most platforms. |
|
|
|
|
(from \fIsignal.h\fR), which is usually good enough on most platforms, |
|
|
|
|
although strictly speaking using a type that also implies a memory fence |
|
|
|
|
is required. |
|
|
|
|
.IP "\s-1EV_H\s0 (h)" 4 |
|
|
|
|
.IX Item "EV_H (h)" |
|
|
|
|
The name of the \fIev.h\fR header file used to include it. The default if |
|
|
|
@ -4488,6 +4671,10 @@ will have the \f(CW\*(C`struct ev_loop *\*(C'\fR as first argument, and you can
|
|
|
|
|
additional independent event loops. Otherwise there will be no support |
|
|
|
|
for multiple event loops and there is no first event loop pointer |
|
|
|
|
argument. Instead, all functions act on the single default loop. |
|
|
|
|
.Sp |
|
|
|
|
Note that \f(CW\*(C`EV_DEFAULT\*(C'\fR and \f(CW\*(C`EV_DEFAULT_\*(C'\fR will no longer provide a |
|
|
|
|
default loop when multiplicity is switched off \- you always have to |
|
|
|
|
initialise the loop manually in this case. |
|
|
|
|
.IP "\s-1EV_MINPRI\s0" 4 |
|
|
|
|
.IX Item "EV_MINPRI" |
|
|
|
|
.PD 0 |
|
|
|
@ -4594,6 +4781,19 @@ when you use \f(CW\*(C`\-Wl,\-\-gc\-sections \-ffunction\-sections\*(C'\fR) func
|
|
|
|
|
your program might be left out as well \- a binary starting a timer and an |
|
|
|
|
I/O watcher then might come out at only 5Kb. |
|
|
|
|
.RE |
|
|
|
|
.IP "\s-1EV_API_STATIC\s0" 4 |
|
|
|
|
.IX Item "EV_API_STATIC" |
|
|
|
|
If this symbol is defined (by default it is not), then all identifiers |
|
|
|
|
will have static linkage. This means that libev will not export any |
|
|
|
|
identifiers, and you cannot link against libev anymore. This can be useful |
|
|
|
|
when you embed libev, only want to use libev functions in a single file, |
|
|
|
|
and do not want its identifiers to be visible. |
|
|
|
|
.Sp |
|
|
|
|
To use this, define \f(CW\*(C`EV_API_STATIC\*(C'\fR and include \fIev.c\fR in the file that |
|
|
|
|
wants to use libev. |
|
|
|
|
.Sp |
|
|
|
|
This option only works when libev is compiled with a C compiler, as \*(C+ |
|
|
|
|
doesn't support the required declaration syntax. |
|
|
|
|
.IP "\s-1EV_AVOID_STDIO\s0" 4 |
|
|
|
|
.IX Item "EV_AVOID_STDIO" |
|
|
|
|
If this is set to \f(CW1\fR at compiletime, then libev will avoid using stdio |
|
|
|
@ -4980,7 +5180,7 @@ model. Libev still offers limited functionality on this platform in
|
|
|
|
|
the form of the \f(CW\*(C`EVBACKEND_SELECT\*(C'\fR backend, and only supports socket |
|
|
|
|
descriptors. This only applies when using Win32 natively, not when using |
|
|
|
|
e.g. cygwin. Actually, it only applies to Microsoft's own compilers,
|
|
|
|
as every compielr comes with a slightly differently broken/incompatible |
|
|
|
|
as every compiler comes with a slightly differently broken/incompatible |
|
|
|
|
environment. |
|
|
|
|
.PP |
|
|
|
|
Lifting these limitations would basically require the full |
|
|
|
@ -5126,8 +5326,12 @@ The type \f(CW\*(C`double\*(C'\fR is used to represent timestamps. It is require
|
|
|
|
|
have at least 51 bits of mantissa (and 9 bits of exponent), which is |
|
|
|
|
good enough until at least the year 4000 with millisecond accuracy
|
|
|
|
(the design goal for libev). This requirement is overfulfilled by |
|
|
|
|
implementations using \s-1IEEE\s0 754, which is basically all existing ones. With |
|
|
|
|
\&\s-1IEEE\s0 754 doubles, you get microsecond accuracy until at least 2200. |
|
|
|
|
implementations using \s-1IEEE\s0 754, which is basically all existing ones. |
|
|
|
|
.Sp |
|
|
|
|
With \s-1IEEE\s0 754 doubles, you get microsecond accuracy until at least the |
|
|
|
|
year 2255 (and millisecond accuracy till the year 287396 \- by then, libev |
|
|
|
|
is either obsolete or somebody patched it to use \f(CW\*(C`long double\*(C'\fR or |
|
|
|
|
something like that, just kidding). |
|
|
|
|
.PP |
|
|
|
|
If you know of other additional requirements drop me a note. |
|
|
|
|
.SH "ALGORITHMIC COMPLEXITIES" |
|
|
|
@ -5191,8 +5395,9 @@ watchers becomes O(1) with respect to priority handling.
|
|
|
|
|
.IX Item "Processing signals: O(max_signal_number)" |
|
|
|
|
.PD |
|
|
|
|
Sending involves a system call \fIiff\fR there were no other \f(CW\*(C`ev_async_send\*(C'\fR |
|
|
|
|
calls in the current loop iteration. Checking for async and signal events |
|
|
|
|
involves iterating over all running async watchers or all signal numbers. |
|
|
|
|
calls in the current loop iteration and the loop is currently |
|
|
|
|
blocked. Checking for async and signal events involves iterating over all |
|
|
|
|
running async watchers or all signal numbers. |
|
|
|
|
.SH "PORTING FROM LIBEV 3.X TO 4.X" |
|
|
|
|
.IX Header "PORTING FROM LIBEV 3.X TO 4.X" |
|
|
|
|
The major version 4 introduced some incompatible changes to the \s-1API\s0. |
|
|
|
|