The defense tracks the circuit failure rate for each guard over the
past N circuits. A failure is defined as completing the first hop but
failing to complete the circuit all the way to the exit.
If the failure rate exceeds a certain amount, a notice is emitted.
If it exceeds a greater amount, a warn is emitted and the guard is disabled.
These values are governed by consensus parameters which we intend to tune as
we perform experiments and statistical simulations.
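Roughly, the check looks like this (every name and threshold below is an
illustrative placeholder, not an actual Tor identifier or consensus
parameter):

    #include <stdio.h>

    /* Per-guard counters; names are placeholders. */
    typedef struct {
      unsigned first_hops;  /* circuits that completed the first hop */
      unsigned failures;    /* of those, circuits that never reached the exit */
      int disabled;
    } guard_stats_t;

    /* These would come from consensus parameters and are subject to tuning. */
    static const double NOTICE_RATE = 0.40;
    static const double DISABLE_RATE = 0.70;
    static const unsigned MIN_CIRCUITS = 20; /* don't judge on too few samples */

    static void
    check_guard_failure_rate(guard_stats_t *g, const char *name)
    {
      double rate;
      if (g->first_hops < MIN_CIRCUITS)
        return; /* not enough data yet */
      rate = (double)g->failures / g->first_hops;
      if (rate >= DISABLE_RATE) {
        fprintf(stderr, "Warn: guard %s failed %.0f%% of circuits; disabling.\n",
                name, rate * 100);
        g->disabled = 1;
      } else if (rate >= NOTICE_RATE) {
        fprintf(stderr, "Notice: guard %s failed %.0f%% of circuits.\n",
                name, rate * 100);
      }
    }
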
The warning message in validate_pluggable_transports_config() is
superseded by the updated warning message in connection_or_connect()
that is emitted when the proxy credentials can't be found.
There is a bug causing busy loops in Libevent and infinite loops in
the Shadow simulator. The bug is triggered by a connection that is
marked for close, still has data to flush, is held open so that it can
flush, but is rate-limited (its token bucket is empty).
This commit fixes the bug. Details are below.
This currently happens on read and write callbacks when the active
socket is marked for close. In this case, Tor doesn't actually try
to complete the read or write (it returns from those methods when
marked), but instead tries to clear the connection with
conn_close_if_marked(). Tor will not close a marked connection that
contains data: it must be flushed first. The bug occurs when this
flush operation on the marked connection cannot happen because the
connection is rate-limited (its write token bucket is empty).
The fix is to detect when rate limiting is preventing a marked
connection from properly flushing. In this case, it should be
flagged as read/write_blocked_on_bandwidth and the read/write events
de-registered from Libevent. When the token bucket gets refilled, the
refill code checks the associated read/write_blocked_on_bandwidth flag
and adds the read/write event back to Libevent, which causes it to
fire. This time, the connection is properly flushed and closed.
The reason that both the read and write events are de-registered when
the marked connection cannot flush is that both result in
the same behavior. Both read/write events on marked connections will
never again do any actual reads/writes, and are only useful to
trigger the flush and close the connection. By setting the
associated read/write_blocked_on_bandwidth flag, we ensure that the
event will get added back to Libevent, properly flushed, and closed.
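Roughly, the shape of the fix (the struct fields and helpers below are
simplified stand-ins, not the exact identifiers in the code):

    #include <stddef.h>

    /* Stand-in for the connection state we care about here. */
    typedef struct connection_t {
      int marked_for_close;
      size_t outbuf_len;                 /* bytes still waiting to be flushed */
      int write_bucket;                  /* rate-limiting token bucket */
      unsigned read_blocked_on_bw : 1;
      unsigned write_blocked_on_bw : 1;
    } connection_t;

    /* Stand-ins for removing/re-adding the libevent read/write events
     * (event_del()/event_add() underneath). */
    static void stop_reading(connection_t *c)  { (void)c; }
    static void stop_writing(connection_t *c)  { (void)c; }
    static void start_reading(connection_t *c) { (void)c; }
    static void start_writing(connection_t *c) { (void)c; }

    /* Called from the read/write callbacks when the connection is marked. */
    static void
    close_if_marked(connection_t *conn)
    {
      if (!conn->marked_for_close)
        return;
      if (conn->outbuf_len > 0 && conn->write_bucket <= 0) {
        /* Data to flush, but no tokens to flush it with.  Without this
         * branch the event keeps firing and we keep doing nothing: a busy
         * loop under libevent, an infinite loop under Shadow. */
        conn->read_blocked_on_bw = 1;
        conn->write_blocked_on_bw = 1;
        stop_reading(conn);
        stop_writing(conn);
        return;                          /* retry after the next refill */
      }
      /* ... otherwise flush what the bucket allows and really close ... */
    }

    /* Called whenever the token buckets get refilled. */
    static void
    bucket_refilled(connection_t *conn)
    {
      if (conn->read_blocked_on_bw) {
        conn->read_blocked_on_bw = 0;
        start_reading(conn);   /* the event fires again; the flush proceeds */
      }
      if (conn->write_blocked_on_bw) {
        conn->write_blocked_on_bw = 0;
        start_writing(conn);
      }
    }
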
Why is this important? Every Shadow event occurs at a discrete time
instant. If Tor does not properly deregister Libevent events that
fire but result in Tor essentially doing nothing, Libevent will
repeatedly fire the event. In Shadow this means an infinite loop;
outside of Shadow it means wasted CPU cycles.
From what I can tell, this configuration is usually a mistake, and
leads people to think that all their traffic is getting proxied when
in fact practically none of it is. Resolves the issue behind "bug"
4663.
The function is not guaranteed to NUL-terminate its output. It
*is*, however, guaranteed not to generate more than two bytes per
multibyte character (plus terminating nul), so the general approach
I'm taking is to try to allocate enough space, AND to manually add a
NUL at the end of each buffer just in case I screwed up the "enough
space" thing.
Fixes bug 5909.
This feature can make Tor relays less identifiable by their use of the
mod_ssl DH group, but at the cost of some usability (#4721) and bridge
tracing (#6087) regressions.
We should try to turn this on by default again if we find that the
mod_ssl group is uncommon and/or we move to a different DH group size
(see #6088). Before we can do so, we need a fix for bugs #6087 and
#4721.
Resolves ticket #5598 for now.
These stats are currently discarded, but we might as well
hard-disable them on bridges, to be clean.
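Conceptually, the hard-disable amounts to something like this (the option
names here are just examples; the exact set of stats options touched may
differ):

    /* Illustrative sketch only: force stats collection off when we are a
     * bridge, since the results would be discarded anyway. */
    typedef struct {
      int BridgeRelay;          /* nonzero if we are configured as a bridge */
      int DirReqStatistics;     /* example statistics option */
    } options_t;

    static void
    disable_stats_on_bridges(options_t *options)
    {
      if (options->BridgeRelay && options->DirReqStatistics)
        options->DirReqStatistics = 0;
    }
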
Fix for bug 5824; bugfix on 0.2.1.17-rc.
Patch originally by Karsten Loesing.
Also, try to resolve some doxygen issues. First, define a magic
"This is doxygen!" macro so that we take the correct branch in
various #if/#else/#endifs in order to get the right documentation.
Second, add a few grouping @{ and @} entries so that some related
variables and fields get grouped together.
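For illustration (the macro name below is a placeholder; the real one is
whatever the Doxyfile's PREDEFINED list supplies):

    /* THIS_IS_DOXYGEN is a made-up name: it is never defined by the
     * compiler, only "predefined" for doxygen, so doxygen documents the
     * branch we want it to see. */
    #if defined(THIS_IS_DOXYGEN) || defined(HAVE_FANCY_FEATURE)
    /** Real, documented declaration. */
    int fancy_feature(int arg);
    #else
    #define fancy_feature(arg) (0)
    #endif

    /** @name Example of grouped fields
     *  @{ */
    extern int n_reads;   /**< Total reads so far. */
    extern int n_writes;  /**< Total writes so far. */
    /** @} */
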
This code shouldn't have any effect in 0.2.3, since we already accept
(and handle) data received while we are expecting a renegotiation.
(That's because the 0.2.3.x handshake _does_ have data there instead of
the renegotiation.)
I'm leaving it in anyway, since if it breaks anything, we'll want it
broken in master too so we can find out about it. I added an XXX023
comment so that we can come back later and fix that.